Prophylactic Administration of Alpha Blocker for the Prevention of Urinary Retention in Males Undergoing Inguinal Hernia Repair Under Spinal Anesthesia: Interim Analysis of a Randomized Controlled Trial

Introduction: This randomized controlled study aims to investigate the prophylactic effect of tamsulosin on the development of postoperative urinary retention (POUR) in men undergoing elective open inguinal hernia (IH) repair under spinal anesthesia. The study also focused on potentially predisposing factors for POUR.
Methods: 100 eligible patients were randomized into two groups. Patients in the experimental group were given two doses of tamsulosin 0.4 mg orally 24 hours and 6 hours before surgery. In the control group, two doses of placebo were administered in the same manner as in the study group. The following parameters were also recorded: the International Prostate Symptom Score (IPSS) questionnaire scores, the presence of scrotal hernia, operation duration, perioperative administration of IV opioids and/or atropine, postoperative pain, and preoperative anxiety.
Results: Overall, the incidence of POUR was 37% (37/100), with no difference between the two groups. Among patients receiving tamsulosin, 39.2% (20/51) developed POUR, compared to 34.7% (17/49) in the control group. A high preoperative anxiety visual analog scale (VAS) score (>51 mm) (P=0.007) and the intraoperative use of atropine (P=0.02) were detected as risk factors for POUR.
Conclusion: This interim analysis of our prospective randomized trial showed no benefit from the prophylactic use of tamsulosin in preventing POUR after IH repair under spinal anesthesia. This type of anesthesia was also correlated with an overall high incidence of POUR. Preoperative anxiety and administration of atropine were identified as statistically significant factors for POUR. In patients with a high preoperative anxiety VAS score, a different type of anesthesia may be considered.

Introduction
Inguinal hernia (IH) is the most common abdominal wall defect, with a lifetime risk of 27% for men and 3% for women. Surgical repair is the treatment of choice, and it is estimated that more than 20 million IH repairs are performed every year worldwide [1,2]. Moreover, a positive correlation between IH incidence in men and age is noted, the former reaching almost 200/10,000 person-years for patients aged 75 years and over [2]. Postoperative urinary retention (POUR) is a frequent adverse event after both emergency and elective procedures, with an incidence ranging from 5% to 70% [3]. POUR is generally defined as the postoperative inability to pass urine, but definitions still vary widely. Regarding IH, POUR results in prolonged hospitalization and reduced patient satisfaction. Among the various risk factors that have been proposed are male gender, advanced age, a history of benign prostatic hypertrophy (BPH), and spinal anesthesia [3]. The latter has also been confirmed in a previous study from our group, in which 32% of patients under spinal anesthesia developed POUR [4]. Despite this, spinal anesthesia still remains an attractive option for IH repair [5], since regional anesthesia is associated with favorable results in terms of hypotension, postoperative nausea, vomiting, and pain [3,6]. In order to reduce the incidence of POUR after IH repair, several authors have proposed the prophylactic use of alpha-receptor antagonists [7-10].
However, there is still no consensus on whether prophylactic alpha-blocker administration can reduce rates of POUR in adult males. The aim of this double-blinded, randomized controlled study was to investigate the prophylactic effect of tamsulosin, a selective alpha-1 adrenergic blocking agent, on the development of POUR in men undergoing elective open IH repair under spinal anesthesia.

Materials And Methods
The study was approved by the Hospital Ethics Committee and all participants provided written informed consent. The trial protocol was registered in ClinicalTrials.gov (NCT03976934). Since September 2019, all male patients aged 50 years and older referred to the Outpatient Clinic of our Surgical Department for elective unilateral IH repair were evaluated for eligibility. The following exclusion criteria were applied: 1) American Society of Anesthesiologists (ASA) score >3, 2) female patients, 3) history of orthostatic hypotension, 4) prostatic hypertrophy, 5) neurological diseases, 6) previous lower urinary tract operations, 7) complicated IHs, 8) administration of general or local anesthesia, and 9) contraindication to tamsulosin administration. All eligible patients were admitted one day prior to the operation and were randomized into two groups. Patients in the experimental group were given two doses of tamsulosin 0.4 mg orally 24 hours and 6 hours before surgery. In the control group, two doses of placebo were administered in the same manner as in the study group. Randomization was based on a computer-generated table of random numbers. Opaque, sealed envelopes, numbered for each subject, were used and opened upon the arrival of the patient at the surgical clinic. All hernia repairs were done in a tension-free manner, with plug and/or mesh placement, under spinal anesthesia. Postoperative management was standardized for all patients and included paracetamol 1 g every 8 hours, low molecular weight heparin, and omeprazole. The patients were encouraged to mobilize. Per os feeding was allowed in the absence of nausea and vomiting. The primary endpoint of our study was the difference between the experimental and the control group in terms of POUR. POUR was defined as the inability to void 8 hours postoperatively. The following parameters were also recorded: the International Prostate Symptom Score (IPSS) questionnaire scores, the presence of scrotal hernia, operation duration, perioperative administration of IV opioids and/or atropine, postoperative pain, and preoperative anxiety. Pain assessment was based on the visual analog scale (VAS) score at 6, 12, and 24 hours after the operation (VAS scale from 0 to 10, with 0 indicating no pain and 10 maximum pain). Preoperative anxiety was quantified by the anxiety VAS (A-VAS: 0-100 mm) score. A-VAS scores were grouped into two subgroups (low A-VAS: 0-50 mm and high A-VAS: 51-100 mm), based on the respective literature reports [11].

Statistical analysis
Prior to any statistical analyses, all data underwent a Shapiro-Wilk normality test. For variables where normality was confirmed, a parametric approach was applied; otherwise, a non-parametric analysis was implemented. The independent samples t-test and the Mann-Whitney U test were used for the comparison of normally and non-normally distributed continuous variables, respectively. The Pearson chi-square test was used for categorical variables. The relationship between two continuous variables was assessed with regression analysis.
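To make the normality-driven choice of test concrete, the following is a minimal Python sketch of the univariate comparisons described above; it is not the authors' analysis (which was performed in SPSS), and the DataFrame layout and column names (group, age, pour) are hypothetical.

```python
import pandas as pd
from scipy import stats

def compare_continuous(df: pd.DataFrame, column: str, group_col: str = "group"):
    """Compare a continuous variable between the two trial arms, choosing the
    test according to a Shapiro-Wilk normality check, as described above."""
    g1 = df.loc[df[group_col] == "tamsulosin", column].dropna()
    g2 = df.loc[df[group_col] == "placebo", column].dropna()
    normal = (stats.shapiro(g1)[1] > 0.05) and (stats.shapiro(g2)[1] > 0.05)
    if normal:
        stat, p = stats.ttest_ind(g1, g2)          # independent samples t-test
        test = "independent t-test"
    else:
        stat, p = stats.mannwhitneyu(g1, g2, alternative="two-sided")
        test = "Mann-Whitney U"
    return test, stat, p

def compare_categorical(df: pd.DataFrame, column: str, group_col: str = "group"):
    """Pearson chi-square test on a contingency table (e.g. POUR yes/no by arm)."""
    table = pd.crosstab(df[group_col], df[column])
    chi2, p, dof, _ = stats.chi2_contingency(table)
    return chi2, p

# Hypothetical usage:
# df = pd.read_csv("pour_trial.csv")
# print(compare_continuous(df, "age"))
# print(compare_categorical(df, "pour"))
```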
To further confirm the factors associated with the abovementioned study outcomes, a logistic regression model was used. The effect estimates of these analyses were displayed with the corresponding odds ratios (ORs) and 95% confidence intervals (CIs). Based on the normality test results, continuous data were reported as mean (standard deviation) or median (interquartile range, IQR). Categorical variables were reported as N (percentage). Statistical significance was considered at the level of P<0.05. All analyses were performed in SPSS Statistics v.22 software (SPSS Inc., Chicago, IL, USA). Sample size analysis indicated a total sample size of 196 patients (98 per group) to detect a 50% decrease in the POUR rate (baseline rate 32%) when an alpha-1 blocker was administered. An interim analysis was planned after completion of the first half of the patients, and its results are presented and discussed herein.

Results
Between September 2019 and June 2021, 100 patients were randomized to either the tamsulosin group (group 1, n: 51) or the control group (group 2, n: 49). The mean age was 63.54 years. In total, 73 indirect, 25 direct, and two combined hernias were included. No statistically significant differences in terms of baseline demographics were found (Table 1). In 75 patients, a mesh and plug combination was used, whereas a mesh-only or a plug-only approach was applied in 18 and seven patients, respectively. Operation duration was comparable between the two groups. Overall, the incidence of POUR was 37% (37/100), with no difference between the two groups. Among patients receiving tamsulosin, 39.2% (20/51) developed POUR, compared to 34.7% (17/49) in the control group. Overall, eight patients had IPSS scores >15. Bladder catheterization was applied in all POUR cases according to the study protocol, followed by an attempt at removal the next morning. Catheter removal was successful in less than 24 hours in 34 patients (17 patients in each group), while in one patient the catheter was removed on the second postoperative day. Two patients required prolonged catheterization. No complications or side effects of therapy were encountered during the treatment with tamsulosin or placebo.

Discussion
Urinary retention is a common complication after any surgical procedure and especially after IH repair [12]. Although POUR is considered a minor complication, it is painful and often requires catheterization for relief, which can cause urethral trauma or catheter-related infections; it also delays discharge and increases costs [13]. The incidence of POUR in male patients undergoing IH repair varies widely in published series, ranging from less than 1% to greater than 34%, which can be attributed to many factors [14]. The underlying physiological mechanism leading to POUR relates to α-adrenergic overstimulation following IH repair. Sympathetic nerve activity during the perioperative period leads to catecholamine release and α-adrenergic stimulation of bladder neck muscles, preventing bladder emptying [14]. The lower urinary tract is innervated mainly by the sympathetic and parasympathetic divisions of the autonomic nervous system and by somatic fibers of the pudendal nerve. The parasympathetic efferent nerves, acting via muscarinic receptors in the bladder smooth muscle, excite the bladder and relax the urethra, while the sympathetic efferent nerves inhibit the bladder body and excite the bladder base and urethra.
The smooth muscle of the bladder is also rich in β-receptors, which initiate relaxation when stimulated by norepinephrine or epinephrine, while the bladder neck and urethra contain mainly α-receptors, which initiate contraction when stimulated by norepinephrine [15]. Suggested factors that may interfere with the voiding reflex after IH repair include perioperative fluid management, the type of anesthesia, the use of narcotic analgesia, increased outlet resistance, postoperative pain, and patient age and sex [12]. Excessive perioperative fluid intake can lead to bladder overdistention, which increases the risk of POUR [12]. The type of anesthesia can also affect the incidence of POUR, particularly when general or regional anesthesia is used. General anesthetics may cause bladder atony, while regional anesthesia may interrupt the micturition reflex, leading to detrusor blockage [13]. On the other hand, outlet closure is achieved by increasing α-receptor-mediated tone in the bladder outlet [16]. Certain sympathomimetic and anticholinergic drugs, such as phenylephrine and atropine, inhibit bladder tone during surgery, leading to a distended bladder with a decreased urge to void. In the postoperative period, pain in the groin area can also stimulate α-adrenoreceptors in the prostate and proximal urethra, causing increased urethral and bladder resistance, which can lead to retention [17]. Furthermore, it is generally accepted that the incidence of POUR increases in males with age, and one of the most likely causes of this is BPH. One hypothesis is that urinary retention occurring in the presence of BPH following IH repair is due to adrenergic overstimulation of the smooth muscle in the bladder neck and prostate, which are rich in α-adrenergic receptors [3]. The rationale for pharmacologic prevention of POUR is based on increasing detrusor contractility or relaxing the proximal urethra. Alpha-adrenergic blockers act by reducing the tone in the bladder outlet and thus decrease outflow resistance and facilitate micturition. Prophylactic administration of these drugs has been shown to be effective in preventing POUR after IH repair, and the latest international guidelines for groin hernia management state that prazosin, phenoxybenzamine hydrochloride, or tamsulosin may be effective in preventing urinary retention [18]. In six published prospective studies comparing POUR rates after elective unilateral IH repair, 625 patients received prophylactic alpha-blockers vs placebo or no treatment. All studies included males only, and four of the six studies included only those over the age of 50 [7-9,19], while one study included males between the ages of 20 and 70 [10] and one study included males 18 years of age or older [20]. The prophylactic alpha-blocker was tamsulosin in three studies [7,8,20], prazosin in two studies [10,19], and phenoxybenzamine in one study [9]. Treatment regimens varied in dosage timing, and in one study no placebo was used in the control group [9]. The type of anesthesia was either spinal or general anesthesia [7,9,19], general anesthesia only [10,20], or not specified [8]. The method of IH repair was described as open in only two studies [7,10]. In five studies, group comparability was ensured by assessing preoperative urinary function with a number of internationally recognized assessment scores and tools [7-10,19].
In four studies, there was a statistically significant reduction in POUR rates in the groups receiving an alpha-blocker compared to placebo [7-10], while two studies found no improvement in retention rates [19,20]. In the present study, we investigated the prophylactic effect of tamsulosin, a selective alpha-1a adrenergic blocking agent, on the development of POUR in men ≥50 years old undergoing elective open IH repair. Tamsulosin was chosen because it is inexpensive, easy to administer, has a low adverse-effect profile, and reaches peak serum levels 4 hours after administration. Since there are limited data in the literature regarding the timing of tamsulosin administration for preventing POUR after IH repair, we decided to administer the drug 24 hours and 6 hours before surgery, similar to the Mohammadi-Fallah et al. study [7]. This approach was practical, ensured two doses of tamsulosin before surgery with a sufficient interval between the doses, and allowed monitoring of patients for the development of any adverse events before surgery. Our results showed no difference in the rates of POUR between the tamsulosin group and the control group, which is not in accordance with the results of other studies that used the same drug [7,8]. Possible reasons for this could be that we focused only on patients receiving spinal anesthesia, that the time frame for the development of urinary retention was shorter (8 hours), and that we used different tamsulosin dosing times than the other two studies. On the other hand, Caparelli et al. also used tamsulosin in their study and, similar to us, found no improvement in POUR rates between the placebo group and the tamsulosin group. However, in their study, only patients with laparoscopic IH repair were included [20]. The short time frame for the diagnosis of POUR (only 8 hours) in our study can also explain the overall high incidence of urinary retention (37%). Another reason could be the use of spinal anesthesia, which predisposes to higher rates of POUR after IH repair [4,13], and the fact that in nearly half of the patients (46%) opioids were used for the spinal anesthesia [21]. Regarding the predisposing factors for POUR, only preoperative anxiety related to the surgical procedure and the intraoperative use of atropine were statistically significant. The importance of the A-VAS score is that it can be easily measured and can identify patients at higher risk for POUR. These patients might need a different approach, such as a thorough explanation of their surgery or an alternative type of anesthesia. Our study has limitations. This is an interim analysis of a single-center study, with a small number of patients included. We also used VAS scores for assessing preoperative anxiety and postoperative pain; although the A-VAS and P-VAS have high sensitivity and specificity, they remain subjective methods.

Conclusions
In conclusion, this interim analysis of our prospective randomized trial showed no benefit from the prophylactic use of tamsulosin in preventing POUR after IH repair under spinal anesthesia. This type of anesthesia was also correlated with an overall high incidence of POUR. The study also focused on potentially predisposing factors for POUR. Among the measured factors, only preoperative anxiety and the intraoperative use of atropine were identified as statistically significant. In patients with a high preoperative anxiety VAS score, a different type of anesthesia may be considered.
The Contribution of Lube Additives to the Life Cycle Impacts of Fully Formulated Petroleum-Based Lubricants

Problem statement: Previous applications of the Life Cycle Assessment (LCA) methodology to lubricants are not sufficiently detailed and comprehensive for R and D purposes, and there are no LCAs of lube additives and fully formulated lubricants. The aim of this study is to integrate and expand previous LCAs of lubricants and to investigate the contribution of lube additives to the environmental impacts of fully formulated lubricants.
Approach: This study considers three base oils (mineral, poly-alpha olefins and hydrocracked) and a set of lubricating additives typically used in fully formulated engine oil. The LCA model is based on both industry and literature data.
Results: The contribution of additives to the life cycle impacts of commercial lube oil was found to be remarkably high, particularly for land occupation and metal depletion (more than 50%) and for climate change (30%). Trends in the lubricants industry towards more sophisticated base oils correspond to remarkably higher environmental impacts per kg of product, but are likely to lead to reduced impacts per km.
Conclusion: While the application of LCA to lubricants can be considered fully operational for general purposes outside the lubricants industry, this is not the case for R and D purposes within the industry. Additives should not be excluded from LCAs of modern lubricants, as their contribution in terms of environmental impact can be considerably high. As far as base oil is concerned, this study assessed data availability and provided a contribution towards integrating and expanding previous LCAs of fully formulated lube oils.

INTRODUCTION
Oil refineries produce, by distillation and processing of crude oil, a large series of petroleum products and by-products that are commonly used as fuels or as feedstock for petroleum and petrochemical products. The importance of the sector is crucial, considering that in May 2010 there were 104 refineries operating in the European Union with a refining capacity of 778 million tonnes per year (ECDGE, 2010a). Lubricants are an important family among products of the refining industry, and they are widely used to reduce friction between moving components and in modern engines. Beyond the typical applications in internal combustion engines, vehicles and industrial gearboxes, a large variety of specifically tailored products has been introduced over time, so that today there are nearly unlimited applications for lubricants. World demand in 2010 was 34.5 million tonnes, divided into 56% automotive lubricants, 26% industrial lubricants, 8% greases and 10% process oils. It is estimated that 5000-10000 different lubricants are necessary to satisfy more than 90% of all applications (Mang and Dresel, 2007). Fully formulated lubricants consist of one or more base oils blended with additives, which are used to enhance the performance of lubrication and mitigate drawbacks such as corrosion and wear (Mortier et al., 2010). Base oils are produced from crude oil refining and can be divided into two main categories (mineral and synthetic), while additives result from chemical processes and include several categories with different effects on lubricant performance.
Due to the technological and economic relevance of the petrochemical sector in modern industry, the environmental impacts of petroleum products, and thus lubricants, have become an increasing matter of concern. Consequently, oil companies have increased their investments in cleaner technologies and more environmentally friendly products (Bevilacqua and Braglia, 2002). In this context, Life Cycle Assessment (LCA) has been used to investigate lubricants with the following aims: evaluating the environmental impacts of lube products and understanding the role of lubricants in the LCAs of other products and services. An analysis of databases such as the Boustead Model (2005), ELCD (EC, 2010a) and Ecoinvent (2007a) shows that Life Cycle Inventory (LCI) data on lube products are included in several processes in which lubricants are directly or indirectly involved. LCAs intended as a tool to investigate, and possibly reduce, the environmental impacts of lubricants are quite limited, and there are few scientific papers focused on this topic (Serra-Holm, 2004; Wang et al., 2004; Vag et al., 2002). More recently, LCA was used to compare bio-lubricants against generic (or not well defined) mineral base oils (Ekman and Borjesson, 2011; Miller et al., 2007; Subramaniam et al., 2008; Vijaya et al., 2009). Furthermore, all the above references only evaluate the environmental burdens associated with the base oil fraction, while additives are always excluded. This choice is justified either by the low quantity of additives in fully formulated lubricants (Ecoinvent, 2007a; Ekman and Borjesson, 2011) or, in comparative LCAs, by the assumption that quantities are similar in all products (Miller et al., 2007). Nevertheless, authors such as Ekman and Borjesson (2011) admit that in applications where the quantity of additives can be up to 30% of the lubricant composition, and for indicators related to human toxicity and/or ecotoxicity, the environmental consequences could be remarkable, and they recommend including additives in future LCAs. As far as lubricants are concerned, the state of the art suggests that LCA can be considered fully (or almost) operational for general purposes outside the lubricants industry, where LCIs of mineral and synthetic base oils can be used interchangeably and where additives can be excluded. On the contrary, given that in recent years there has been a clear shift towards more sophisticated base oils, which are likely to correspond to higher environmental impacts per kg, and a tendency to make extensive use of conventional and innovative additives in several applications, it clearly emerges that LCA is far from being operational within the lubricants industry. This lack of detailed, updated and reliable LCIs is the situation faced by the partners of the EU-FP7 research project AddNano (http://sites.google.com/site/addnanoeu/), an 11-million-euro project that involves world-leading companies in the lubricating sector and aims to develop and scale up innovative fully formulated lubricating oils incorporating nano-particles. Advanced nanomaterials, presently under study, have shown initial promise for reducing friction and enhancing protection against wear (Feldman et al., 1996; 2000). With a focus on engine oil (crankcase) applications, among other technological goals, the AddNano project is using LCA to evaluate the effects of nano-particles on the environmental performance of lubricants.
Given that the new nano-components are intended to be used as substitutes for, or in mixtures with, conventional additives, and that several base oils are to be tested, it clearly emerged that background LCI data were not sufficient and that an additional effort was required from the project partners in order to reasonably expand and complement the background dataset. This appeared to be a situation where the direct involvement of important industrial partners could partially fill the data gap related to the LCA of additives. Such a data gap can partially be explained by the enormous variety of additives presently available and by the fact that additive producers are extremely conservative and seldom willing to supply data and information that they consider strictly confidential. Keeping in mind all the above considerations, this study presents a from-cradle-to-gate LCA of fully formulated lube oil, including both base oil and additives. As the main element of novelty, a simplified LCA of the most common lube additives is carried out, using data from both the industry and the literature. This is intended to appraise the contribution of additives to the environmental impacts of fully formulated lubricants. Although simplified, this LCA has to be considered a first step towards a process of co-operation with additive producers, in order to better understand and improve the environmental performance of lube products. As a preparatory step for the LCA of fully formulated lube oil, a critical review of literature LCAs of base oils is carried out, and updated eco-profiles of mineral base oil and poly-alpha olefins (PAO) base oil are compared with hydrocracked base oil from Ecoinvent (2007a).

MATERIALS AND METHODS
This study considers three base oils (mineral, poly-alpha olefins and hydrocracked) and a set of additives typically used in an engine lube oil. The average composition of a fully formulated engine oil, assumed as the reference during the analysis, is reported in Table 1. Mineral base oils are produced by refining the residual fraction of crude oil, while synthetic base oils are usually prepared through the reaction of chemical compounds, which are often petroleum-derived (Mang and Dresel, 2007). Lubricating additives today constitute an important fraction of a fully formulated oil and are necessary to meet the stringent requirements of modern engines, enhancing the performance characteristics of lube oils as well as enlarging and stabilising their range of operability under severe conditions of ageing and temperature (Mang and Dresel, 2007). Some additives only affect one of the lubricating properties, while others may have multiple effects (e.g., zinc dithiophosphates). For most lube products, additive components are mixed together in additive packages and blended with one or more base oils. The additive categories considered in this study are detergents, dispersants, viscosity modifiers, antioxidants and antiwear additives (Table 1). From-cradle-to-gate environmental implications of fully formulated lubricants and their components were investigated using the LCA methodology according to the standard ISO 14040 (ISO 14040, 2006). The authors assume that the reader has access to the ISO standards (ISO 14040, 2006) and EU-JRC guidelines (EC, 2010b) on LCA of products, so that general information on the LCA methodology is not provided here. Only key methodological assumptions are therefore presented in the next paragraphs.
Data sources: A critical review of the literature references relevant to the LCA of base oils has been carried out and summarised in Table 2. Among the main data sources for base oil LCA are databases such as the Boustead Model (2005), Ecoinvent (2007a) and the European Reference Life Cycle Database-ELCD (EC, 2010b), a technical report issued by the lubricants industry (Fehrenbach, 2005), and the Reference Document on Best Available Techniques for Mineral Oil and Gas Refineries-BREF (European IPPC Bureau, 2003). The LCA models of mineral base oil and synthetic poly-alpha olefins (PAO) were developed on the basis of data retrieved from the literature (Fehrenbach, 2005; European IPPC Bureau, 2003; ECDGE, 2010b) and personal communication from the AddNano partners (PETRONAS). The hydrocracked base oil included in Ecoinvent (2007a) was used as a term of comparison. As stated above, all the cited studies exclusively consider the production process of the base oil in a life cycle perspective, while additives are not taken into account. The simplified LCA of additives was developed using personal communication from the AddNano partners (PETRONAS and Infineum) and specific literature information that will be discussed in paragraph 3.3.

System boundaries, functional unit and allocation criteria: The LCA models proposed here cover the phases of extraction, transportation and production up to the exit of the refinery/factory, for both base oils and additives, in a from-cradle-to-gate perspective. The functional unit is 1 kg of final product. The production process has been divided, where possible, into sub-units (Fig. 1-2). In each sub-unit, raw materials and energy consumption are considered as inputs, and products and co-products as outputs, which also serve as inputs to the subsequent process. The analysis is based on European average data, representative of the European refining industry. The fact that refineries are highly integrated, multiple-output production plants creates the need to define an allocation criterion to distribute the environmental burdens to each product. The allocation can be done considering different parameters: mass, energy content, or market price (Wang et al., 2004; Ekman and Borjesson, 2011). In this study, the allocation criterion is mass. This was considered the most appropriate in the context of the AddNano project, which is more focused on additives and fully formulated lubricants than on base oils. The consequences of a different allocation choice are discussed in Ekman and Borjesson (2011) and Wang et al. (2004).

Selection of environmental impact indicators: The Life Cycle Impact Assessment (LCIA) method used to present the results of this study is ReCiPe (http://www.lcia-recipe.net). This LCIA method (Goedkoop et al., 2009) is composed of 18 midpoint indicators.

Systems description: A brief description of the life cycle models of base oils (mineral and synthetic) is provided first, while the lubricating additive categories are described immediately after. Mineral base oil is produced from crude oil through several processes of distillation and refining. Detailed process data were retrieved from the IFEU Report (Fehrenbach, 2005). Average values of energy and materials consumption are calculated on the basis of the BREF (European IPPC Bureau, 2003). A crude oil mix from different countries has been considered, based on data from the DG Energy of the EC (ECDGE, 2010a) on oil imports and deliveries.
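Before moving on to the production systems, here is a minimal Python sketch of the mass allocation criterion described above; it is not taken from the study (which used dedicated LCA databases and software), and the flow names and numerical values are hypothetical placeholders.

```python
def allocate_by_mass(refinery_burdens: dict, product_mass: float, total_mass: float) -> dict:
    """Allocate refinery-wide burdens to one co-product in proportion to its mass share."""
    share = product_mass / total_mass
    return {flow: value * share for flow, value in refinery_burdens.items()}

# Hypothetical refinery-wide burdens per 1 t of crude oil processed (placeholders).
refinery_burdens = {
    "CO2 to air (kg)": 300.0,
    "emissions to water (kg)": 1.5,
    "waste (kg)": 4.0,
}

# Example: a base-oil yield of roughly 11% by mass (cf. the ~112 kg of base oil
# per tonne of crude discussed below) means the base oil carries ~11% of the
# refinery-wide burdens under mass allocation.
base_oil_burdens = allocate_by_mass(refinery_burdens, product_mass=112.0, total_mass=1000.0)
print(base_oil_burdens)  # every flow scaled by 0.112
```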
The production flow chart considered in the study is shown in Fig. 1. The basic distillations (atmospheric and vacuum) are the first steps of the process, aimed at separating the base oil feedstock from the other petroleum products. Afterwards, the waxy distillate passes through specific refining stages that purify the base oil of unwanted components. Around 112 kg of base oil are produced from 1 t of crude oil. Airborne emissions are mainly a consequence of energy use. Water emissions and wastes were retrieved from the BREF (European IPPC Bureau, 2003), which reports data for the refinery as a whole. Quantities were allocated by mass, considering that 11.2% of the crude oil is converted to base oil. Synthetic lubricants can be obtained from many kinds of base oil stocks, such as poly-alpha olefins, alkylated aromatics, polybutenes, etc. In this study, Poly-Alpha Olefins (PAO) are considered as representative of synthetic bases. Data on energy and materials consumption and emissions were retrieved from the IFEU Report (Fehrenbach, 2005). The production chain has been modelled considering the processes and related intermediate products shown in Fig. 2. In particular, the feedstock input is composed of naphtha from crude oil distillation (around 60%) and gas condensate from natural gas processing; 10% of the feedstock is converted into PAO through three stages of synthesis (steam cracking, LAO synthesis and PAO synthesis).
Fig. 2: Production of PAO base oil
Lubricating additives: The additives industry is in continuous evolution, with large margins for improvement in the development of new products that can enhance lubricant performance and reduce environmental burdens. With a focus on automotive applications, the main objectives are to reduce fuel consumption and increase the lifetime of engines (Rudnick, 2009). Due to the absence of specific literature and the low availability of primary data, a simplified methodology has been adopted to carry out an LCA of lube additives in co-operation with the AddNano project partners. The following steps were undertaken:
• Identification of the additive categories typically used in an average engine lube oil. The chemical composition of a conventional engine oil, reported in Table 1, has been identified using personal communication from PETRONAS and literature data (Mang and Dresel, 2007; Rudnick, 2009)
• Selection of a representative additive for each category
• Identification of the correspondence between the selected additive and an industrial product available in the Ecoinvent database (Ecoinvent, 2007a): proxy product criterion
• When no straightforward correspondence between additives and Ecoinvent units could be found, a secondary criterion was adopted: the proxy synthesis process criterion. The correspondence with one or more Ecoinvent entries was sought through a simplified chemical composition and according to a proxy synthesis process
The main data and assumptions for the simplified LCA of additives are reported in Table 3. A short description of the additives and a few comments on their function in fully formulated lubricants are given in the following paragraphs.
Detergents: Detergents represent an important class of the so-called over-based additives, which are colloidal particles of calcium carbonate and hydroxide stabilized by a surfactant layer. They are formed by a long chain of oleophilic hydrocarbons and a polar hydrophilic head. The oleophilic hydrocarbon chain serves as a solubiliser in the base fluid, while the head attracts the contaminants within the lubricant.
The dispersants act by enveloping solid contaminants with their polar groups, preventing the adhesion of soot particles on metal surfaces. This process is generally known as peptization. The detergents are metal-containing, and the most widespread are the sulphonates, followed by phenates, salicylates and phosphonates (Rudnick, 2009; Hudson et al., 2006). The additive chosen to represent this category is alkylbenzenesulfonic acid. Alkylaromatic sulfonic acids are derived either from the sulfonation of alkylaromatics, such as alkylbenzenes and alkylnaphthalenes, or from petroleum refining. The steps involved in producing alkylbenzenesulfonic acids are shown in Fig. 3 (Rudnick, 2009).
Dispersants: The role of dispersants is to prevent the agglomeration of particles produced by oil degradation and wear of metallic parts (sludge) and to maintain them in suspension in the oil. Although their principle of operation is similar to that of detergents, they differ from detergents because they are by definition free of metals (Hui et al., 1997). The main class of dispersants is the polybutenes (Mang and Dresel, 2007; Mortier et al., 2010). Polyisobutenyl succinimide has been considered representative of the category. This dispersant is produced by the reaction of a Polyisobutenyl Succinic Anhydride (PIBSA) with either a polyamine or an alcohol, as shown in Fig. 4 (Rudnick, 2009). Considering that the oil-soluble fraction represents the largest part in terms of mass, Ecoinvent synthetic rubber has been chosen as the proxy product for alkylbenzenesulfonic acid. This is an ethylene-propylene-diene terpolymer (EPDM) rubber (Ecoinvent, 2007b).
Viscosity modifiers: Viscosity modifiers aim at optimising working efficiency by reducing the lubricant's change in viscosity when subjected to changes in temperature. They consist of high-molecular-weight polymers with a flexible molecular chain structure. Examples of viscosity modifiers are polymethacrylates (PMAs) and polyethylene-co-propylenes, the so-called olefin copolymers (OCPs) (Rudnick, 2009; Souza de Carvalho et al., 2010). Olefins have been chosen as the representative group for this study (Ecoinvent, 2007c).
Antioxidant: Antioxidants play the important role in lubricating oil of preventing ageing processes that can deteriorate the quality of lubrication. Aged lubricants are typically characterized by common features such as discoloration or a burnt odour (Rudnick, 2009). Methylene-bridged hindered phenolic antioxidants and alkylated diphenylamine antioxidants have demonstrated high performance for this purpose. These antioxidants are prepared by alkylation reactions, resulting in the formation of complex product mixtures (Greene and Gatto, 1999).
Fig. 5: ZDDP synthesis
The chemical substances chosen to represent antioxidants in the study are phenolic antioxidants and zinc dithiophosphates (ZDDPs). For ZDDP, due to its complex chemical composition, a direct correspondence with an industrial product was not found. Therefore, the proxy synthesis process criterion was adopted in order to identify suitable entries in the database. The chemical reaction considered for ZDDP synthesis is reported in Fig. 5 (Rudnick, 2009).
Antiwear: The wear between sliding surfaces is an inevitable drawback of machines during start-up, running-in, and transient operation. Antiwear additives in modern engines have to keep wear at acceptable levels. Zinc Dialkyl Dithiophosphates (ZDDPs) have been extensively used as antiwear additives since the 1940s.
They work on the principle of "boundary lubrication", protecting the moving parts against wear through the formation of tribochemical films on the surfaces in contact (Barnes et al., 2001; Lin and So, 2004; Varlot et al., 2001).

RESULTS
Base oils: Midpoint impact indicators for mineral base oil, PAO base oil and hydrocracked base oil are reported in Table 4.
Lube additives in fully formulated engine oil: As previously stated, fully formulated oils are composed of base oil and additives. Given the composition reported in Table 1, where the base oil is assumed to be mineral, the results of the life cycle analysis of fully formulated lube oil are reported in Fig. 6. The first column shows the composition by mass, so that it is possible to visually identify those components whose contribution to the overall environmental impacts is higher than their contribution by mass.

DISCUSSION
Base oils: PAO shows the highest impacts in most categories, with the exception of photochemical oxidant formation, freshwater eutrophication, freshwater and marine ecotoxicity, metal depletion and agricultural and urban land occupation, where the highest impacts are those of hydrocracked base oil. It can be observed that the greenhouse gas emissions of PAO are almost twice those of mineral base oil, due to the higher quantities of refinery gas burned for heat and, in general, to a more energy-consuming production process. These results show that trends in the lubricants industry towards more sophisticated base oils, produced by more complex and energy-consuming processes, correspond to remarkably higher environmental impacts associated with 1 kg of product. However, it has to be considered that modern lubricating oils remarkably increase the lifetime of engine oil and, consequently, the mileage that can be covered. In practical terms, the reduced number of oil changes corresponds to reduced impacts per km, i.e., a reduction in total impacts in a life cycle perspective that will also be investigated in the future stages of the AddNano project.
Lube additives in fully formulated engine oil: With reference to Fig. 6, it can be highlighted that the contribution of additives to the life cycle impacts of a commercial lube oil cannot be considered negligible, as it can reach up to 80% of the total impacts, while additives represent only 20% by mass. In particular, for agricultural land occupation and metal depletion the contribution of additives is more than 50%, and for human toxicity and ionising radiation it is more than 35%. Moreover, the contribution of additives to climate change is 30%. These findings confirm and expand the statement of Ekman and Borjesson (2011) according to which additives should be explicitly considered in LCAs of lubricants. Moreover, the percentage of additives is increasing over time and gaining importance in modern lubricants. Therefore, even if the contribution of lubricating additives to the environmental impacts could be considered negligible in previous LCAs, this simplification is no longer valid for modern lubricants. As a further result, it was highlighted that the relative contribution of the antiwear and antioxidant additives, which are likely to be substituted or modified by the new nano-based components under development within the AddNano project, is remarkable, thus further justifying the search for new environmentally friendly additives.
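To illustrate the kind of contribution analysis shown in Fig. 6, the following is a minimal Python sketch, not the study's actual ReCiPe calculation, that combines the mass fraction of each component of 1 kg of formulated oil with a per-kg impact factor and reports each component's share of the total; all mass fractions and impact factors are hypothetical placeholders, not values from Table 1 or the study's results.

```python
# Sketch of a component contribution analysis for 1 kg of fully formulated oil.
# All numbers are hypothetical placeholders.

mass_fractions = {            # kg of component per kg of formulated oil
    "mineral base oil": 0.80,
    "detergent": 0.05,
    "dispersant": 0.06,
    "viscosity modifier": 0.05,
    "antioxidant": 0.02,
    "antiwear (ZDDP)": 0.02,
}

impact_per_kg = {             # e.g. kg CO2-eq per kg of component (placeholders)
    "mineral base oil": 1.0,
    "detergent": 3.5,
    "dispersant": 3.0,
    "viscosity modifier": 2.5,
    "antioxidant": 4.0,
    "antiwear (ZDDP)": 5.0,
}

contributions = {c: mass_fractions[c] * impact_per_kg[c] for c in mass_fractions}
total = sum(contributions.values())

for component, value in contributions.items():
    print(f"{component:20s} {100 * value / total:5.1f}% of impact "
          f"({100 * mass_fractions[component]:3.0f}% by mass)")
```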
CONCLUSION
The results presented in this study confirmed that the application of LCA to lubricants can be considered fully operational for general purposes outside the lubricants industry, while for research and development purposes the LCA of lubricants is still far from being operational. More detailed and comprehensive LCAs of different base oils are necessary, and lube additives must be included in the LCA of modern fully formulated lubricants. This study assessed data availability and provided a contribution towards integrating and expanding previous LCAs of lubricants. On the side of additives and fully formulated lubricants, the main conclusion of this research is that in modern lubricants the contribution of additives in terms of environmental impact can be remarkably high and, therefore, they cannot be excluded. Although simplified, this LCA of additives could represent a first step in a desirable co-operation with the additives industry, which so far has kept information and data on processes and products strictly confidential. As far as the LCA of base oil is concerned, this study was useful in highlighting that lubricants based on modern synthetic base oils have higher impacts per kg in comparison to traditional mineral oils. However, because modern engines require lubricating oils that deliver higher performance, reducing friction and fuel consumption, this can lead to environmental benefits in a life cycle perspective. Synthetic oils offer a longer lifetime and require fewer oil changes, leading to a decrease in environmental impacts per distance covered. However, these overall environmental gains can be quantified only if specific and detailed inventory data are available. Moreover, it was highlighted that there is room for improvement in the production of additives and fully formulated lubricants through the deployment of new technologies such as those proposed in the AddNano project.
Unprecedented climate extremes in South Africa and implications for maize production

Maize is the most important crop grown in South Africa, but yields can be severely reduced by extremely high summer average temperatures and low precipitation, potentially adversely affecting both domestic consumption and the exports that underpin regional food security. To help understand and manage climate risks to food security in Southern Africa, it is essential to quantify the present-day likelihood and magnitude of climate extremes in South Africa's maize-growing region and explore the potential for unprecedented climate conditions which would likely result in record low maize yields. We analyse a large ensemble of initialised climate model simulations, which provides almost 100 times as many plausible present-day summers as the equivalent observational dataset. We quantify the risk of unprecedented climate extremes affecting maize production in South Africa and examine the role of the El Niño-Southern Oscillation. We find that the South African maize region is at risk of experiencing record-breaking hot, cold, dry or wet events under current climatic conditions. We find that the annual chance of unprecedented high temperatures in South Africa is approximately 4%, increasing to 62% during very strong El Niño years. We also find that the chance of exceeding the present-day seasonal high temperature record has increased across the 1979-2018 period, being five times more likely now than it was in 1980. These extreme events could result in a record-breaking number of days above the optimum, or even the maximum, temperature for maize production, and lead to more severe floods or droughts. Under climate change scenarios, the magnitude and frequency of climate extremes are projected to increase, meaning that the unprecedented extremes studied here could become commonplace in the future. This suggests that significant investment is needed to develop adaptations that manage the climate-related risks to food systems now and build resilience to the projected impacts of climate change.

Introduction
Maize is the most important staple crop grown in South Africa, accounting for 46% of the total crop area in 2020 (FAO 2022). South Africa was ranked 9th among the world's largest maize-producing countries in 2020 and, as the largest producer in Africa, is a crucial regional exporter often relied upon to achieve food security across Southern Africa. For example, in 2020 South Africa provided nearly all the maize imported by Botswana and Namibia (FAO 2022). However, due to heavy reliance on rainfed agriculture (van Niekerk et al 2018), maize harvests in South Africa can be severely reduced by extreme weather events such as heatwaves, droughts and floods. In 1992 and 2015/16, droughts destroyed maize crops across South Africa and the wider sub-Saharan region, necessitating substantial humanitarian assistance (Callihan and Eriksen 1994, Rembold et al 2016), while ongoing flooding in 2022 has destroyed 60% of planted maize (Coleman 2022). Limited access to technology and agrochemicals contributes to low maize yields compared to other countries, such as the USA and Argentina, making the food system more vulnerable to extreme climate events. Trade networks mean the impacts of these extreme events can be felt across domestic, regional, and global scales. South Africa has been warming at a rate of 0.2 °C per decade since 1961, which is slightly below the global average but equivalent to the rest of Africa (IPCC 2021).
Whilst South Africa has not yet seen major reductions in average yields associated with the observed warming trend, because of increases in production inputs over the same period (Akpalu et al 2011), evidence from other countries suggests that climate change will negatively impact maize yields (Pachauri et al 2014). Furthermore, rising temperatures and changing rainfall patterns are likely to increase the occurrence of climate extremes (including unprecedented events, i.e. magnitudes that have not been observed before) and further reduce maize yields (Müller et al 2011, Thornton et al 2011, Hoffman et al 2018, Mangani et al 2018, 2019, Chapman et al 2020). Maize in Southern Africa has therefore been identified as one of the most important crops requiring investment in adaptation options (Lobell et al 2008). The El Niño-Southern Oscillation (ENSO) mode of natural climate variability is associated with far-reaching global teleconnections affecting temperatures and precipitation (e.g. Davey et al 2014) that link strongly to agricultural production (e.g. Iizumi et al 2014). ENSO is in a positive (El Niño) phase when sea surface temperatures (SSTs) in the tropical eastern Pacific are anomalously warm, and in a negative (La Niña) phase when anomalously cool. Some of the largest climate extremes on record in South Africa are associated with ENSO variability; for example, two-thirds of the extreme hot temperature events in South Africa between 1970 and 2015 can be related to El Niño (Nangombe et al 2018), and La Niña events are associated with the development of tropical-temperate troughs which increase rainfall over South Africa (Cook 2001, Mulenga et al 2003). Climate models project increasingly extreme ENSO states in the future (Timmermann et al 1999, Yeh et al 2009, Cai et al 2014), even if global mean temperature is stabilised at 1.5 °C above preindustrial temperatures (Wang et al 2017). To assess the future risk of climate extremes to agriculture, we first need to understand the present-day risk due to natural climate variability. A key part of this assessment involves quantifying the risk of unprecedented climate extremes in the present day, i.e. events more extreme than any on record. This new information about the current risk of record-breaking extremes will provide a vital first step in guiding adaptations to future climate change. However, our understanding of extremes is limited by the short duration of the observational record. To better understand the present-day likelihood of climate extremes in South Africa and their large-scale drivers, we apply the UNprecedented Simulated Extremes using ENsembles (UNSEEN; Thompson et al 2017) approach to a large ensemble of high-resolution initialised climate simulations. The ensemble consists of nearly 100 times more plausible realisations of the climate than observations over the same period, enabling a more comprehensive analysis of extremes and providing the opportunity to explore the characteristics of unprecedented events. In this first application of the UNSEEN approach in sub-Saharan Africa we:
• Estimate the annual chance of experiencing unprecedented seasonal temperature and precipitation extremes during the peak of the maize growing season, January-March (JFM; FAO 2021)
• Assess how the likelihood of unprecedented seasonal extremes has already altered due to climate change
• Explore how the likelihood of extremes is linked with ENSO variability, and
• Investigate the implications for maize production and trade.
Observational data
The observational climate data used are the WATCH Forcing Data methodology applied to ERA5 reanalysis data, hereafter 'WFDE5' (Cucchi et al 2020). The WFDE5 dataset runs from 1979 to 2018 at a horizontal resolution of 0.5° by 0.5° (∼50 km × 50 km in South Africa).

UNSEEN climate model data
The UNSEEN approach is defined as using a large ensemble of initialised climate model simulations to identify plausible climatic conditions that could have occurred during the recent historical period (see Thompson et al 2017, Squire et al 2021). Here we use the DePreSys initialised ensemble, with JFM seasons taken from the November ensemble and from months 9-11 of the May ensemble. Due to WFDE5 starting in 1979, we use only 1979-2018 DePreSys data.

Crop data
Maize crop yield data for 1979-2018 are taken from the FAOSTAT database (FAO 2022). The climate model and observational data were restricted to the northeast area of South Africa, where maize is the predominant crop (figure 1).

Pre-processing steps
Because climate model data can exhibit spatial and temporal biases, we first assess where the simulations are consistent with the observations using a set of fidelity tests which compare the mean, standard deviation, skewness, and kurtosis. Both the temperature and precipitation variables required a bias correction to the mean to pass the fidelity tests. Any long-term trends were removed to make the climate estimates representative of the current climate (taken as the year 2018). Full details of the steps, tests and results are given in the Supplementary Information (figures S1-S3). To isolate year-to-year climate variability in the maize yield data, we remove the long-term trend by subtracting a 2nd-order polynomial line of best fit, giving a time series of yield anomalies (shown in figure 2(D)).

Analysis steps
The UNSEEN approach was used to identify the model realisations that produce record-breaking extremes for the maize region of South Africa and to calculate the annual chance of experiencing record-breaking climate extremes (the number of realisations in which the observational record was broken divided by the total number of realisations). At large spatial scales, maize yields can be characterised by a two-dimensional Gaussian function of temperature and precipitation (the yield response function (YRF); Shirley et al 2020). To improve robustness, in this study we linearise the relationship by taking the natural logarithm of the relative yield (Y; defined as the yield time series divided by the polynomial best fit) and use ordinary least squares regression to fit a quadratic relationship with temperature (T) and precipitation (P) that represents the argument of the Gaussian YRF, i.e.

ln(Y) = ln(Y0) − ((T − Topt)/WT)² − ((P − Popt)/WP)² = a + bT + cT² + dP + eTP + fP²,

where Y0 is the yield at the optimal temperature and precipitation total, Topt is the optimal temperature, Popt is the optimal precipitation total, WT and WP are the respective widths of the Gaussian YRF, and a, b, c, d, e, and f are regression coefficients, from which simultaneous equations enable the parameters of the Gaussian YRF to be obtained (see supplementary information). JFM temperature and precipitation are strongly anti-correlated and do not cover a sufficient range to be able to constrain a physically meaningful value of e. For this reason, we set e = 0, so that yield does not depend on the interaction between temperature and precipitation. While this simplifying assumption affects some details of the results, the overall conclusions are not strongly dependent on it.
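As a concrete illustration of the detrending and quadratic log-yield fit described above, here is a minimal Python sketch; it is not the authors' code, the input arrays are placeholders, and the Gaussian convention (widths without a factor of two in the denominator) is an assumption.

```python
import numpy as np
import statsmodels.api as sm

# Placeholder observed series for 1979-2018; replace with real JFM means and yields.
years = np.arange(1979, 2019)
rng = np.random.default_rng(0)
temp = rng.normal(22.0, 1.0, years.size)        # JFM mean temperature (degC), placeholder
precip = rng.normal(250.0, 60.0, years.size)    # JFM precipitation total (mm), placeholder
yield_tha = rng.normal(3.0, 0.5, years.size)    # maize yield (t/ha), placeholder

# 1) Remove the long-term yield trend with a 2nd-order polynomial best fit and
#    form the relative yield Y = yield / trend, as described above.
trend = np.polyval(np.polyfit(years, yield_tha, 2), years)
rel_yield = yield_tha / trend

# 2) Fit ln(Y) = a + b*T + c*T^2 + d*P + f*P^2 by ordinary least squares
#    (the interaction term e*T*P is set to zero, as in the paper).
X = sm.add_constant(np.column_stack([temp, temp**2, precip, precip**2]))
fit = sm.OLS(np.log(rel_yield), X).fit()
a, b, c, d, f = fit.params

# 3) Recover the Gaussian YRF parameters, assuming ln(Y) ~ -((T-Topt)/WT)^2 - ((P-Popt)/WP)^2.
#    With real data, c and f are expected to be negative (concave response).
T_opt, P_opt = -b / (2 * c), -d / (2 * f)
W_T, W_P = np.sqrt(-1 / c), np.sqrt(-1 / f)
print(f"T_opt={T_opt:.1f} degC, P_opt={P_opt:.0f} mm, W_T={W_T:.1f}, W_P={W_P:.0f}")
```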
This method is more robust than a non-linear fitting procedure for the small sample of available observations, while still allowing some exploration of the non-linear yield dependence on temperature and rainfall without needing cardinal temperatures. This transparent framework allows determination of the optimal JFM temperature and precipitation (at the YRF maximum) for maize varieties grown in South Africa and estimation of the yield reductions associated with record-breaking extremes. The amplitude of ENSO is measured through several different indices; here we use one of the most common, the Niño 3.4 index: the SST anomaly relative to climatology in the tropical Pacific Ocean region (5°N-5°S, 120°-170°W), hereafter referred to as the N3.4 region. As no WFDE5 bias-corrected version is available for SST, ERA5 SST data (Hersbach et al 2020) are used to calculate the observed ENSO timeseries. ENSO indices are also calculated from the DePreSys SSTs. To understand how ENSO affects the chance of record-breaking extremes, DePreSys data are split into 0.5 °C bins according to the SST anomaly in the Niño 3.4 region. For each bin, the chance of unprecedented hot, cold, wet and dry events, and their combinations, is calculated as the fraction of ensemble member realisations that exceed or subceed the observed records, together with how those fractions have changed over time.

The relationship between maize yields, temperature, precipitation and ENSO
Closer examination of the relationship between maize yields in South Africa and the observed JFM seasonal temperatures and precipitation amounts (figure 3) shows that yields tend to be higher when JFM temperatures are lower and precipitation amounts are higher than average, and that all the yield shocks (yield anomalies <−0.25 t ha−1; shown in pink in figure 3) occur when the JFM seasonal temperature is higher and precipitation amounts are lower than average. In addition, maize yields in South Africa are correlated with observed SSTs in the tropical Pacific Ocean (figure 4(A)). The observed relationship between the ENSO phase and maize yields is strong, with a Pearson correlation of −0.47 (p-value = 0.005), equivalent to other recent findings (Anderson et al 2019). This relationship is likely to be the result of ENSO's influence on JFM temperature and precipitation in the maize growing area. Figures 4(B) and (C) show the relationship between ENSO and JFM temperature and precipitation, respectively, and demonstrate that the DePreSys model reproduces the relationship found in the observations for JFM temperature, while the relationship with JFM precipitation is less well represented (although the sign of the correlation generally agrees). As well as affecting domestic maize supply, ENSO-related yield shocks can affect maize availability to trading partners, which is reflected in South Africa's maize import and export data (figures 2(E) and (F)). Maize yield and exports are positively correlated (Pearson's correlation 0.50, p-value = 0.00089), whereas yield and imports are negatively correlated (Pearson's correlation −0.63, p-value = 0.00001), i.e. exports tend to increase and imports tend to decrease for higher maize yields. Figure 5 shows that South Africa typically exports both to nearby Southern African Development Community (SADC) countries in Africa and to Southeast Asia. The 2015-2016 El Niño event was one of the strongest climatic warming events of its kind recorded to date and caused extreme drought conditions in southern Africa.
The agricultural impacts in sub-Saharan Africa were severe, with over 40 million people in the SADC (23%) being food insecure and requiring international aid (Southern African Development Community 2016), ∼35% more than the five-year average (National Vulnerability Assessment Committee 2017). South Africa experienced notably lower than average yields (figure 2(D)), necessitating 3.3 million tonnes of imports (∼2 million tonnes more than the 2014-2018 average), one-third of which was redistributed and exported primarily to nearby countries (figures 2(E) and 5, SAGL 2016, FAO 2022). Table 1 shows the observed seasonal records, the annual chance of breaking those records, and the maximum and minimum unprecedented amounts. South Africa experiences some of the most extreme droughts in the world, and even multiyear droughts are not uncommon (Rouault and Richard 2003). For example, according to the WFDE5 data, the lowest observed precipitation amount for JFM occurred in 2007, when South Africa's maize-growing region received less than half the expected amount (115 mm compared to the climatological mean of 252 mm). Figure 2 shows that this was associated with reduced maize yield, decreased maize exports, and increased imports.

Chance of an unprecedented climate event and impact on maize yield

The estimated chance of subceeding the observed JFM precipitation record of 115 mm in the maize region of South Africa is 0.8% yr^-1 (figure 6). The lowest simulated rainfall total for JFM is 67 mm, which would represent a severe drought, likely resulting in significantly reduced maize production unless the crop were irrigated. For comparison, the total growing-season precipitation requirements for maize are optimally 600-1200 mm and at the absolute limits 400-1800 mm (FAO 2010). The WFDE5 data show that the maize growing region of South Africa receives on average just 493 mm between October and April, and typically half falls in the JFM season. This means that the current South African climate sits on the borders of suitability for maize production, as indicated by figure 7. There is also a 4.1% yr^-1 chance of exceeding the present-day JFM temperature record (figure 6), which could contribute to damaging soil moisture reductions even during years of average JFM rainfall. Temperature requirements for maize production are optimally 18 °C-33 °C (FAO 2010), showing that South African temperatures are well suited to maize. Combinations of extremely high temperature and low rainfall also pose a significant risk to maize production, with a 0.2% chance per year of subceeding the JFM precipitation record at the same time as exceeding the JFM seasonal temperature record (figure 6). Figure 7(A) shows the best-fit model for relative yield as a function of JFM precipitation (adjusted R² = 0.42), and figure 7(B) shows the equivalent model expressing yield as a function of JFM temperature (adjusted R² = 0.59). Figure 7(C) shows the best-fit bivariate Gaussian model for yield as a function of both JFM temperature and JFM precipitation (adjusted R² = 0.66). The darker green contours show where the yield is expected to be higher, suggesting that yield tends to be maximised for a JFM temperature of approximately 21 °C and JFM precipitation of 292 mm. Figure 7(A) shows that observed precipitation totals are generally below the optimal amount (292 ± 206 mm) and figure 7(B) shows that observed JFM temperatures are generally above the optimum (20.9 ± 2.3 °C).
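To make the shape of the fitted YRF concrete, its relative yield can be evaluated directly at any (T, P) pair. The snippet below is illustrative only: we read the central values quoted above as the optimum and width of each Gaussian axis, which is our assumption about what the ± figures denote (they may instead be uncertainty ranges), and the extreme season used in the call anticipates the unprecedented values discussed below.

```python
import numpy as np

def relative_yield(T, P, T_opt, W_T, P_opt, W_P):
    """Relative yield (1.0 at the optimum) from a bivariate Gaussian YRF with no T-P interaction."""
    return np.exp(-((T - T_opt)**2) / (2 * W_T**2) - ((P - P_opt)**2) / (2 * W_P**2))

# Assumed parameters: the central estimates quoted in the text, treated here as
# optimum/width pairs (T_opt ~20.9 degC, W_T ~2.3 degC; P_opt ~292 mm, W_P ~206 mm).
print(relative_yield(T=24.7, P=66.6, T_opt=20.9, W_T=2.3, P_opt=292.0, W_P=206.0))
# ~0.14, the same order of magnitude as the ~0.16 relative yield reported below for this scenario.
```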
In addition, figure 6 and table 1 show the potential for climate events that are more extreme than any recorded over the past 40 years. These unprecedented events fall well outside the central peak of the YRF (see figure 7), likely resulting in very large yield reductions. For example, using the best-fit YRF, for the minimum simulated JFM precipitation (66.6 mm; table 1) and corresponding temperature (24.7 °C), the estimated relative yield is 0.16 (0.12-0.22), i.e. 16% of the yield obtained at the optimal JFM temperature and precipitation. For the maximum simulated JFM temperature (25.4 °C; table 1) and corresponding precipitation total of 152 mm, the estimated relative yield is 0.12 (0.067-0.22), i.e. 12% of the yield obtained at the optimal temperature and rainfall. These results suggest that temperature extremes are likely to be the stronger driver of exceptionally low maize yields in South Africa. However, caution is needed in interpreting these results because they extrapolate the YRF far beyond the range of recent observations, and we do not account for interactions between temperature and rainfall. Previous work (Sazib et al 2020) has found that maize yield decreases associated with El Niño events tend to be larger than the corresponding yield increases during La Niña events. The YRF derived here suggests that this could in part be because maize yield has a non-linear dependence on temperature and precipitation: La Niña years bring the growing conditions closer towards the optimum, where the YRF is flatter, whereas El Niño years push the growing conditions further away from the optimum, where the YRF is steeper.

Figure 7 caption (panels (B) and (C)): (B) Same as (A) but expressing maize yield as a function of JFM temperature. The best-fit parameters of the bivariate Gaussian YRF in brackets show the 5-95th percentile range estimated from 10,000 bootstrap resamples of the data. Note that the interaction between temperature and precipitation (ρ) was set to zero. T_opt is the optimal temperature, P_opt is the optimal precipitation total, W_T and W_P are the respective widths of the YRF. (C) Maize yield as a function of JFM temperature and precipitation; the contours show the best-fit bivariate Gaussian function (adjusted R² = 0.66) expressed on a relative scale, with a value of 1 indicating where the yield is maximised at the optimal temperature and precipitation. Observed (red) and modelled (pink) JFM temperature and precipitation are plotted in the contours to show the expected yield. The realisation highlighted by the black circle shows the lowest estimated yield. Note that the statistical model shows larger uncertainties when the predictions go beyond the observed range, meaning that caution is required when interpreting the implications of climate extremes.

Dependence of unprecedented high temperature events on ENSO

ENSO is a strong driver of interannual JFM temperature variability in South Africa, and therefore of maize yield. We now build on this finding by evaluating the influence of ENSO on the likelihood of unprecedented high JFM temperatures (see Squire et al 2021). Because DePreSys and the observations both show a similar relationship between ENSO and JFM temperature (figure 4(B)), we have confidence that the UNSEEN approach remains suitable for this analysis. The results show an increasing chance of unprecedented hot events and a decreasing chance of unprecedented cold events with time.
For example, the annual chance of JFM average temperatures higher than 24 °C is now five times higher than in 1980 and occurs for all N3.4 region SST anomaly categories.

Discussion

Our analysis shows that the risk of unprecedented high JFM average temperatures is increasing, posing a growing threat to agriculture in South Africa. Warmer seasonal temperatures speed up plant development, which shortens the period available for optimal growth and leads to reduced yields (Lizaso et al 2018). There is a strong relationship between the mean JFM seasonal temperature and the maximum daily temperature in JFM (figure S6 in the supplementary information). This suggests an increasing risk of high daily maximum temperatures, which could exceed the optimal daily mean temperature for maize in JFM (estimated to be 30.5 °C; Sánchez et al 2014) and damage crops through reductions in stomatal conductance and therefore transpiration and photosynthesis (Sabagh et al 2020). The WFDE5 observations show that days above the optimal temperature for maize are already experienced in the current climate (figure S6), and that some parts of the maize-growing region of South Africa also experience daily maximum temperatures that approach the maximum maize temperature threshold during anthesis of 37.3 °C (Sánchez et al 2014). During unprecedented hot seasons, it is likely that this maximum daily temperature threshold would be exceeded more frequently, particularly in the north and west, which are generally warmer than the regional average. Similarly, there is a clear relationship between JFM precipitation and the maximum number of consecutive wet and dry days over the same season (figure S7 in the supplementary information). This means that unprecedented wet and dry seasonal JFM events could lead to more severe floods or droughts. Increasing dry spell duration is of concern because maize yields are reduced when the crop is water stressed, especially during the most sensitive reproductive growth stages (Daryanto et al 2016), which occur in the JFM season in South Africa. Increasing wet spell duration is also concerning because maize yields decrease as excessive wetness increases (Kanwar et al 1988). From the observational record we find significant correlations between JFM temperature and precipitation in South Africa's maize region and ENSO. It may, therefore, be possible to provide early warnings of extreme conditions that could affect maize production, because ENSO has predictability (Monerie et al 2019, L'Heureux et al 2020). However, whilst the model reproduces the observed temperature relationship with ENSO, the precipitation relationship is less well simulated. To obtain a more complete understanding, it will be important to explore other large-scale drivers of climate extremes in South Africa and how they interact with ENSO variability. For example, the influence of the Antarctic Oscillation pattern on South African precipitation has been shown to be stronger during La Niña years (Pohl et al 2010). While this study has examined the likelihood and magnitude of potential record-breaking extremes in the current climate, the magnitude and frequency of temperature extremes are expected to be higher in the future due to climate change (Coumou and Robinson 2013, Coumou et al 2013).
This implies that future climate change in the region could result in the unprecedented extremes identified in our analysis becoming commonplace by the 2040s, with an increasing likelihood of experiencing temperatures exceeding the maximum threshold for maize production. Even though the critical temperature thresholds for maize may not regularly be exceeded under the current climate, warmer temperatures can reduce maize yields and quality by making conditions more favourable for weeds, pests and diseases and by causing more rapid crop development (Mukanga et al 2010, Luo 2011). For example, estimates suggest that each degree day above 30 °C can reduce the yield by 1% under optimal rain-fed conditions and by 1.7% under drought conditions (Lobell et al 2011). Future changes in precipitation may also lead to further erosion and water-logging of soil in the region (Chapman et al 2021).

Conclusions

Maize production in South Africa is a crucial component of food security domestically and internationally, particularly through exports to neighbouring countries. Yields in South Africa are strongly dependent on summer temperature and precipitation, tending to be reduced during hot and dry conditions. In turn, hot summer conditions and low maize yields are strongly associated with El Niño events. Using a large ensemble of initialised climate model simulations, we find that South Africa's maize region is at risk of experiencing record-breaking hot, cold, dry, or wet events under current climatic conditions. The likelihood of hot conditions has already increased and is likely to increase further, suggesting that significant investment is needed to develop adaptations that manage the risk to sub-Saharan African food systems now and to build resilience to the projected impacts of climate change. Adaptations could include changing sowing dates, using and developing suitable crop varieties, and building irrigation capabilities (Fisher et al 2015). Although breeding new drought- and heat-tolerant maize varieties is likely a research priority for the region, maize yields in southern Africa are among the lowest in the world and breeding alone is unlikely to be sufficient to build the required resilience. Changes to agricultural management practices are also likely to be critical, such as adopting climate-smart agriculture techniques to increase soil water storage (Cairns et al 2013) and increasing crop diversification, which could have the added benefit of improving nutrition security (Renwick et al 2021). In addition, because the influence of El Niño events is felt globally, actions taken in South Africa to combat poor harvests can benefit food systems in other African countries. To manage the impact of El Niño events, we therefore recommend a coordinated regional approach, such as the one conducted during the devastating El Niño event of 1992 (Callihan and Eriksen 1994, World Food Programme 2016), and that trade relationships are built and maintained with countries that experience opposite impacts from the same ENSO phase, such as Argentina.

Data availability statement

Any data that support the findings of this study are included within the article.
v3-fos-license
2021-01-16T14:10:21.694Z
2020-12-14T00:00:00.000
231618041
{ "extfieldsofstudy": [ "Computer Science" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://zenodo.org/record/4712270/files/10.1109:CDC42340.2020.9304426.pdf", "pdf_hash": "bd8a1b94a64329e8ecee6e691b61f641adf2fe2f", "pdf_src": "IEEE", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42167", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "sha1": "bd8a1b94a64329e8ecee6e691b61f641adf2fe2f", "year": 2020 }
pes2o/s2orc
Dynamic Certainty Equivalence Adaptive Control by Nonlinear Parameter Filtering

This paper presents a novel solution to the problem of designing an implementable (i.e., differentiator-free) model-reference output-feedback direct-adaptive controller for single-input-single-output linear time-invariant systems with relative degree possibly larger than one. The new paradigm is based on a version of the Dynamic Certainty Equivalence (DyCE) principle. The approach proposed in this work consists in realizing the DyCE control through surrogate parameter derivatives, made available by a Nonlinear Parameter Filter (NPF), instead of feeding the DyCE controller with the derivatives of the estimates produced by a High-Order Tuner (HOT). The proposed adaptive controller does not require error augmentation or normalization, allowing the use of large adaptation gains for fast convergence speed. Moreover, the proposed architecture can be easily equipped with well-known robust modifications of tuning laws. The performance of the proposed algorithm is demonstrated via comparative simulations with an error augmentation-based method and a simplified HOT algorithm.

I. INTRODUCTION AND PROBLEM FORMULATION

Model Reference Adaptive Control (MRAC) is undoubtedly one of the most intensively studied problems by the adaptive control community, dating back to the 1950s. Despite this long and rich history, there are still several open issues in MRAC design that involve, in particular, systems with non-unitary relative degree. The different MRAC approaches that have emerged from decades of research can be roughly grouped into two main categories. The first one, known as indirect-adaptive control, includes identifiers of the plant and observers of the states. Its main restriction is the dependence of control and observation performance on the identification process, where lack of persistence of excitation (PE) of the regressor may degrade the control performance or even cause instability. The second category involves direct-adaptive control schemes, where the controller parameters are directly updated online without the need of estimating plant parameters. Typically, direct-adaptive schemes enjoy the favourable property of guaranteeing asymptotic convergence of the tracking error even in the absence of PE. In this paper, we will only consider the direct-adaptive case. The majority of direct MRAC schemes for uncertain linear time-invariant (LTI) systems reported in the literature relies on the certainty equivalence (CE) principle. This principle rests on the fact that the ideal model-matching controller is time-invariant and affine in the state variables of the system. Based on this paradigm, the unavailable (unknown) parameters of the ideal model-matching controller are replaced by their estimates. CE-type controllers can be readily applied to Strictly Positive Real (SPR) systems. Conversely, when applied to systems with relative degree larger than 2, direct MRAC schemes require error augmentation and normalization of the parameter adaptation law [1]. Normalization makes the convergence speed trajectory-dependent (i.e., non-uniform in the initial conditions) and typically slower in the presence of large initial parametric mismatch. These issues prompt an investigation of adaptive controllers that are not based on the CE paradigm and do not require normalization. In this regard, an alternative approach to CE is DyCE adaptive control.
This technique dates back to 1987, when Mudgett and Morse presented it in [2], although the name and the acronym for this method were coined by Ortega in [3]. In order to apply DyCE control to systems with arbitrary relative degrees, high-order derivatives of the parameter vector are needed. Unfortunately, the tuners (adaptation laws for the parameters) available at the time of Mudgett and Morse were only able to provide the first derivative [4], [5]. In the seminal work [6], Morse provided DyCE with a modified parameter adaptation scheme, named HOT, which did not require normalization. HOT update laws are able to produce - without direct differentiation - the estimated parameter vector plus its first ρ derivatives, where ρ is the relative degree of the plant. Compared to the augmented-error CE methods, the DyCE+HOT architecture is characterized by an enhanced transient performance, due to the possibility of using large adaptation gains without normalization. The flip side of the coin is the increased complexity of the original HOT method of Morse, which - in the absence of robust modifications - is known to suffer from a lack of robustness in case of poor excitation, which may cause parameter drift in the presence of unstructured perturbations. Nikiforov [7] incorporated a leakage modification into the HOT to make it robust in the presence of bounded perturbations (in the sense of guaranteeing boundedness of all signals). Moreover, the dynamical order of the tuner was reduced by a factor of 2n via a simplification of its structure. However, it is known that leakage modifications alter the adaptation dynamics so that the parameters are not guaranteed to converge to the true values, even in case of Persistence of Excitation (PE). Other robust modifications used in conventional CE adaptive control, most noticeably parameter projection, are not affected by this problem and allow exact parameter convergence to be achieved under PE conditions and in the absence of unstructured perturbations. However, these modifications cannot be applied easily to the DyCE+HOT framework, due to the nonlinearity of the tuner. Finally, it is worth mentioning that the Adaptive Backstepping method by Krstic, Kanellakopoulos and Kokotovic [8], as is the case for DyCE+HOT and its variants, does not need normalization and achieves enhanced performance compared to CE adaptive control. Adaptive backstepping is inherently nonlinear as it involves the use of nonlinear tuning functions. However, backstepping design is less procedural than DyCE+HOT, in the sense that it calls for specific customization when applied to different plant models. This increased complexity with respect to DyCE+HOT comes with the advantage of being able to cope with unmatched model uncertainties. Like all other adaptive schemes, backstepping requires robust techniques such as projection, dead-zone and leakage modifications to enforce signal boundedness in case of poor excitation. Nonetheless, due to its complexity, it shares with DyCE+HOT a certain lack of popularity among practitioners. In this paper, we propose a new approach to DyCE adaptive control. Instead of relying on Morse's HOT, our method consists of filtering the estimated parameter vector through a NPF, combined with a conventional first-order tuner. The NPF devised in this work is able to produce surrogate signals to be used in the DyCE control law in place of the unavailable higher-order derivatives of the parameters.
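To give a feel for how filtering can supply derivative information without differentiation, the sketch below is a toy illustration of ours (it is not the NPF construction defined later in the paper): a second-order low-pass filter applied to a sampled parameter estimate yields a smoothed copy plus surrogate first and second derivatives that are available algebraically from the filter states. The bandwidth `lam`, the filter order and the test signal are arbitrary choices made only for the demonstration.

```python
# Toy illustration (not the NPF of the paper): a low-pass filter applied to a
# parameter estimate provides surrogate derivatives without differentiating it.
import numpy as np

def filtered_with_derivatives(theta_hat, dt, lam=20.0):
    """Second-order filter  xf'' + 2*lam*xf' + lam**2*xf = lam**2*theta_hat.

    Returns arrays (xf, dxf, ddxf): a smoothed copy of theta_hat and its first and
    second surrogate derivatives, all computed from the filter states only.
    """
    xf, dxf = 0.0, 0.0
    out = []
    for th in np.asarray(theta_hat, dtype=float):
        ddxf = lam**2 * (th - xf) - 2.0 * lam * dxf  # algebraic in available quantities
        out.append((xf, dxf, ddxf))
        xf += dt * dxf                               # forward-Euler integration
        dxf += dt * ddxf
    return tuple(np.array(col) for col in zip(*out))

# Demo: the surrogate derivative tracks the true derivative of a slow signal.
t = np.arange(0.0, 5.0, 1e-3)
xf, dxf, ddxf = filtered_with_derivatives(np.sin(0.5 * t), dt=1e-3)
# After the filter transient, dxf closely follows 0.5*cos(0.5*t).
```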
The proposed adaptive controller does not require error augmentation or normalization, allowing the designer to use large adaptation gains, thus achieving -in principle -a faster speed of convergence. As a noteworthy advantage versus conventional DyCE+HOT methods, the proposed architecture can be easily equipped with all robust modifications found in conventional adaptive control, like leakage and parameter projection. Notation : The Laplace transform of a signal u(t) : R → R in the s-domain will be denoted by u(s) = u(t) . This notation slightly departs from the one (typical of the adaptivecontrol literature) making use of simple square brackets. Simple square brackets in this paper will be used for vectors and matrices, e.g. v = [1 0 0] ∈ R 3 . | · | denotes the absolute value, whereas (·) denotes the transpose of vectors and matrices. II. CERTAINTY EQUIVALENCE CONTROLLER BY NONLINEAR PARAMETER FILTERING Consider the single-input-single-output (SISO) LTI system described by: where u, y ∈ R denote the plant input and output, respectively. N (s) and D(s) are monic and coprime polynomials with unknown coefficients, b is the high-frequency gain of the system. The following standard assumptions of outputfeedback MRAC [5], [9] are made: (A.1) The degrees n and m of N (s) and D(s) are known and relative degree ρ := n − m ≥ 1; (A.2) b is non-zero, has known sign and is norm-bounded from above by a known constantb; (A.3) The polynomial N (s) is Hurwitz. The objective is to determine u(t) using a differentiator-free controller such that the trajectories of the closed-loop system are bounded, and the plant output y(t) tends asymptotically to the output y r (t) of the reference model where r(t) is referred to as the "reference command", and The reference model satisfies the following assumptions: (B.1) The command r(t) is a uniformly bounded piece-wise continuous function of time; (B.2) The polynomial D r (s) is a monic and Hurwitz polynomial of degree ρ = n − m. Next, the DyCE will be used to design the MRAC. Using well-known results from adaptive control theory, the tracking errorỹ(t) := y(t) − y r (t) can be written in the form where (t) is an exponentially-decaying signal which incorporates the effect of the unknown initial conditions, θ ∈ R 2n is a vector of unknown constant parameters and η(t) ∈ R 2n is the regressor vector obtained by filtering the input, output and reference signals being L(s) an arbitrary Hurwitz polynomial of order n − 1. The DyCE method consists in transforming (3), of relative degree ρ, into a new error equation of relative degree 0, for which the design of the parametric adaptation law becomes much simpler. Define a vector of filtered regressors, obtained by filtering η(t) as: Then the tracking error can be written as The DyCE paradigm applied to output-feedback MRAC consists in choosing a control law of the form whereθ(t) is the estimated parameter vector, whose update law will be determined later. Defining the estimation error θ(t) :=θ(t) − θ and substituting (6) in (5) yields the relative degree-0 (a.k.a. "algebraic" or "static") error model The main difficulty in this approach arises when one expresses (6) in time-domain, as u(t) will depend on the first ρ time-derivatives of both ξ(t) andθ(t), as seen in the following: where d i ∈ R, i = 0, . . . , ρ are the coefficients of the polynomial D r (s), with d ρ = 1. 
It is noted that, in equation (7), the required ρ derivatives of ξ(t) are available without direct differentiation, due to the fact that ξ(t) is obtained from η(t) through the filter (4) having relative degree equal to ρ. Conversely, the need to compute the first ρ time-derivatives ofθ prompts the development of specific adaptation laws capable to produce all the required derivatives without direct differentiation. In this regard, the DyCE control scheme will be applied using a NPF to develop the tuner, alternative to HOT. The NPF method takes the current estimate of the parameter vector as input, and produces a filtered version together with all the needed derivatives. The parameter vector feeding the NPF can then by updated via a conventional first-order tuner (see (13) below), typically used in CE adaptive control. The proposed DyCE+NPF control law takes the following form: equivalent to the time-domain representation whereθ NPF (t),θ NPF (t) are the filtered parameter vector and its derivatives, all generated by the NPF to be determined. Substituting the control law (8) into the error equation (5) and adding the term −b ξ(t) θ (t) to each side yields where the effect of the exponentially-decaying term (t) has been neglected. Defining the NPF error vector θ NPF (t) :=θ NPF (t) −θ(t), and the scalar signals (projected errors) equation (10) can be expressed in time-domain as Let the update law forθ(t) be given by the first-order tuneṙ where µ > 0 is an arbitrary scalar gain. Substituting (12) into (13) and considering (11), one obtainṡ This expression will be used later in the stability analysis. The NPF that generates the filtered parameter vector θ NPF (t) takes on the following cascaded structure where Γ(t) ∈ R 2n×2n is a (possibly time-varying) gain matrix to be determined. Remark 2.1: The use of a conventional first order adaptive law makes it easy to apply usual robust modifications, including σ and e 1 modifications. In particular, parameter projection can be used whenever a convex admissible parameter set is known a priori. The main advantage of using parameter projection is that in nominal noise-free conditions, the estimated parameters are guaranteed to converge to the true ones in case of sufficient excitation. III. STABILITY ANALYSIS In this section, we present the stability analysis of the proposed scheme using the following intermediate result: Lemma 3.1: (Invariance-like Lemma) If two scalar functions V (t), W (t) : R ≥0 → R ≥0 satisfy the following conditions: The proof of the above Lemma follows from the Lyapunovlike Lemma in [10] (that, in turn, is a derived from Barbȃlat's Lemma) and is omitted here due to the space limitations. To aid the discussion, let us introduce the parametric differences between successive layers of the NPF:θ 1 (t) :=θ 1 (t) −θ(t) and Moreover, the layer-to-layer differences obey the dynamics: where we have taken advantage of the relatioṅ θ i (t) = −Γ(t)θ i (t). It is worth noticing that the NPF vector error,θ NPF (t), can be expressed as Moreover, defining the scalar signals then ε NPF (t) can be expressed as The main result concerning the stability of the proposed scheme is reported in the following theorem. 
Theorem 3.1: If Assumptions (A.1)-(B.2) hold, then for system (1) in closed-loop with the adaptive controller comprising the DyCE controller (9), the tuner (13) and the filter (15), there exists a proper choice of Γ(t) such that the trajectories of the closed-loop system originating from any initial condition are bounded and the tracking errorỹ(t) converges to zero asymptotically. Proof: In view of (12), the following implication holds: Therefore, the proof consists in showing that the DyCE+NPF scheme makes ε θ (t) and ε NPF (t) converge to zero simultaneously. With this in mind, consider the following quadratic function as a building block of the overall candidate Lyapunov-like function: where α : 0 < α < 1 and σ Ξ > 0 are arbitrary positive scalars, and Ξ(t) := ξ(t)ξ(t) . Thanks to (11), (17) and (19), one can use V NPF as a brick of the candidate Lyapunovlike function to establish the convergence of ε NPF (t). Before proceeding further, Γ(t) is given the following structure: with Λ(t) to be determined. Taking into account (16) and (22), the derivative of V NPF along the trajectories of the closed-loop system reads aṡ To study the stability of the closed-loop adaptive system, consider the following candidate Lyapunov-like function The challenge is to design Λ(t) to make V a Lyapunov-like function for the closed-loop adaptive system. In what follows, the notation will be streamlined by omitting the explicit time-dependence of all signals, unless strictly required. By exploiting (14), the derivative of V along system's trajectory can be written aṡ Assigning the matrix Λ(t) = σ ΞΞ (t) + Λ(t), with Λ(t) a (possibly time-varying) symmetric positive-definite matrix to be determined, one obtainṡ Application of Young's inequality and the identity ε NPF = ρ i=1 ξ θ i to selected terms of the above expression yields where Ψ(t) is a time-varying non-negative-definite symmetric matrix defined as Ψ(t) := (I + σ Ξ Ξ(t))ξξ (I + σ Ξ Ξ(t)). Application of Young's inequality to the last two terms of the right-hand side of the inequality forV yields is a time-varying strict positive-definite symmetric matrix defined as Φ(t) := σ Ξ (I +Ξ(t) Ξ (t)). Accordingly, the derivative of the candidate Lyapunov-like function can be bounded as followṡ The previous inequality can be rewritten in compact form as: where γ 1 (Ψ, Φ, Ξ), γ ρ (Ψ, Φ, Ξ), γ i (Ψ, Φ, Ξ) are defined as for i = 2 · · · ρ − 1. Next, a selection of Λ must be made so that γ 1 , γ ρ and γ i become strictly positive for all t ≥ 0, to make the right-hand side of (24) negative semi-definite. To this end, Γ and Λ are selected as with λ > 0 is an arbitrary scalar constant, while the matrix Λ * will be assigned to be a time-varying positive-definite matrix dependent on available signals. Substituting (26) in (25) one obtains Next, Λ * = Λ * (Ψ, Φ, Ξ) will be designed such that γ j > 0, for all j = 1, 2, · · · ρ and t ≥ 0. Noticing that Λ ≥ Λ * +λΞ, by virtue of (26), one obtains where i = 2, .., ρ − 1 andρ := µb 1 + ρ 2 . Choosing for all i = 2, .., ρ − 1. As a result, both terms on the righthand side of (28) are positive, as for all i ∈ {2, .., ρ − 1}. Since 0 < α < 1, one can easily see that (1 + α)/(1 − α) > 1. Moreover, due to the choice of λ made in (29), we have so that the following bound holds for γ ρ : Finally, (29) also guarantees that for any α ∈ (0, 1). 
Therefore, one obtains the corresponding lower bound. Substituting the lower bounds in (31), (33) and (35) into (24), due to the relations defined in (11) and (18), and recalling that Ξ(t) = ξ(t)ξ^T(t), one finally obtains the inequality in (36). It is straightforward to see that V(t), defined in (23), and W(t) are positive semi-definite, while V̇(t) is negative semi-definite in view of (36). Therefore, from (21), (23) and (36) we can infer that θ̃, θ̃_i ∈ L_∞ and ε_θ, ε_i ∈ L_2, for i = 1, ..., ρ. This result in turn implies that ε_NPF ∈ L_2, and therefore ỹ ∈ L_2. By invoking Assumptions (A.3) and (B.2) (i.e., customary minimum-phase arguments) and owing to the fact that y is the sum of y_r ∈ L_∞ and ỹ ∈ L_2, one can show that both the filtered regressor vector ξ and its first derivative ξ̇ are bounded (the vector ξ can be written as the output of a bank of strictly proper stable linear filters taking the signal y as input). From the boundedness of ξ and ỹ we can conclude that the derivative of θ̂ is also in L_∞, and consequently (each layer of the filter (16) being stable by design) that θ̂_i and their derivatives are in L_∞, for i = 1, ..., ρ. Then, the boundedness of θ̂, ξ, ξ̇, θ̂_i and their derivatives yields that Ẇ(t) is bounded. Hence, W(t) is a uniformly continuous function. Then, according to Lemma 3.1, W(t) converges to zero asymptotically, implying that ε_θ(t) and ε_i(t), with i = 1, ..., ρ, converge as well. Thanks to (19), the asymptotic convergence of ε_NPF(t) follows. Finally, in the light of (20), the convergence of ε_θ(t) and ε_NPF(t) implies that of ỹ(t).

Remark 3.1 (Implementability-Causality): This remark is aimed at showing that the DyCE+NPF adaptive control is implementable. Compared to the HOT of Morse, which contains matrix-gain terms depending on Ξ(t) = ξ(t)ξ^T(t), the gain matrix Γ(t) of the NPF depends on the regressor vector ξ(t) and its derivative ξ̇(t). The dependence of Γ(t) on ξ̇(t) calls for further considerations about the implementability (that is, the realizability via a causal system) of the ρ-th derivative of the filtered parameter vector. It is noted that the derivatives of θ̂_NPF(t) required in the implementation of the DyCE control satisfy the functional dependence θ̂^(i)_NPF(t) = f_ρ(θ̂_ρ(t), ..., θ̂_1(t), θ̂(t), ξ(t), ..., ξ^(ρ)(t)). Since the derivatives of ξ(t) are available up to the ρ-th order, the required derivatives of θ̂_NPF(t) can be realized causally.

IV. ILLUSTRATIVE EXAMPLE

In this section, a numerical example is provided to demonstrate the effectiveness of the proposed adaptive controller. The proposed algorithm is compared with the classical augmented-error-based MRAC with normalization [9] and the simplified HOT [7] of Nikiforov. The Runge-Kutta integration method with fixed sampling interval T_s = 10^-3 s has been employed for all simulations. Consider a relative-degree-two unstable LTI plant with unknown parameters a_2 = 2, a_1 = −1 and b = 2, and external disturbance d(t). The upper bound b̄ = 4 is assumed on the high-frequency gain b, whose sign is known a priori. The reference model is selected as y_r(t) = [1/(s^2 + 2s + 1)] r(t). We first consider a sinusoidal reference signal r(t) = 4 sin(0.8t) in a disturbance-free scenario, i.e., d(t) = 0. For all three methods considered, the plant model is initialized with the same initial condition, y(0) = ẏ(0) = 0, whereas the gains are tuned so as to achieve comparable convergence speed.
More specifically, the AugE (short for augmented-error adaptive controller with normalization) is tuned with the following selection of the gains: Λ(s) = s + 2, γ = 0.5 and Γ = 0.5 I_{2×2}; conversely, the parameters of the HOT (short for Nikiforov's HOT controller) have been selected as λ = 1, µ = 1, γ = 1 and σ = 0. The parameters of the proposed controller, denoted by the acronym NPF, have been chosen as µ = 1, α = 0.5 and λ = 10, whereas Γ(t) is given by (27) and (29). The behavior of the tracking error for the three MRAC algorithms is shown in Figure 1. From the analysis of Figure 1, it is noted that the three methods all succeed in tracking the reference signal with similar convergence time. However, the DyCE methods based on the HOT and the NPF display a better transient behavior. Next, the performance of the three algorithms in the presence of the bounded disturbance d(t) = 2 sin(3t) + 0.1 sin(20t) is compared. The results of the simulations are reported in Figure 2, which suggests that the proposed method achieves a higher tolerance to high-frequency disturbances than the other two methods.

V. CONCLUDING REMARKS

In this paper, we have proposed a new model-reference output-feedback controller for SISO LTI systems with relative degree possibly larger than one. The new adaptive controller is based on the DyCE principle and consists of a nonlinear parameter filter equipped with a first-order tuner. The rationale behind the proposed architecture consists in designing a cascaded nonlinear low-pass filter that, once applied to the parameter vector obtained by standard adaptation laws, produces a filtered parameter vector that has computable derivatives, making the DyCE control law implementable. One remarkable feature of the proposed adaptive controller is that it does not require normalization, which is instead mandatory in augmented-error adaptive controllers for systems with relative degree larger than two. Hence, the proposed DyCE adaptive controller with NPF is able to use large adaptation gains, thus achieving fast convergence, similarly to the HOT formulation of Morse. Moreover, the proposed architecture can be easily equipped with the robust modifications of update laws used in conventional adaptive control, including leakage modifications and parameter projection.
v3-fos-license
2024-07-14T15:45:09.956Z
2024-07-09T00:00:00.000
271131280
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.3389/fchem.2024.1412349", "pdf_hash": "b6e02a9532360b5e82b499f9add302ce8509c72e", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42169", "s2fieldsofstudy": [ "Medicine" ], "sha1": "25d2f7d95dee3ac5e4264d23c1074ef17a58722f", "year": 2024 }
pes2o/s2orc
Pharmacophore-based virtual screening of commercial databases against β-secretase 1 for drug development against Alzheimer's disease

β-secretase 1 (BACE1), one of the most important drug targets, is a membrane-associated aspartate protease pursued for treating Alzheimer's disease (AD). Several inhibitors of β-secretase 1 have been pursued, but none has yet proved effective. Virtual screening based on pharmacophores has been shown to be useful for lead optimization and hit identification in the preliminary phase of developing a new drug. Here, we screen commercially available databases to find hits against β-secretase 1 for drug discovery against AD. Virtual screening of 200,000 compounds was performed using the database from the Vitas-M Laboratory. The phase screen score was used to assess the screened hits. Molecular docking was performed on compounds with phase scores >1.9. According to the study, the 66H ligand of the crystal structure showed the best performance against β-secretase 1. Redocking of the co-crystal ligand showed that the docked pose superimposed well on the crystal structure. The reference complex formed three hydrogen bonds with Asp93, Asp289, and Gly291; one van der Waals interaction with Gly74; and three hydrophobic interactions. After equilibration, the RMSD of the reference compound maintained a value of ∼1.5 Å until 30 ns and then increased to 2.5 Å. In comparison, the RMSD of the S1 complex steadily increased to ∼2.5 Å at 15 ns, showed slight fluctuations at approximately 2.5-3 Å until 80 ns, and then stabilized toward the end of the simulation. The protein structures remained compact during the simulation when bound to these compounds, as indicated by stable Rg values. Furthermore, the MM/GBSA technique was employed to analyze the total binding free energies (ΔG_total) of both compounds. Our study provides new insight into the use of 66H as a β-secretase 1 inhibitor for drug development against AD.
Introduction

Neurodegenerative disorders (NDDs) are characterized by the gradual deterioration and loss of specific groups of neurons, primarily those of the central nervous system (Behl et al., 2021). Alzheimer's disease (AD) causes neurodegenerative changes characterized by a progressive, irreversible, and insidious weakening of cognitive function, including memory loss and various cognitive impairments (Yusufzai et al., 2018). AD is a widespread and frequently encountered type of dementia (Bogdanovic et al., 2020), associated with decreased memory and cognition (Marelli et al., 2020; Wilson et al., 2012). It is the sixth leading cause of death in the geriatric population (Gaugler et al., 2019). The progression of AD comprises three primary aspects. First, a lack of cholinergic transmission results from the loss of cholinergic neurons. Second, extracellular deposits of β-amyloid protein build up, owing to the catalytic action of β-secretase 1 (BACE1). Last, neurofibrillary tangles form, comprising tau protein in its phosphorylated form (Falco et al., 2016; Selkoe and Hardy, 2016; Gaugler et al., 2019). The formation of extracellular deposits of the β-amyloid peptide and the buildup of insoluble plaques in neurons are central to the amyloid hypothesis, which connects AD to this pathophysiological process. This process begins with the cleavage of the transmembrane amyloid precursor protein (APP) by the enzyme BACE1. Another enzyme, γ-secretase, completes this cleavage and generates the β-amyloid peptide (Aβ), which aggregates into oligomers. These oligomers form plaques that accumulate in many regions of the brain, mainly in neurons located in the entorhinal cortex, hippocampus, basal nucleus, and associative cortex (Sabbah and Zhong, 2016). A recently FDA-approved drug, Aduhelm (aducanumab), is among the few medications that address the amyloid hypothesis (FDA Grants Accelerated Approval for Alzheimer's Drug | FDA). This drug is a monoclonal antibody specifically designed to target aggregated forms of amyloid-beta, thereby reducing the accumulation of extracellular deposits of the β-amyloid peptide. BACE1 is one of the most important membrane-associated aspartate proteases targeted for treating AD (Ghosh et al., 2012; Kandalepas and Vassar, 2012; Yan and Vassar, 2014). The formation of the beta-amyloid peptide (Aβ) in AD can be terminated by inhibiting BACE1 (Boutajangout et al., 2011; Kwak et al., 2011; Yan and Vassar, 2014). The development of BACE1 inhibitors has been pursued for many years but has still not produced an established, effective treatment. However, continuous progress in this area has led to inhibitors with activities ranging from nanomolar to micromolar. Consequently, developing inhibitors of BACE1 remains a promising therapeutic approach for AD drug discovery. A pharmacophore is an arrangement of structural elements and molecular features related to biological activity (Wermuth, 2006). Lately, this term has become one of the most widely used concepts in drug discovery. As an established tool for drug design, pharmacophore-based virtual screening has proved valuable for lead optimization and hit identification in the preliminary phase of novel drug development (Gautam et al., 2023). The benefit of this method is that very large compound collections can be screened virtually for hit identification. Pharmacophore features are usually represented as points in 3D space. A pharmacophore feature can correspond to functional groups such as hydrogen bond acceptors (HBA), hydrogen bond donors (HBD), anions, cations, hydrophobic areas (Hyp), and aromatic rings (Dror et al., 2004; Hou et al., 2006). Caution should be exercised in handling structural flexibility when generating pharmacophores, since the active conformation of the molecules is hypothesized.

The study aimed to address the urgent need for effective treatments for AD, which is characterized by the accumulation of amyloid-beta plaques in the brain. BACE1, being a crucial protein involved in the production of amyloid-beta, represents a promising therapeutic target for AD. However, despite extensive efforts, existing BACE1 inhibitors have not been sufficiently effective in clinical trials.

The method of receptor- or ligand-based pharmacophore virtual screening includes a series of sequential computational steps: target identification, database preparation, pharmacophore model creation, 3D screening, and ranking of compounds for the final confirmation of biological activity (Köppen, 2009; Liu et al., 2023). Virtual screening provides a cost-effective, time-saving approach to the search for novel lead compounds (Muhammed and Esin, 2021). It is an integral part of the drug discovery pipeline and a vital procedure for finding hits or chemical probes (Kumar and Zhang, 2015; Leung and Ma, 2015). In this study, we performed pharmacophore-based virtual screening of commercially available databases against BACE1 for drug discovery against AD.

The study offers medicinal chemists, biochemists, and pharmacologists a promising avenue for advancing AD therapeutics through the identification and characterization of a novel compound, 66H, targeting BACE1. Through virtual screening and molecular docking, 66H emerged as a lead candidate, with subsequent molecular dynamics simulations confirming its stable binding to BACE1. The study also explored the key molecular interactions and assessed the compound's binding affinity using MM/GBSA analysis, providing crucial insights for further medicinal chemistry optimization and biochemical validation.

Pharmacophore hypothesis development

The Protein Data Bank (https://www.rcsb.org/) was used for retrieving the crystal structures of the BACE1 protein. The activity of the co-crystal ligands against BACE1 reported in previous studies was taken into consideration (Stamford and Strickland, 2013; Egbertson et al., 2015; Gupta et al., 2020). The receptor-ligand-based pharmacophore model was developed according to the inhibitor with the highest activity against BACE1. The pharmacophore hypothesis was generated with the Schrödinger Phase tool (Dixon et al., 2006). The protein binding pocket and ligand sites were targeted for building the hypothesis. Moreover, the receptor was prepared by following the steps in Section 2.4 before the hypothesis was developed.
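As a concrete, open-source illustration of the feature classes listed above (this sketch is ours; the study itself used the Schrödinger Phase tool), the RDKit snippet below enumerates donor, acceptor, aromatic and hydrophobic features for an example molecule. The SMILES string is a placeholder and not one of the screened compounds.

```python
# Illustrative sketch with RDKit (the study used Schrodinger Phase): enumerate
# pharmacophore-type features (donors, acceptors, aromatics, hydrophobes, ions).
import os
from rdkit import Chem, RDConfig
from rdkit.Chem import ChemicalFeatures

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # placeholder molecule (aspirin)

fdef = os.path.join(RDConfig.RDDataDir, "BaseFeatures.fdef")
factory = ChemicalFeatures.BuildFeatureFactory(fdef)

for feat in factory.GetFeaturesForMol(mol):
    # Family is e.g. 'Donor', 'Acceptor', 'Aromatic', 'Hydrophobe', 'PosIonizable'.
    print(feat.GetFamily(), feat.GetAtomIds())
```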
Preparation and virtual screening of database

The Vitas-M Laboratory database comprised 1.4 million compounds, from which 0.2 million compounds were selected, imported, and prepared in Phase (Dixon et al., 2006). Ten conformers were produced for each ligand to expand the chemical space searched. Epik generated the diverse likely ionization states at pH 7 (Shelley et al., 2007), while high-energy tautomeric states were eliminated from the database. Virtual screening of the prepared database was then initiated according to the developed hypothesis. The phase screen score was used to assess the screened hits, based on a combination of the volume score, RMSD, and site matching. Hits with phase scores >1.9 were selected for the molecular docking studies.

ADMET and drug-likeness

The selected commercial compounds were subjected to absorption, distribution, metabolism, excretion, and toxicity (ADMET) prediction in the QikProp module of Maestro, SwissADME (http://www.swissadme.ch), and ADMETlab 2.0 (https://admetmesh.scbdd.com/) to evaluate ADMET and drug-likeness parameters. Compounds that passed Lipinski's rule of five and the toxicity criteria were considered for further analysis. These tools were employed to comprehensively assess the pharmacokinetic and pharmacodynamic (ADMET) properties of the identified compounds: QikProp is known for its ability to predict a wide range of physicochemical properties, SwissADME specializes in offering insights into a compound's bioavailability and metabolic stability, and ADMETlab 2.0 mainly focuses on predicting potential adverse effects and toxicity profiles.

Molecular docking

The β-secretase 1 (PDB ID: 5HU0) crystal structure was prepared in Maestro (Madhavi Sastry et al., 2013). The receptor was preprocessed by adding hydrogens and charges, removing water molecules, and fixing the residue side-chain atoms. Redundant chains were eliminated, and protonation and tautomeric states at pH 7 were generated using PROPKA (Sadeer et al., 2019). The protein receptor was further refined and minimized with the OPLS_2005 force field (Shivakumar et al., 2012). The docking grid was created around the site of the co-crystallized ligand. To soften the potential for non-polar parts of the receptor, the van der Waals radii of the receptor atoms were scaled by 1.0, with the partial charge cutoff set to 0.25. The X, Y, and Z grid coordinates were 23.55, 10.39, and 21.58, respectively. After grid creation, the ligands were prepared with the LigPrep tool of Maestro before docking (Matsuoka et al., 2017). Diverse ionization states were produced at pH 7 by employing Epik (Shelley et al., 2007). Stereoisomers with defined chirality were generated using the OPLS_2005 force field. The Glide docking tool was used to dock the ligands to the prepared protein receptor, and the binding poses were evaluated according to the Glide score.
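The drug-likeness screen described above relies on commercial and web tools; as a rough, open-source stand-in (our illustration, not the workflow used in the study), Lipinski's rule of five can be applied to a list of SMILES strings with RDKit. The example SMILES below are placeholders, not actual screening hits.

```python
# Illustrative Lipinski rule-of-five filter with RDKit (the study used QikProp,
# SwissADME and ADMETlab 2.0; this open-source sketch is only an approximation).
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def passes_lipinski(smiles: str) -> bool:
    """True if the molecule violates none of Lipinski's rule-of-five criteria."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False
    return (Descriptors.MolWt(mol) <= 500
            and Descriptors.MolLogP(mol) <= 5
            and Lipinski.NumHDonors(mol) <= 5
            and Lipinski.NumHAcceptors(mol) <= 10)

hits = ["CC(=O)Oc1ccccc1C(=O)O", "c1ccc2[nH]ccc2c1"]  # placeholder hit list
print([s for s in hits if passes_lipinski(s)])
```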
Molecular dynamics (MD) simulation

The best binding poses of the selected hits and the reference complex with the protein were subjected to 100-ns MD simulations using NAMD (Phillips et al., 2020) and VMD (Humphrey et al., 1996) to assess their stability. As an initial step, the input files necessary to run the simulations were prepared with the modules of AmberTools 21 (Case et al., 2021). The antechamber module generated the ligand parameters, while the LEaP program (Case et al., 2005) added the missing hydrogen atoms to the protein structures. TIP3P water molecules (Jorgensen and Chandrasekhar, 1983) were added to the structures in a periodic box with 10 Å padding, and the systems were then neutralized by adding sodium cations. Energy clashes were eliminated by minimization, using the ff14SB force field for the protein and GAFF for the ligands (Duan et al., 2003). After minimization, the solvated systems were equilibrated for 10,000 steps, followed by temperature equilibration at 200, 250, and 300 K. The final equilibrated systems were then subjected to a 100-ns production run, and the trajectories were saved every 2 ps for evaluation. The Bio3D package of R was used to analyze the MD trajectories (Grant et al., 2021).

Alignment of protein structures

The Protein Data Bank was used to retrieve the crystal structures of the BACE1 protein. The literature was searched for the IC50 values of the co-crystal ligands. Among the selected ligands, the co-crystal ligand 66H showed the highest activity against the protease; thus, it was chosen for further study. The PDB IDs and the IC50 values are presented in Table 1.

Generation of the receptor-based pharmacophore

A five-feature pharmacophore model was created by choosing ligand sites and pocket residues with specific binding. The pharmacophore hypothesis comprised the features R9, R10, R11, D9, and D7, along with their coordinates in the protein structure (Table 2; Figure 1A). The binding pocket cavity is described in Figure 2B.

Virtual screening

The Vitas-M Laboratory library was screened virtually against the pharmacophore hypothesis. A compound had to match at least four features to be identified as a hit. The final ranking of the hits from the screening was based on the phase fitness score, which combines vector alignment, volume score, and site-matching RMSD. Vector scores range from −1 to 1, with higher scores indicating better alignment. Volume scores range from 0 to 1, with higher scores indicating greater overlap between the aligned ligand and the reference; the score is calculated as the overlapping volume of the two ligands divided by their total volume, and is zero if no reference ligand is present. A cutoff phase screen score of 1.9 was chosen, and potential hits above this threshold were retained (Table 3). The structures of the 84 hits are provided in Supplementary Figures S1-S84.

β-secretase 1 structure and sequence analysis

The sequence of the BACE1 precursor (P56817) was acquired from UniProt.
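The sequence-level statistics reported in the next subsection come from ProtParam; an equivalent calculation can be scripted with Biopython's ProtParam module, as sketched below (our illustration, not the workflow of the study). The sequence string is a truncated placeholder for the full P56817 sequence, and the aliphatic index is computed manually from the Ikai formula because ProteinAnalysis does not expose it directly.

```python
# Illustrative sketch with Biopython (the study used the ProtParam web tool).
from Bio.SeqUtils.ProtParam import ProteinAnalysis

sequence = "MAQALPWLLLWMGAGVLPAHGTQHGIRLPLR"  # truncated placeholder; use the full P56817 sequence
pa = ProteinAnalysis(sequence)

print("Molecular weight:", pa.molecular_weight())
print("Isoelectric point:", pa.isoelectric_point())
print("Instability index:", pa.instability_index())
print("GRAVY:", pa.gravy())

# Aliphatic index (Ikai, 1980): not provided by ProteinAnalysis, so compute it
# from the mole fractions of Ala, Val, Ile and Leu.
pct = pa.get_amino_acids_percent()  # fractions summing to 1
aliphatic_index = 100 * (pct["A"] + 2.9 * pct["V"] + 3.9 * (pct["I"] + pct["L"]))
print("Aliphatic index:", aliphatic_index)
```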
Physiochemical features

The physiochemical features of the BACE1 precursor sequence were determined with ProtParam. The amino acid profile of the BACE1 precursor showed 9.8% leucine residues, along with 8.6% glycine and 7.4% valine residues. The molecular weight was approximately 55,763.79 Da. There were 42 positively charged arginine and lysine residues and 55 negatively charged aspartate and glutamate residues. The isoelectric point (pI) of 5.31 indicates that the protein is slightly acidic, whereas the instability index of 44.23 suggests that it is somewhat unstable. This instability is predicted from the presence of certain dipeptides that are absent from stable proteins. The relatively high aliphatic index (88.14) indicates that the protein is moderately thermostable, while the low GRAVY score (−0.064) implies that the protein interacts well with water. The extinction coefficient at 280 nm is 85,425 M⁻¹ cm⁻¹, reflecting the content of cysteine, tryptophan, and tyrosine residues. This coefficient is valuable for studying protein-ligand and protein-protein interactions in solution.

ADMET

The computational tools QikProp, SwissADME, and ADMETlab 2.0 were used to predict a variety of physicochemical (Supplementary Table S1), medicinal chemistry (Supplementary Table S2), absorption and distribution (Supplementary Table S3), metabolism and excretion (Supplementary Table S4), and toxicity parameters (Supplementary Table S5) for a total of 84 distinct chemicals. The pharmacokinetic features of the ligands fell within acceptable ranges in the ADMET analysis. The ADMET properties show that all compounds have good pharmacokinetic characteristics and no notable predicted adverse effects, and their potential for medicinal use was considered positive.

Molecular docking

The hits were docked into the BACE1 receptor employing the standard precision procedure of the Glide tool. Before docking the screened hits with the protein, the reliability of this procedure was assessed by redocking the co-crystal ligand, which showed that the docked ligand aligned with the crystal structure (Figure 3A). The docked hits were compared with the reference ligand, and two hits were selected for further analysis. The selected hits and their Glide scores are given in Table 4. The molecular interactions of the chosen hits were analyzed and compared with those of the reference compound. The reference complex made three hydrogen bonds with Gly291, Asp93, and Asp289; one van der Waals interaction with Gly74; and three hydrophobic interactions (Figure 3B). In comparison, S1 made one hydrogen bond with Gly291 and five van der Waals interactions with Gln73, Gly74, Leu91, Trp176, and Ser290; the S1 complex also formed three hydrophobic interactions (Figure 3C). S2 likewise made one hydrogen bond with Gly291, three van der Waals interactions with Gly74, Asp93, and Ser290, and five hydrophobic interactions (Figure 3D). The plausible binding modes of the selected docked hits were also analyzed (Figures 4A-C).

MD simulation

The docked poses of the selected hit ligands were superposed on the co-crystal ligand, as shown in Figures 5A, B, and then subjected to MD simulation for the protein-ligand stability analysis.
Root mean square deviation (RMSD) values of the protein structures complexed with the reference compound and the hits were calculated from the trajectories to assess the stability of the protein-ligand complexes (Sargsyan et al., 2017). All complexes were equilibrated by 5 ns (Figure 6A). After equilibration, the RMSD of the reference complex maintained a value of ~1.5 Å until about 30 ns and then increased to 2.5 Å. Beyond the 30-ns mark, the RMSD stayed at approximately 1.75-2 Å until the end of the simulation. In comparison, the RMSD of the S1 complex increased gradually to ~2.5 Å by 15 ns, fluctuated slightly between ~2.5 and 3 Å until about 80 ns, and then stabilized toward the end of the simulation. The RMSD values of the S2 complex remained at approximately 1.25-1.5 Å throughout the simulation. The compactness of the protein structure when bound to the hit and reference compounds was evaluated using the radius of gyration (Rg) (Lobanov et al.). Lower Rg values indicate structural compactness, while higher Rg values indicate structural loosening during the simulation. The Rg profiles showed that the Rg values stayed within the range of approximately 20.75-21.5 Å after a 5-ns equilibration period. The Rg value for the S1 complex consistently remained near 21.5 Å during the simulation, whereas the Rg values for the S2 complex remained at approximately 20.75 Å. These stable Rg values imply that the protein structures remained compact throughout the simulation in the presence of these compounds (Figure 6B). The dynamic behavior of the protein residues when bound to these ligands was assessed using root mean square fluctuations (RMSFs) (Martínez, 2015). The RMSF values of the protein residues fluctuated by less than 1 Å throughout the simulation, except in the loop regions (Figure 6C). The RMSF plot indicates that the protein residues were rigid and did not show major fluctuations during the simulation, suggesting the stability of the protein-ligand complexes. The RMSF of the loop residues reached a maximum of ~5 Å in the reference complex.

The RMSF graph reveals some regions of high fluctuation within the protein structure, indicating areas of increased flexibility. Specifically, residues around index 100, between indices 200-210, and near index 300 exhibit significant peaks, with the peak around residue 300 being the most pronounced. There is also noticeable flexibility near residue 390. These fluctuations suggest that these particular residues are more dynamic, potentially due to specific interactions in the protein's structure.
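The RMSD and Rg profiles above were obtained with the Bio3D package in R; for readers who prefer Python, an equivalent analysis can be sketched with MDAnalysis (our illustration, not the authors' workflow). The topology and trajectory file names are placeholders.

```python
# Illustrative RMSD / radius-of-gyration analysis with MDAnalysis (the study used
# the Bio3D R package; this is an equivalent open-source alternative in Python).
import MDAnalysis as mda
from MDAnalysis.analysis import rms

u = mda.Universe("complex.prmtop", "production.dcd")  # placeholder file names

# Backbone RMSD relative to the first frame of the trajectory.
rmsd = rms.RMSD(u, select="backbone").run()
print(rmsd.results.rmsd[:5])  # columns: frame, time (ps), RMSD (Angstrom)

# Radius of gyration of the protein over the trajectory.
protein = u.select_atoms("protein")
rg = [protein.radius_of_gyration() for ts in u.trajectory]
print(rg[:5])
```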
Molecular mechanics/generalized Born surface area
The molecular mechanics/generalized Born surface area (MM/GBSA) approach was used to analyze the total binding free energy (ΔG_total) of both complexes. This quantity is typically used to evaluate the strength of a ligand-protein complex (Du et al., 2011); lower (more negative) ΔG_total values indicate a more stable complex, and vice versa. It was calculated as the difference between the free energy of the ligand-protein complex and the sum of the free energies of the isolated protein and ligand. The total binding free energy estimated with the MM/GBSA method is the sum of the contributions of several protein-ligand interaction terms, such as the electrostatic energy (ΔE_ele), the van der Waals energy (ΔE_vdW), and the electrostatic contribution to the solvation free energy from the generalized Born model (ΔG_GB). The total binding free energies are presented in Table 5. The ΔE_vdW contribution of the S1 complex was larger than those of the reference and S2 complexes, whereas the electrostatic contribution was largest in the reference complex. The GB term showed that the reference has a higher GB value than the hits. The total binding free energies of both hits were greater than that of the reference compound, as shown in the table. The total binding free energy and the contribution of each energy component to it are shown in Figure 7.

The structural assessment of STK346841 and STK122203 shows that both feature benzene rings as core structural elements, characteristic of many aromatic compounds (Figure 8). Each structure also incorporates a heterocyclic ring; specifically, a pyridine ring is present in both, indicating a common structural motif in which a nitrogen atom is integrated into a six-membered aromatic ring. This shared feature suggests a similarity in some aspects of their chemical reactivity and possible applications.

The post-simulation interaction analysis was performed at 0 ns and 100 ns (Figure 9). There was no significant difference between the 0-ns and 100-ns snapshots. All three complexes (reference-BACE1, S1-BACE1, and S2-BACE1) show a pi-pi stacked interaction with Tyr75, indicating a critical role of this residue in stabilizing the ligands through aromatic interactions. Similarly, both the reference-BACE1 and S1-BACE1 complexes exhibit pi-alkyl interactions with Leu34, suggesting that hydrophobic interactions with this residue are also important. Moreover, the reference-BACE1 complex forms conventional hydrogen bonds with Asp36 and Asp232, which are not present in S2-BACE1. S1-BACE1 retains the hydrogen bond with Asp36 and exhibits an additional one with Trp80.
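Before moving to the discussion, the MM/GBSA bookkeeping used above can be written compactly as follows. The nonpolar solvation term ΔG_SA is part of the standard MM/GBSA decomposition and is included here for completeness even though the text lists only the first three contributions; the configurational-entropy term, which is commonly neglected in this kind of ranking exercise, is omitted.

\[
\Delta G_{\mathrm{bind}}
  \;=\; G_{\mathrm{complex}} - \bigl(G_{\mathrm{protein}} + G_{\mathrm{ligand}}\bigr)
  \;\approx\; \Delta E_{\mathrm{vdW}} + \Delta E_{\mathrm{ele}}
              + \Delta G_{\mathrm{GB}} + \Delta G_{\mathrm{SA}},
\]

where more negative values of ΔG_bind correspond to more favorable binding.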
Discussion
The prevalence of neurodegenerative diseases has increased significantly in recent years, posing a major health concern. Various molecular targets are implicated in the pathogenesis of these diseases. This cluster of diseases, including AD and other related disorders such as spinal muscular atrophy (SMA), Parkinson's disease (PD), Huntington's disease (HD), spinocerebellar ataxia (SCA), prion disease, and motor neuron diseases (MND), has been reported to affect millions of people worldwide (Dugger and Dickson, 2017). BACE1 is an aspartate protease; this membrane-associated enzyme is a therapeutic target in AD (Ghosh et al., 2012; Kandalepas and Vassar, 2012). The production of beta-amyloid peptide (Aβ) in AD can be blocked by inhibiting BACE1 (Boutajangout et al., 2011; Kwak et al., 2011; Yan and Vassar, 2014). Although BACE1 inhibitors have been pursued for many years, no effective treatment has yet been established. However, continuous progress in this area has produced inhibitors with a wide range of activities, from nanomolar to micromolar. Consequently, developing BACE1 inhibitors remains an attractive therapeutic strategy for AD drug discovery.

In this study, out of 1.4 million compounds in the Vitas-M Laboratory database, 0.2 million compounds were selected, transferred, and prepared using the Phase tool (Dixon et al., 2006). Several groups of researchers have performed similar studies, for example for vaccine discovery (Stokes et al., 2020), for in silico drug repositioning in AD (Galeana-Ascencio et al., 2023), and for other neurodegenerative diseases (Ishola et al., 2021). Others have applied similar approaches to de novo drug design (Wang et al., 2022). The Protein Data Bank (https://www.rcsb.org/) was used to retrieve the crystal structures of the BACE1 protein. Previous studies reported the activity of the co-crystal ligands against BACE1, and this was taken into consideration (Stamford and Strickland, 2013; Egbertson et al., 2015; Gupta et al., 2020). The receptor-ligand-based pharmacophore model was developed from the inhibitor with the highest activity against BACE1, and the pharmacophore hypothesis was generated with the Schrödinger Phase tool (Dixon et al., 2006). Some previously reported BACE1 inhibitors, verubecestat (MK-8931) and its analog umibecestat (CNP-520), reached phase II/III clinical trials (Neumann et al., 2018; Thaisrivongs et al., 2018). However, these inhibitors were discontinued in February 2018 (Merck, 2018) and July 2019 (NIA, 2019), respectively, because they were associated with a decline in cognitive function in participants.

BACE1 harbors two aspartate residues within its extracellular protein domain (aa 93-96 and 289-292), both crucial for its protease function (Hussain et al., 1999). These residues are strategically located to facilitate the cleavage of APP at the β-site. In our study, both ligands S1 and S2 form hydrogen bonds with residues Asp93, Gly291, and Ser290.
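As a rough way to verify contacts such as the hydrogen bonds with Asp93, Gly291, and Ser290 reported above, one can run a simple distance-based polar-contact check over the trajectory. The sketch below reuses the same placeholder trajectory as the earlier analysis example and is an illustration only: the file names, the ligand residue name (LIG), the residue numbering, and the 3.5 Å cutoff are assumptions, and a full geometric hydrogen-bond analysis (donor-hydrogen-acceptor angles) would be more rigorous.

```python
import MDAnalysis as mda
from MDAnalysis.lib.distances import distance_array

u = mda.Universe("complex.prmtop", "complex_100ns.dcd")   # placeholder file names

# Polar (N/O) atoms of the ligand and of the catalytic-site residues;
# resids must match the numbering used in the topology file.
ligand = u.select_atoms("resname LIG and (name N* or name O*)")
site = u.select_atoms("(resid 93 290 291) and (name N* or name O*)")

CUTOFF = 3.5  # Angstrom; typical heavy-atom distance for a hydrogen bond

contact_frames = 0
for ts in u.trajectory:
    d = distance_array(ligand.positions, site.positions, box=u.dimensions)
    if (d < CUTOFF).any():
        contact_frames += 1

print(f"Polar contact with the catalytic-site residues in "
      f"{100.0 * contact_frames / u.trajectory.n_frames:.1f}% of frames")
```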
The protein-binding pocket and ligand sites were targeted for building the hypothesis. Pharmacophore-based virtual screening has proven beneficial for hit identification and lead optimization in the initial phase of new drug development programs (Gautam et al., 2023). The main advantage of this approach is that millions of compounds can be screened virtually for hit identification. Recently, virtual screening has become an obligatory part of the drug research and development pipeline and an essential technique for discovering hits or chemical probes (Kumar and Zhang, 2015; Leung and Ma, 2015). Virtual screening of the Vitas-M Laboratory library was performed. At least four features had to be matched for a compound to be identified as a hit. The final ranking of the screened hits was based on the Phase fitness score, which combines vector alignments, volume scores, and site-matching RMSD. The structures of the BACE1 protein were retrieved from the Protein Data Bank. The literature was searched for the IC50 values of the co-crystal ligands. As evident, the ligand of crystal structure 66H showed the highest activity against the protease among the ligands considered. The PDB identifiers, the ligand structures, and the corresponding IC50 values from other ligand studies are shown in Table 1. Recently, pharmacophore-based virtual screening and molecular docking studies of cyclin-dependent kinase inhibitors (CDKIs) have been reported (Shawky et al., 2021). Others have applied pharmacophore modeling techniques to protease inhibitor development (Pautasso et al., 2014). A group of researchers recently reviewed the general aspects of AI and ML from the perspective of drug discovery in the CNS (Gautam et al., 2023). It was found that the co-crystal ligand 66H has the highest activity against BACE1 and can potentially be considered an inhibitor in drug development.

Conclusion
BACE1 is one of the most important membrane-associated aspartate proteases targeted in AD. Several inhibitors of BACE1 have been introduced, but effective therapies are still unavailable. Here, we attempted to find the most effective inhibitors against BACE1 for drug development against AD. We downloaded and prepared 200,000 compounds from the Vitas-M Laboratory database for virtual screening. We generated 10 conformers for each ligand to enhance the search of chemical space. It was found that, among the studied ligands, the 66H crystal ligand exhibited the highest activity against the protein. Our study provides a new perspective on using 66H as an anti-BACE1 agent for drug development against AD.

FIGURE 1 Pharmacophore: (A) characteristics of the binding pocket and (B) pharmacophore hypothesis and the binding pocket cavity.
FIGURE 3 (A) Redocking of the reference compound. Orange sticks show the reference pose, and cyan sticks show the docked pose. (B) Molecular interactions of the reference compound. (C) Molecular interactions of S1. (D) Molecular interactions of S2. Green lines display hydrogen bonds; light green shows van der Waals interactions; magenta lines show hydrophobic interactions; purple lines show pi-sigma bonds; and pi-sulfur interactions are shown by orange lines.
FIGURE 4 Plausible binding modes of the reference and selected compounds, represented as sticks in the binding pocket of BACE1. (A) Reference compound. (B) S1 hit. (C) S2 hit.
FIGURE 7 Role of individual binding energy components within the overall binding free energy.
FIGURE 9 Post-simulation comparison of BACE1 interactions with the reference, S1, and S2.
TABLE 1 IDs of PDB, ligands, and the chemical mechanism of a co-crystal ligand against the BACE1 protein.
TABLE 2 Coordinates and scores for the features within the hypothesis of pharmacophores.
TABLE 3 Hit alignment and Phase screen scores of a pharmacophore model.
TABLE 4 Docking details of the selected and reference complexes.
TABLE 5 MM/GBSA components and the binding free energies.
v3-fos-license
2017-11-17T00:59:55.994Z
2014-08-01T00:00:00.000
19348490
{ "extfieldsofstudy": [ "Physics" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1088/1367-2630/16/8/085010", "pdf_hash": "57495ac5546f1e9fbc46ceeafe790f430eb577ff", "pdf_src": "IOP", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42170", "s2fieldsofstudy": [ "Physics" ], "sha1": "d0cbd7b39e8c19189cd87e4a9046faf24c8d0e4c", "year": 2014 }
pes2o/s2orc
Focus on dynamics of particles in turbulence

Studies on the dynamics of particles in turbulence have recently experienced advances in experimental techniques, numerical simulations and theoretical understanding. This 'focus on' collection aims to provide a snapshot of this fast-evolving field. We attempt to collect the cutting-edge achievements from many branches of physics and engineering in which the dynamics of particles in turbulence is the common interest. In this way, we hope not only to blend knowledge across disciplinary boundaries, but also to help identify the pressing, far-reaching challenges to be addressed in a topic that spans such a breadth.

Understanding the dynamics of dispersed particles in turbulent flows is of fundamental importance to astrophysicists who are studying the formation of planets, to cloud physicists and meteorologists who are predicting precipitation, to environmental policy makers who aim to prevent the occurrence of dust storms, and to engineers who strive to design the best cars. The problem, on the other hand, is by no means easy. Even a meaningful categorization requires some elaboration. Let us start with the dilute inclusion of passive particles. When the particles are small and their densities are comparable with that of the carrying fluid, they usually follow the local fluid motion. Particles of this type are used as fluid tracers in experimental studies of fluid turbulence. If any of these conditions is not fulfilled, the dynamics of the particles deviate from those of the fluid. Such particles are generically called 'inertial particles'. The situation becomes more complicated if the particles are 'active', e.g., if they can self-propel, such as zooplankton in the ocean, or if the particles exchange mass, momentum or energy with the carrying fluid, such as water droplets in clouds. For both passive and active particles, if the amount of inclusion is high, their presence will modify the underlying turbulent flow itself, which in turn feeds back on the dynamics of the particles. In facing such a challenging problem (or problems), our available weapons are rather limited. Accurate and detailed measurements have long been difficult. We are still disputing the exact form of the equation of motion for non-tracer particles. Most of what we know (or believe to know) today relies on analysis of simplified limiting cases (the 'point particle' approximation, for instance). However, the situation is changing rapidly. Very important progress has been achieved during the last decade. Advances in measurement techniques have given access to new data with unprecedented temporal and spatial resolution. Promising numerical approaches have emerged, and various theoretical analyses and models have been developed. We are very proud that the contributions to this 'focus on' collection present a balanced coverage of the specific fields involved and of the methods used (experimental, numerical and theoretical).

Methodological advances
The scientific advances in the comprehension of particle-turbulence interactions represented in this 'focus on' collection are naturally concomitant with new methodological developments, with which more complex situations can be investigated and subtler phenomena can be elucidated.

New methods in numerical simulations.
On the numerical aspects, new strategies for large-eddy simulations (LES) are proposed, with novel sub-grid scale models based on a stochastic differential equation to account for particle inertia [1] and coupling hybrid Eulerian-Lagrangian approaches [2], improving the capacity of LES to handle pair separation and collisions for point-like particles. A long-standing limitation of simulations of particles in turbulence was their insufficient capability to address the effects of finite particle size, as usual models for particle motion are based on the Maxey-Riley-Gatignol equation that was derived for particles with vanishingly small sizes [3,4]. New methods have emerged in the past few years in order to handle numerically the finite-particle-size effects by fully resolving the flow around particles with sizes larger than the dissipative scale of the carrier turbulent flow. Simulations based on one such method, the immersed boundary technique, are presented in [6].

New methods in theory. From the theoretical point of view, this 'focus on' collection exhibits several new approaches capable of addressing important key questions and comparing theories with experiments and numerics. These include new Lagrangian perspectives on the intermittency of both the velocity [7] in fluid turbulence and the magnetic field in magnetohydrodynamic (MHD) turbulence [8], and on the dynamics of rotating turbulence [9]. The non-trivial definition of the 'slip velocity' for finite-size particles is discussed in two contributions to this 'focus on' collection [6,10]. As a natural consequence of being a topic crossing several fields, many theories and mechanisms have been proposed to explain the dynamics of inertial particles in turbulence. It is very welcome to see the illustration of the differences and similarities between several leading theories on the spatial distribution [11] and the relative velocities between inertial particles [12], and the probing of the equivalence between several clustering scenarios [13].

New methods in experiments. On the experimental side, several important advances are also worth mentioning. The development of instrumented particles [14] now gives access to physical quantities (not only kinematic) in the Lagrangian frame, directly measured with sensors embedded in a moving particle. Although limited to relatively large particles (currently in the centimeter range), this new tool opens a whole new range of possibilities, e.g., to probe the fluctuations of temperature and chemical concentration along particle trajectories. After more than a decade of development, image-based particle tracking techniques have gained wide application. They were developed for measuring tracer trajectories in turbulence [7], but have been extended to study the dynamics of non-spherical solid particles [10] and gas bubbles [15], as well as to study the collision rate between water droplets in a turbulent air flow [16], an important but very challenging experimental task. At the same time, bias errors in new data analysis methods, such as using Voronoï tessellation of experimental images for preferential concentration diagnosis, are now well understood [17]. Laser Doppler velocimetry (LDV) coupled with particle size analysis, despite being a single-point measurement technique, has the advantage of simultaneously resolving particle velocity and size.
When used in a wind tunnel, it could therefore provide measurements of the spatial clustering of polydispersed inertial particles by invoking Taylor's frozen-turbulence hypothesis [18], which is not easily achievable with common particle tracking techniques.

In the following sections we briefly summarize the main results that can be found in this collection of papers, which we have organized into the following sub-topics:
• turbulent dynamics of fluid particles (Lagrangian turbulence),
• single particle dynamics of inertial particles and finite-size effects,
• collective dynamics of particles.

Turbulent dynamics of fluid particles (Lagrangian turbulence)
When the inertial effects of particles diminish, such as when particle sizes are much smaller than the Kolmogorov scale of the flow and when particle densities match that of the fluid, they follow the fluid motion faithfully. These particles are used extensively in modern turbulence experiments and flow visualizations. Investigating the dynamics of these particles provides us a direct handle on the Lagrangian properties of fluid turbulence. By studying the evolution of the probability density function (PDF) of the temporal velocity increments along particle trajectories, Wilczek et al [7] showed that the non-self-similar, non-Gaussian PDFs (or intermittency) evolve under the control of particle acceleration, a small-scale quantity, conditioned on velocity increments, an inertial-range quantity. This finding is in full agreement with the commonly accepted view that intermittency comes from the direct interaction between small and large scales, but it also clearly points to the interaction mechanism, at least for the velocity increments. Further insight on Lagrangian dynamics of turbulence can be obtained from multi-particle statistics. In the past, studies on the dynamics of pairs and tetrads of particles have for instance shed light on dispersion processes [19], and on the role of velocity gradients [20,21]. In this 'focus on' collection, Naso and Godeferd investigated tetrad dynamics numerically in the context of rotating turbulence [9], which led them to relate turbulence strain and enstrophy production with flow topology. The Zeman scale, at which the local eddy-turnover time and the rotation time scale are equal, was demonstrated to influence the multi-scale dynamics of rotating turbulence. Using direct numerical simulations (DNS), Homann et al [8] studied the Lagrangian properties of turbulence dynamics and the magnetic field in a Taylor-Green dynamo. Their result showed a significant impact of the magnetic field, with a strong increase of the correlation time of velocity and magnetic field fluctuations experienced by tracer particles, and an intermittent scaling regime of the Lagrangian magnetic field structure functions.

Single particle dynamics of inertial particles and finite-size effects
In spite of the apparent simplicity of the problem, full understanding of the turbulent dynamics of individual particles in turbulence has not emerged yet. The case of small, heavy particles, whose dynamics can be reasonably approximated using the linear Stokes equation (with the Stokes number as the only parameter characterizing particle inertia), has been extensively investigated numerically in recent decades using high-resolution DNS in homogeneous isotropic conditions; a minimal integration sketch of this point-particle model is given below.
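To make the point-particle model concrete, the following sketch integrates the linear Stokes-drag equation dv/dt = (u(x,t) − v)/τ_p for a single heavy particle in a prescribed velocity field. The simple steady cellular flow used for u is an illustrative stand-in, not one of the turbulent flows studied in this collection, and the flow time scale used to define the Stokes number is an assumption.

```python
import numpy as np

# Illustrative carrier flow: a steady 2D cellular field (a stand-in for turbulence).
def fluid_velocity(x, t):
    u = np.cos(x[0]) * np.sin(x[1])
    v = -np.sin(x[0]) * np.cos(x[1])
    return np.array([u, v])

tau_flow = 1.0          # characteristic flow time scale (assumed)
St = 0.5                # Stokes number
tau_p = St * tau_flow   # particle response time

dt, n_steps = 1e-3, 20000
x = np.array([0.3, 0.1])            # initial particle position
v = fluid_velocity(x, 0.0).copy()   # start at the local fluid velocity

for n in range(n_steps):
    t = n * dt
    # Linear Stokes drag: dv/dt = (u(x,t) - v) / tau_p ;  dx/dt = v
    a = (fluid_velocity(x, t) - v) / tau_p
    v = v + dt * a          # explicit Euler, adequate for a demonstration
    x = x + dt * v

print("final position:", x, "final velocity:", v)
```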
The knowledge accumulated from this canonical situation offers a solid ground for the development of new numerical strategies in more realistic flow configurations using LES, as mentioned above [1,2], and for addressing more complex situations where collective effects can arise (see section 5). Another challenge for a better understanding of particle dynamics concerns the effects of finite particle size. Past experimental results have revealed that these effects cannot be modeled as a simple filtering of the point-particle dynamics [22][23][24]. Numerical simulations including Faxén corrections have been shown to be accurate only for particles with diameters smaller than a few dissipative scales [25,26]. These studies therefore called for the development of dedicated numerical tools to investigate finite-particle-size effects [26][27][28][29]. In this 'focus on' collection, Kidanemariam et al [6] applied the immersed boundary method [27] to study particle transport in non-homogeneous turbulence (channel flow). They showed that the apparent lag of the particle dynamics compared to that of the carrier flow was due to the preferential distribution of particles in low-speed streaks. Their work revealed the necessity to redefine the notion of relative velocity between the particle and the fluid, or the 'slip velocity', for the finite-size case, due to local and global inhomogeneities at the scale of the particle. Several strategies, based on local averaging and on velocity fluctuations of the carrier flow in the vicinity of the particle, have been proposed in two separate articles of this 'focus on' collection [6,10]. From the experimental point of view, finite-size effects are of primary importance when measurements with instrumented particles are considered [14], which are at the centimeter scale at present due to technological limitations. Understanding the dynamics of finite-sized particles, for which we do not even have an appropriate equation of motion, is crucial to the interpretation of the information actually gathered by such particles.

Collective dynamics of particles
Collective dynamics of inertial particles is probably one of the richest topics of particle-turbulence interaction. The simplest manifestation of such collective effects is the preferential concentration phenomenon, the accumulation of inertial particles in certain regions of the flow due to the interaction between particles and turbulence structures. This inhomogeneous distribution further influences other processes such as particle mixing and dispersion, particle collision and coalescence, settling, flocking, etc. Many theories have been proposed for a quantitative description of preferential concentration in turbulent flows. Although they all consider only the linear Stokes drag on particles, the available theories differ in appearance, partly because of the different assumptions made and partly because of the complicated derivations involved. In two companion articles, Bragg and Collins [11,12] analyzed several popular theories on the spatial distribution of small inertial particles in homogeneous and isotropic turbulence and the relative velocities between them. They illustrated clearly the similarities and differences between these theories. By comparing with DNS results, they also showed the ranges of Stokes numbers in which individual theories stay valid. This 'unification' work is particularly useful in clarifying misconceptions and in identifying the mechanisms that cause the failure of individual theories.
In a similar spirit, Gustavsson et al [13] investigated the concentration fluctuations of particles in a random flow from kinematic simulations at various Kubo numbers, which characterizes the correlation time of the velocity field. They compared three mechanisms of particle collective dynamics: random uncorrelated motion, caustics and spatial clustering as a consequence of the deformation tensor, and showed in particular the equivalence of the last two. Enhancement of the droplet-droplet collision frequency by the interaction between water droplets and turbulence is believed to be a key mechanism in the acceleration of rain initiation in warm clouds [30,31]. Collision rate is directly related to the radial distribution function (RDF) and the radial relative velocity (RRV) between pairs of particles. Using high-resolution DNS up to Rλ ∼ 500, Rosa et al [32] investigated numerically the RDF and the RRV, addressing particularly their dependence on Reynolds number and gravity. The results suggested a saturation of the behavior of pair statistics at high Reynolds numbers, and a complicated effect of gravity on the collision rates of large particles (roughly corresponding to cloud droplets above 20 μm in diameter), while smaller ones are insensitive to gravity. Bordas et al [16] measured experimentally the collision rates between water droplets in wind tunnel turbulence, and compared experimental results with collision rates obtained theoretically that include: (i) only gravitational settling; (ii) gravitational settling and turbulence; and (iii) settling and turbulence and change of collision efficiency due to hydrodynamic interactions between particles when approaching. Although quantitative agreement was only partially supported, theories including both turbulence and collision efficiency gave closer predictions when confronted with experimental measurements. The role of turbulence on collision enhancement was also clearly established in the LES simulation by Riechelmann et al [2]. In real particle-laden flows, such as in clouds, the particle sizes are not uniform but distributed over a range. This polydispersity further complicates the interaction between particles, and between particles and turbulence. In this 'focus on' collection, the role of polydispersity was addressed in two companion articles by Saw et al [18,33], which combined theoretical, numerical and experimental studies of the radial distribution function for both monodisperse and polydisperse situations. Their study pointed out the leading role of dissipative motion on the clustering process (at least for particles with Stokes number below 0.3) and the necessity to correctly disentangle large scale mixing effects from the preferential concentration in experiments. The numerical investigation of a population of particles with two different Stokes numbers exhibited a saturation effect, which was limited by the least clustered population. Based on theories and simulations on polydispersed systems, they proposed a new analytical form for the radial distribution function for any distribution of particle sizes. Particles could also interact with the carrier fluid through phase change, for example, the condensation/evaporation of water droplets in clouds, during which the particles exchange both mass and thermal energy with the fluid. Kumar et al [34] investigated numerically the role of evaporation at the entrainment edge of clouds.
Their study illustrated the effect of the Damköhler number, Da, which compares the typical flow time scale to the typical evaporation time scale, on the evolution of the droplet size distribution: minimal broadening of the size distribution was observed when Da ≪ 1, while a strong negative skewness developed when Da ≫ 1. The collective dynamics become even more complex when particles are active. Khurana and Ouellette [35] studied the effect of environmental fluctuations, which could be random or have turbulence-like structures, on the stability and the dynamics of model particle flocks. Their surprising result was that even a low level of turbulence-like fluctuations was sufficient to destabilize flocks. This work revealed an unexpected impact of flow on collective animal motion, whose accurate modeling needs to take realistic background fluctuations into account.

Conclusion and perspectives
This collection of articles reflects significant progress achieved during the last decade in the understanding of particle-turbulence interactions. It provides a snapshot of this fast-evolving field, with the latest methodological developments (theoretical, numerical and experimental). Advances in Lagrangian measurement techniques (optical particle tracking, shadowgraphy, instrumented particles) now give access to new data with unprecedented resolution. New, promising numerical approaches have emerged. For instance, the dynamics of finite-size particles can be fully resolved and coupled with the DNS of the carrier flow without any a priori modeling. Various theoretical approaches have also been proven successful, including stochastic and PDF models, and analyses capable of giving new insights on relevant physical mechanisms, such as the polydispersity of particle sizes. Let us finish by noting that, compared to the breadth of the particle-turbulence interaction problem, this 'focus on' collection is far from being exhaustive. Although it highlights some of the most important latest developments, it only covers a small part of the full landscape of related ongoing research activities. There are many aspects for which new developments are still crucial. To name a few, almost all theoretical investigations and most of the numerical simulations of the dynamics of inertial particles consider only the Stokes drag. Some may include finite Reynolds number corrections and some may include the added mass. The effects of other terms, such as the history forces, however, have been largely ignored without solid justification. No comprehensive investigation of the consequences of all these simplifications exists at the moment, which might help explain why the few available numerical studies including these extra forces do not seem to give the same conclusion (see, e.g., [36][37][38]). This issue awaits clarification, most likely through extensive numerical simulations. On the front of measurement techniques, an important step forward concerns the ability to access simultaneous conditional diagnostics. For instance, it would be extremely useful if the velocities and sizes of all particles in the observation region could be simultaneously resolved, which would allow accurate study of the collision rates and would be invaluable for field measurements where the particle sizes are not under control.
Furthermore, simultaneously accessing the velocities of the particles and the local velocity of the carrier flow, as demonstrated for large neutrally buoyant particles in turbulent flows [39], will help in gaining a better insight into the coupling mechanisms between particles and the flow. These experimental challenges require the combination of several techniques (Lagrangian particle tracking, particle sizing, local tomographic or holographic methods around particles, etc). Given the rapid development we are experiencing, we are optimistic that all the issues mentioned above will be adequately addressed in the near future.
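As a closing illustration of one of the pair statistics discussed above, the sketch below estimates the radial distribution function g(r) for a set of particle positions in a periodic cubic box. The uniformly random positions are a placeholder for measured or simulated particle coordinates; for inertial particles one would expect g(r) > 1 at small separations (clustering), whereas this uniform sample gives a flat profile near 1.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 2 * np.pi                           # periodic box size
N = 1000                                # number of particles
pos = rng.uniform(0.0, L, size=(N, 3))  # placeholder particle positions

# All pair separations with the minimum-image convention (cubic periodic box).
diff = pos[:, None, :] - pos[None, :, :]
diff -= L * np.rint(diff / L)
dist = np.sqrt((diff ** 2).sum(axis=-1))
r_pairs = dist[np.triu_indices(N, k=1)]    # unique pairs only

# Histogram pair distances and normalize by the ideal-gas (uniform) expectation.
nbins, r_max = 50, L / 4
edges = np.linspace(0.0, r_max, nbins + 1)
counts, _ = np.histogram(r_pairs, bins=edges)
shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
ideal = 0.5 * N * (N - 1) * shell_vol / L ** 3   # expected pair counts for a uniform gas
g_r = counts / ideal
r_centers = 0.5 * (edges[1:] + edges[:-1])

print(np.c_[r_centers[:5], g_r[:5]])
```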
v3-fos-license
2017-09-16T04:20:51.713Z
2012-03-14T00:00:00.000
51855570
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://www.intechopen.com/citation-pdf-url/31102", "pdf_hash": "04661e569bf4bb281aa1e1be66019fe513526e80", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42171", "s2fieldsofstudy": [ "Medicine" ], "sha1": "dc8caf869ddee0afe592a5491a5e0fd6c0c1e047", "year": 2012 }
pes2o/s2orc
Role of ING Family Genes in Head and Neck Cancer and Their Possible Applications in Cancer Diagnosis and Treatment

Introduction
Cancer is one of the most common diseases that threaten human life. It produces huge psychological, economic and social burdens. Enormous preventive, diagnostic and therapeutic research efforts have been made to eradicate this deadly disease. Though some success has been obtained in terms of disease control for some neoplasms, such as certain lymphoma types, thyroid cancer and some other solid tumors such as breast cancer in the case of early detection, most cancers remain deadly. On the other hand, current treatment modalities, including surgery and/or chemoradiotherapy, cause substantial local damage to the tissues and thus decrease the quality of life, yet most tumors still show recurrence and metastasis, which also calls the efficiency of these treatments into question. Head and neck squamous cell carcinoma (HNSCC) is one of the most frequent cancers that lead to death, making it a major health problem in the world. HNSCC includes oral, oro/nasopharyngeal and laryngeal cancers and accounts for more than 644,000 new cases worldwide, with a mortality of 0.53 and a male predominance of 3:1 [1,2]. Despite advanced technology in the detection and treatment of HNSCC, it continues to pose a great threat to human life. Most patients suffering from this malignancy are at an advanced stage upon diagnosis, with 51% presenting regional metastasis and 10% distant metastasis. The 5-year relative survival rate is about 51% with regional metastasis and 28% with distant metastases [1,3]. Though much progress has been made in surgical techniques and chemoradiation protocols, little improvement in long-term survival has been achieved over several decades. In the current review, we will focus on a recently identified TSG group, the ING family tumor suppressors. Thus, this review will mainly cover alterations of the ING family tumor suppressors in human cancer and their possible applications in the molecular diagnosis and therapy of cancer. The five members of the ING family were recently identified by our group and other researchers in the field [11][12][13][14][15][16][17]. All proteins of the ING family contain a highly conserved plant homeodomain (PHD) finger motif in the carboxy (C)-terminal end that is commonly detected in various chromatin remodeling proteins [18,19]. The N-terminal part of each ING protein seems to be unique, which determines the structure and different functions of the various ING genes [11,12,16].
Although the exact functions of the ING family genes have not been clarified, the gene products have been reported to be involved in transcriptional regulation, apoptosis, cell cycle, angiogenesis and DNA repair through p53-dependent and -independent pathways. Moreover, ING family proteins have also been known to form complexes with histone acetyltransferases (HAT) and histone deacetylases (HDAC) [11,12,16,20]. Among the members, ING1 is the founding member, and thus most information about the family comes from research on ING1. ING1 was first isolated using subtractive hybridization between short segments of cDNAs from normal and a number of breast cancer cell lines [21]. These randomly fragmented cDNAs interfered with the activity of tumor suppressors by either blocking protein production through anti-sense sequences or abrogating function in a dominant-negative fashion through truncated sense fragments [21]. The other four members of the ING gene family have been identified through sequence homologies with ING1, followed by functional in-vitro and then in-vivo cancer patient tissue analysis [11,12,19,22,23]. We characterized the genomic structure of the ING1 gene and showed for the first time that the ING1 gene produces at least 4 mRNA variants from 3 different promoters. Two of these variants, p33ING1b consisting of exons 1a and 2, and p24ING1c consisting of a truncated p47ING1a message including the first ATG codon in exon 2, are the major expressed forms, while p47ING1a, consisting of exon 1b and exon 2, was not detected in head and neck tissues [13]. Our continuous efforts led to the identification of ING3 [14]. Following these works, other groups and our group published investigations on other members of the ING family, including ING2, ING4 and ING5 [15,16,[24][25][26][27][28][29][30][31][32]. Although almost all of the ING family members are known to be negative regulators of cell growth, recent studies have also demonstrated some of the members or splicing variants functioning as oncogenes, thus complicating the role of these genes in human carcinogenesis [31,32].

Disorders of ING family genes in human tumors
Rearrangement of the ING1 gene locus was demonstrated in one neuroblastoma cell line, and reduced expression was found in primary cancers and cell lines in early clinical studies at the time of ING1 cloning [11][12][13][14][15][16][17]21]. Following ING1 cDNA cloning, we identified the genomic structure of the human ING1 gene and showed its tumor suppressor character for the first time by finding its chromosomal deletion at the 13q34 locus and tumor-specific mutations in a number of head and neck squamous cell carcinoma (HNSCC) samples [13]. Regarding the mRNA expression status of the ING family genes, only a few studies exist in the literature. Toyama et al. detected 2-10-fold decreases in ING1 mRNA expression in 44% of breast cancers and in all of 10 breast cancer cell lines examined [33]. Interestingly, the majority of breast cancers showing decreased ING1 expression had metastasized to regional lymph nodes, whereas only a small subset of cancers with elevated ING1 expression compared to adjacent normal tissues were metastatic. Another study also revealed reduced expression in breast cancer samples [34].
Down-regulation of ING1 mRNA has also been demonstrated in various other cancer types, including lymphoid malignancies, gastric tumors, brain tumors, lung cancer, ovarian cancer and esophagogastric carcinomas, though no comprehensive clinical correlation was performed [11,12,17,20,[35][36][37][38][39][40][41][42][43]. Uncommon missense mutations and reduced protein expression of ING1 have also been detected in esophageal carcinomas [44] and colon cancer cell lines [36], while no mutation was detected in leukemia [37,45], oral cancers [46] and lymphoid malignancies [35]. As mechanisms for the loss of ING genes and their protein functions, loss of heterozygosity (LOH), promoter CpG hypermethylation and nucleo-cytoplasmic protein mislocalization have been proposed [11,12,17,20]. Using methylation-specific PCR, the p33ING1b promoter was found to be methylated and silenced in almost a quarter of all cases of primary ovarian tumors [42]. No differences or increased expression of ING1 were observed in recent studies of myeloid leukemia or melanoma [45,47]. Recently, reduced expression of ING2 mRNA as well as protein was observed in hepatocellular carcinoma (HCC) [48]. Decreased ING2 expression (but not ING2 mutation) has been observed in lung cancer [49]. A decrease of nuclear ING2 protein was observed in melanoma [50]. On the other hand, increased expression of ING2 mRNA was shown in colon cancer [51]. Moreover, ING2 may play a role in melanoma initiation, since reduction of nuclear ING2 has been reported in the radial as well as vertical growth phases in metastatic melanoma as compared to dysplastic nevi [52]. On the other hand, reduced ING2 expression was associated with tumor progression and shortened survival time in HCC [48]. These epidemiological studies suggest that ING2 loss or reduction may be important for tumor initiation and/or progression [11,12,17,20]. As shown for ING2, decreased nuclear ING3 protein expression was associated with a poor survival rate. The survival rate was 93% for the patients with strong nuclear ING3 staining, whereas it declined to 44% for the patients with negative-to-moderate nuclear staining [52]. In a recent study, we also demonstrated frequent deletion of the chromosomal locus of each ING family member, including ING3, in ameloblastomas [53]. ING4 mRNA was decreased in glioblastoma and associated with tumor progression [54]. Decreased ING4 has been associated with increased expression of IL-8 and osteopontin (OPN) in myeloma [11,55]. In both reports, decreased ING4 expression was associated with higher tumor grade and increased tumor angiogenesis. In myeloma, it was also associated with increased expression of interleukin-8 and osteopontin [11,55]. Expression of ING4 was decreased in malignant melanoma as compared to dysplastic nevi and was found to be an independent poor prognostic factor for the patients [56]. ING4 was found to suppress the loss of contact inhibition and growth. Moreover, some mutations and deletions were detected in cell lines derived from human cancers such as breast and lung [57]. Significantly reduced expression of ING4 was detected in gliomas as compared with normal human brain tissue, and the extent of reduction correlated with the progression from lower to higher grades of tumours [54]. Klironomos et al. investigated immunohistochemically the expression pattern of ING4, NF-kappaB and the NF-kappaB downstream targets MMP-2, MMP-9 and u-PA in human astrocytomas from 101 patients.
They found that ING4 expression was significantly reduced in astrocytomas and that it was associated with tumor grade progression. Expression of the NF-kappaB subunit p65 was significantly higher in grade IV than in grade III and grade I/II tumors, and a statistically significant negative correlation between expression of ING4 and expression of nuclear p65 was noticed [58]. Recently, Nagahama et al. reported that up-regulation of ING4 in a human gastric carcinoma cell line (MKN-1) promoted mitochondria-mediated apoptosis via the activation of p53 [59]. Both ING4 mRNA and protein expression were downregulated in hepatocellular carcinoma tissues, and the ING4 expression level correlated with prognosis and metastatic potential of hepatocellular carcinoma [60]. In another recent study, ING4 mRNA and protein expression were examined in gastric adenocarcinoma tissues and human gastric adenocarcinoma cell lines by RT-PCR, real-time RT-PCR, tissue microarray immunohistochemistry, and western blot analysis [61]. Their data showed that ING4 mRNA and protein were dramatically reduced in stomach adenocarcinoma cell lines and tissues, and were significantly lower in female than in male patients. The decrease of ING4 mRNA expression was found to correlate with the stage of the tumour [61]. Wang et al. examined ING4 protein expression in 246 lung cancer samples; overall reduced ING4 expression and higher ING4 expression in the cytoplasm than in the nucleus of tumour cells were detected, suggesting its involvement in the initiation and progression of lung cancers [62]. Examination of ING4 protein expression levels in colorectal cancer samples from 97 patients showed that ING4 protein was downregulated in adenoma relative to normal mucosa and further reduced in colorectal cancer tissues. The decrease of ING4 protein expression was also related to the more advanced Dukes' stages, and ING4 expression levels in patients with lymphatic metastasis were lower than in those without metastasis, suggesting that ING4 plays a role in colorectal carcinoma progression [63]. Xing et al. analyzed ING5 expression in gastric carcinoma tissues and cell lines (MKN28, MKN45, AGS, GT-3 TKB, and KATO-III) by Western blot and reverse transcriptase-polymerase chain reaction. An increased expression of ING5 messenger RNA was found in gastric carcinoma in comparison with paired mucosa, and lower expression of nuclear ING5 protein with cytoplasmic translocation was detected in gastric dysplasia and carcinoma compared with nonneoplastic mucosa [64]. Nuclear ING5 expression was negatively correlated with tumor size, depth of invasion, lymph node metastasis, and clinicopathologic staging, whereas cytoplasmic ING5 was positively associated with depth of invasion, venous invasion, lymph node metastasis, and clinicopathologic staging in colorectal carcinomas [65].

Abnormalities of ING family genes in head and neck cancer
At the time of ING1 cloning, deletion of chromosome 13q34 had been shown in head and neck cancer, but the ING1 gene was not known to be responsible for this deletion. Later, in a comprehensive study, our group demonstrated tumor-specific missense mutations in the ING1 gene and frequent deletion of the long arm of chromosome 13 for the first time in a human cancer [13]. Of 34 informative cases of head and neck squamous cell carcinoma, 68% of tumors showed loss of heterozygosity at chromosome 13q33-34, where the ING1 gene is located. By this study, ING1 has been recognized to be an important TSG, at least in head and neck cancer.
These mutations were found in the PHD zinc finger domain and the putative nuclear localization signal, which may abrogate the normal function of the ING1 protein (Figure 3). Following this study, our group led most of the research on ING family genes in head and neck cancer [66]. However, analysis of esophageal cancer, which displays some similarities, especially with hypopharyngeal cancer, also demonstrated somatic mutations in ING1, supporting our results [44]. We recently demonstrated that frequent deletion of the ING2 locus at 4q35.1 is associated with advanced tumor stage in HNSCC [67]. LOH was detected in about 55% of the informative samples, and high LOH frequency was statistically associated with advanced T stage, suggesting that ING2 LOH might occur at late stages during HNSCC progression. On the other hand, positive node status (N) appeared to be the only independent prognostic factor for both overall and disease-free survival. We showed frequent allelic loss of ING3 in HNSCC [14]. We analysed LOH at the 7q31 region in 49 HNSCC by using six polymorphic microsatellite markers and found allelic deletion in 48% (22/46) of the informative cases. We detected two preferentially deleted regions, one around D7S643 and the other around D7S486. When we redefined the map of the 7q31 region according to the contiguous sequences, a recently identified gene, ING3, was found in the proximity of D7S643. However, ING3 mutation was very rare in our study (a sole missense mutation of ING3 at codon 20). In another recent study of ours using a large study population, about half of the 71 tumor samples demonstrated downregulation of ING3 compared to their matched normal counterparts. We revealed that down-regulation of ING3 was more evident in late-stage tumors as compared with early-stage patients, and patients with low ING3 mRNA expression demonstrated worse survival rates as compared to the patients with normal-high ING3 expression [68]. We also examined p53 mutation status and investigated its relationship with ING3, as well as its clinicopathological characteristics. Although most clinicopathological variables were not significantly related to ING3 downregulation or p53 mutation status, a significant relationship was detected in terms of overall survival between the cases with low and normal-to-high ING3 expression. At 5 years of follow-up, approximately 60% of the patients with normal-to-high ING3 expression survived, whereas this was 35% in the patients with low ING3 expression. Multivariate analysis also showed downregulation of ING3 as an independent prognostic factor for poor overall survival. These results reveal that ING3 may function as a potential tumor suppressor molecule and that low levels of ING3 may indicate an aggressive nature of head and neck cancer. We analyzed loss of heterozygosity at the 12p12-13 region in 50 head and neck squamous cell carcinomas by using six highly polymorphic microsatellite markers and found allelic loss in 66% of the informative cases. To identify ING4 function, mutation analysis was performed. Though no mutation of the ING4 gene was found in head and neck cancers, the mRNA expression level examined by quantitative real-time RT-PCR analysis demonstrated decreased expression of ING4 mRNA in 76% of primary tumors as compared to matched normal samples. Since p53-dependent pathways of other ING family members have been shown, we examined p53 mutation status and compared it with ING4 mRNA expression in tumor samples. However, no such direct relationship was detected.
In conclusion, frequent deletion and decreased mRNA expression of ING4 suggest it as a class II tumor suppressor gene that may play an important role in head and neck cancer [15]. In a recent study, nuclear expression of ING4 was found to gradually decrease from noncancerous epithelium and dysplasia to HNSCC and was negatively correlated with a poorly differentiated status, T staging, and TNM staging in HNSCC. On the other hand, cytoplasmic expression of ING4 was significantly enhanced in HNSCC and was significantly associated with lymph node metastasis and 14-3-3η expression. Moreover, nuclear expression of ING4 was positively correlated with p21 and p300 expression and with the apoptotic index. Their results suggested that the decreases in nuclear ING4 and the cytoplasmic translocation of ING4 protein play important roles in tumorigenesis, progression and tumor differentiation in HNSCC [69]. Our group reported the first study linking the ING5 chromosome locus to a human cancer. We demonstrated a high ratio of LOH in oral cancer using 16 microsatellite markers on the long arm of chromosome 2q21-37.3 [24]. ING5 appeared to be a strong candidate tumor suppressor in this study, though several other candidate TSGs, including ILKAP, HDAC4, PPP1R7, DTYMK, STK25 and BOK, are also localized in the area where frequent deletion has been detected [11,12,24]. Moreover, our recent study revealed decreased expression of ING5 mRNA and mutations in oral cancer samples as compared to their corresponding normal controls, suggesting its tumor suppressive role in cancer [25]. Examination of 172 cases of HNSCC for ING5 protein by immunohistochemistry using tissue microarray, and of 3 oral SCC cell lines by immunohistochemistry and Western blot, detected a decrease in nuclear ING5 localization and cytoplasmic translocation, supporting the tumor suppressive role of ING5 in this cancer.

Possible applications of ING family genes in molecular diagnosis and therapy of cancer
So far, most of the studies on possible applications of ING family genes in the molecular diagnosis and therapy of cancer involve cancer types other than head and neck cancer. However, ING family genes are expressed ubiquitously and are involved in the carcinogenesis of many cancer types, especially in head and neck carcinogenesis. Thus, the following section of the review is included as a model for the possible application of ING family genes as diagnostic and therapeutic targets.

Sub-cellular localization of ING proteins as a biomarker
Most tumor suppressors contain nuclear transport signals that facilitate their shuttling between the nucleus and the cytoplasm. This type of dynamic intracellular movement not only regulates protein localization, but also often impacts function. Shuttling of tumor suppressor proteins between nucleus and cytoplasm has been reported to be involved in the regulation of cell cycle and proliferation. Deregulation of the nucleocytoplasmic cargo system results in the mislocalization of TSG proteins, which then alters their function [71]. The mistargeting of tumor suppressors can have direct cellular consequences and potentially lead to the initiation and progression of cancer. Abnormalities in the nucleocytoplasmic cargo system leading to the mislocalization of tumor suppressors have been reported for p53, BRCA1, APC, VHL, BRG1 and ING1, and these abnormalities, driven by genetic and epigenetic alterations in the tumor suppressors or their partners, generally occur during the carcinogenic process [72][73][74][75][76].
For ING1, 2 of the 3 different tumor-specific somatic mutations that we detected in head and neck cancer were located at or near the nuclear targeting domain, which could possibly abolish its functions through accumulation of the protein in the cytoplasm instead of in the nucleus [13]. In a recent study, Nouman et al. reported that translocation of p33ING1b from the nucleus into the cytoplasm of melanocytes may have an important role in the development and progression of melanomas [77]. Immunostaining with the new monoclonal antibodies (MAbs) GN1 and GN2 showed that the ING1b product, a nuclear protein, accumulated in the cytoplasm and that this was closely associated with malignant melanoma development. The authors suggested that detection of this subcellular mobilization with an ING1b MAb may be an early indicator and could be of value in the diagnostic approach. In another study by Nouman et al., nuclear expression of p33 (ING1b) was decreased in breast cancer cells, both in intensity and in the proportion of cells stained. Reduction in nuclear expression of ING1 protein was associated with enhanced cytoplasmic p33 (ING1b) expression in a considerable number of cases. Those cases showing p33 (ING1b) protein mislocalization were also associated with more poorly differentiated tumors. Thus, the authors suggested that p33 (ING1b) expression could be used as a marker of differentiation in invasive breast cancer. These results support the view that loss of p33 (ING1b) in the nucleus may be an important molecular event in the differentiation and pathogenesis of invasive breast cancer [78]. Similarly, loss of nuclear expression of p33 (ING1b) was detected in 78% of cases of acute lymphoblastic leukemia (ALL). This loss in nuclear expression was associated with increased cytoplasmic expression of the protein. Kaplan-Meier survival analysis demonstrated a trend towards a better prognosis for patients with tumors that had lost nuclear p33 (ING1b), suggesting that the loss of nuclear p33 (ING1b) expression may be an important molecular event in the pathogenesis of childhood ALL and can be used as a biomarker for prognosis [79]. In another similar study, Vieyra et al. demonstrated that sub-cellular mislocalization of p33ING1b is commonly seen in gliomas and glioblastomas [80]. Overexpression and aberrant localization of ING1b into the cytoplasm were observed in all of the 29 brain tumors. p33 (ING1b) normally contains a nuclear targeting sequence [11,12,16]. It has been previously demonstrated that altered sub-cellular localization of p33 (ING1b) abrogates its proapoptotic functions [81]. Loss of the targeting domains that ensure the proper intracellular localization of p33 (ING1b), or of its physical association with p53, could account for the abnormal localization of p33 (ING1b) in cancer. Recent experimental observations, including the post-translational stabilization of p53 by p33 (ING1b) [82] and the discovery of the p53-associated parkin-like cytoplasmic-anchoring protein PARC [83] and its p53-regulatory role, support the possibility that association of ING proteins with p53 could account for the abnormal localization. Further studies in this field will clarify this point. For the normal function of ING1, the protein should be in the nucleus. ING1b protein phosphorylated on the serine residue at position 199 has been reported to bind 14-3-3 proteins and subsequently be exported from the nucleus [84].
It has recently been shown that ING1 also binds karyopherin proteins and that disruption of this interaction affects the subcellular localization and activity of ING1 as a transcriptional regulator [84]. For ING1, few studies exist regarding its subcellular localization. For the other members of the ING family, it mostly remains unknown, and only a few studies exist on subcellular alterations during carcinogenesis. Similar to the study of Nouman et al. [77], nuclear ING3 expression was found to be remarkably reduced in malignant melanomas compared with dysplastic nevi, and this was significantly correlated with an increased ING3 level in the cytoplasm. Moreover, the reduced nuclear ING3 expression was significantly correlated with a poorer disease-specific 5-year survival of the patients with primary melanoma, especially for high-risk melanomas, with the survival rate falling from 93% for patients with strong nuclear ING3 staining in their tumor biopsies to 44% for those with negative-to-moderate nuclear ING3 staining. Interestingly, the multivariate Cox regression analysis revealed that reduced nuclear ING3 expression is an independent prognostic factor to predict patient outcome in primary melanomas [85]. By using tissue microarray technology and immunohistochemistry, ING2 expression in human nevi and melanoma biopsies was examined. The data showed that nuclear ING2 expression was significantly reduced in radial and vertical growth phases, and in metastatic melanomas, compared with dysplastic nevi. Reduced ING2 has been suggested as an important indicator in the initiation of melanoma development [86]. In a recent study, the subcellular localization of ING4 was shown to be modulated by two wobble-splicing events at the exon 4-5 boundary, causing displacement from the nucleolus to the nucleus. The authors provided evidence that ING4 was degraded through the ubiquitin-proteasome pathway and that it is subjected to N-terminal ubiquitination. It has also been demonstrated that nucleolar accumulation of ING4 prolongs its half-life, whereas lack of nucleolar targeting potentially increases ING4 degradation. Taken together, the data of this work suggested that the two wobble-splicing events at the exon 4-5 boundary influence the subnuclear localization and degradation of ING4 [87]. ING4 has been reported to interact with a novel binding partner, liprin alpha 1, which results in suppression of cell spreading and migration [88]. Liprin α1/PPFIA1 (protein tyrosine phosphatase, receptor type f polypeptide) is known to be a cytoplasmic protein necessary for focal adhesion formation and axon guidance. Cytoplasmic ING4 may regulate cell migration through interacting with liprin α1 and, with its known anti-angiogenic function, may prevent invasion and metastasis. This interaction could explain what distinguishes ING4 from the other ING proteins. In summary, the sub-cellular localization of ING proteins or their interaction partners can be detected with various molecular and immunohistopathological methods and may be used as a biomarker for the behavior of the tumor and for prediction of disease progress.

Genetic and epigenetic alterations of TSGs as prognostic biomarkers
Alterations in the allelic status and the mRNA and/or protein expression of the ING family genes provide potential usage of these genes as biomarkers in human cancer. The relation between genetic alterations of various genes and clinical outcome has recently been investigated.
Since only a few studies on the ING family genes exist in the literature, we will first give examples reported for other genes and then summarize those published for the ING tumor suppressors. In one such study, FHIT gene methylation was found to be a prognostic marker for progressive disease in early lung cancer [89]. Methylation and LOH analysis of the FHIT gene showed that loss or reduced FHIT expression was significantly associated with the squamous cell carcinoma type and with smokers. Methylation in normal-appearing lung mucosa was also related to an increased risk of progression to lung cancer, suggesting that FHIT can be used as a biomarker for this cancer type. In another report, allelic loss at 3p and 9p21 was related to an elevated risk of malignant transformation of premalignant lesions in head and neck cancer [90]. Similarly, LOH at 8p was a predictor of long-term survival in hepatocellular carcinoma [91]. Another study, using p16 expression and LOH at 9p21, highlighted the prognostic role of p16 in predicting the recurrence-free probability in patients affected by low-grade urothelial bladder carcinoma and showed that the method could be used in everyday urologic clinical practice to better describe the natural history of urothelial bladder carcinomas [92]. LOH at 16q23.2 was shown to be a predictor of disease-free survival in prostate cancer [93]. Our group has recently demonstrated that deletion at chromosome 14q is associated with poor prognosis in head and neck squamous cell carcinomas [1]. We also showed that frequent deletion of the ING2 locus at 4q35.1 is associated with advanced tumor stage in head and neck squamous cell carcinoma [67]. Interestingly, in our study, deletion at the Dickkopf (dkk)-3 locus (11p15.2) was related to lower lymph node metastasis and better prognosis in head and neck squamous cell carcinomas, suggesting the different nature of this gene, yet also its potential use as a prognostic biomarker [94]. Detection of a gradual increase in mRNA expression of the DNA replication-initiation proteins from epithelial dysplasia (from mild through severe) to squamous cell carcinoma of the tongue has been used as a biomarker to distinguish precancerous dysplasia from SCC and is useful for early detection and diagnosis of SCC as an adjunct to clinicopathological parameters [95]. In a recent work, we demonstrated downregulation of TESTIN and its association with cancer history and a tendency toward poor survival in head and neck squamous cell carcinoma [96]. Increased serum midkine concentrations were strongly associated with poor survival in early-stage oral squamous cell carcinoma, suggesting midkine as a useful marker not only for cancer screening but also for predicting the prognosis of OSCC patients [97]. Information from the human genome project has shown that many genes, including cancer-associated genes, undergo alternative splicing. In one such study, deregulation of survivin splicing isoforms was shown to have significant implications for tumor aggressiveness and prognosis [98]. Some members of the ING family also have splicing variants. Although detailed studies of these variants are not yet available, their deregulation may have an impact on carcinogenesis. In our work, the two major variants of ING1 (p33ING1 and p24ING1) revealed different expression patterns. Our research has identified alternative splicing variants for ING1, ING3, ING4 and ING5 [13][14][15]25]. For ING2, a recent study reported two splicing variants [31,32].
Though both of them showed decreased expression in head and neck cancer tissues compared to their normal counterparts, methylation analysis demonstrated that only the p33ING1 variant was associated with methylation (Gunduz et al., unpublished data). Not only single-gene alterations associated with clinical outcome but also genome-wide or microarray studies have been examined. In one such study, genome-wide transcriptomic profiles obtained for 53 primary oral cancers and 22 matching normal tissues identified sets of up-regulated and down-regulated genes. In conclusion, that study provided a transcriptomic signature for oral cancer that may lead to a diagnostic or screening tool [99]. In a recent study, the expression levels of ITGA3, ITGB4 and ITGB5, after functional normalization by desmosomal or cytoskeletal molecule genes, were shown to be candidate biomarkers for cervical lymph node metastasis or for the outcome of death in oral cancer [100]. Another recent study, using comparative genomic hybridization and DNA microarray, identified allelic deletion of ING1 as a novel genomic marker related to progression to glioblastoma [101]. In another study, low levels of ING1 mRNA were reported to be significantly associated with poor prognosis in neuroblastoma [102]. The expression level of ING1 was also closely related to survival. These results suggest that a decreased level of ING1 mRNA and/or protein expression could be an indicator of poor prognosis in advanced stages and/or of poor survival in various human tumors. On the other hand, an analysis of the association between p33ING1b protein expression and clinical outcome in colorectal cancer demonstrated that, although patients with decreased p33ING1b protein expression in the tumor had shorter overall and metastasis-free survival than patients with normal p33ING1b protein expression, no statistical significance was achieved [103]. However, a significant association between p53 mutation status and overall and metastasis-free survival was found. Regarding the ING2 gene, reduced mRNA as well as protein expression was shown to be associated with tumor progression and shortened survival time in HCC [48]. Recently, our group reported that a high LOH frequency at the ING2 locus at 4q35.1 was significantly associated with advanced tumor stage in HNSCC, suggesting that ING2 LOH might occur at later stages of HNSCC progression [67]. Hence, the relevance of ING2 in HNSCC carcinogenesis and the potential prognostic significance of ING2 are promising subjects for future studies. Several recent studies examined the correlation between ING3 protein expression and clinicopathological variables [52,54]. Interestingly, a significant reduction of nuclear ING3 was detected in human malignant melanoma, indicating the status of ING3 as a prognostic and therapeutic marker for melanoma [52]. As shown for ING2, decreased nuclear ING3 protein expression was also associated with a poorer 5-year survival rate. The survival rate was 93% for patients with strong nuclear ING3 staining, whereas it decreased to 44% for patients with negative-to-moderate nuclear staining. We have recently reported the mRNA expression of ING3 in HNSCC and compared it with the clinicopathological characteristics to evaluate its prognostic value as a biomarker [14,68]. This study revealed that downregulation of ING3 was more evident in late-stage tumors than in early-stage cases.
Analyses also showed that down-regulation of ING3 could be used as an independent prognostic factor for poor overall survival, and that low levels of ING3 may indicate an aggressive nature of HNSCC. Recently, the correlation of ING4 with patient survival and metastasis revealed it to be a potential prognostic marker in melanoma [56]. ING4 expression was found to be significantly decreased in malignant melanoma compared with dysplastic nevi, and overexpression of ING4 inhibited melanoma cell invasion compared with the control.
ING genes as chemosensitivity markers
Overall survival of head and neck squamous cell carcinoma patients has not improved in recent decades. Current treatment strategies for this cancer are based on the tumor-node-metastasis (TNM) classification. However, due to the extreme biological heterogeneity of the cancer cells, treatment planning, especially for chemoradiotherapy, is quite difficult. Chemotherapy is an important therapeutic modality for cancer, and identification of the genes that predict the response of cancer cells to these agents is critical for treating patients more efficiently. Although clinical determinants such as the TNM classification will remain important, molecular markers are now beginning to make it possible to elucidate biological information about host and tumor, to break through the molecular heterogeneity and eventually to optimize the choice of treatment [104]. In a recent analysis of chemosensitivity prediction, it was reported that examining the TP to DPD ratio of their tumors could identify the HNSCC patients who would benefit most from capecitabine-based chemotherapy. Moreover, the potential role of TP gene therapy in manipulating the TP to DPD ratio to optimize the tumoricidal effect of capecitabine was demonstrated [105]. In another similar study, acquired (10-fold) resistance of Cal27, a tongue cancer cell line, against cisplatin was shown to be associated with decreased DKK1 expression, and this resistance could be partially reversed by DKK1 overexpression, suggesting DKK1 and the WNT signaling pathway as a marker of, and target for, cisplatin chemosensitivity [106]. Recent findings suggest that the ING genes might also have a role in regulating the response of cancer cells to chemotherapeutic agents. In the osteosarcoma cell line U2OS, one of the ING1 splicing variants, p33ING1b, prominently enhanced etoposide-induced apoptosis through p53-dependent pathways [107]. In another study by the same authors, ectopic expression of p33ING1b was shown to upregulate p53, p21WAF1 and bax protein levels and to activate caspase-3 in taxol-treated U2OS cells. That study thus demonstrated that p33ING1b increases taxol-induced apoptosis through a p53-dependent pathway in human osteosarcoma cells, suggesting that p33ING1b may be an important marker and/or therapeutic target in the prevention and treatment of osteosarcoma [108]. Tallen et al. [39] asked whether p33ING1 mRNA expression correlates with the chemosensitivity of brain tumor cells. They found that, unlike in other tumor types, ING1 levels were higher in glioma cell lines than in normal control cells. Medulloblastoma cells showed the lowest ING1 expression of the lines tested. Comparing all cell lines, p33ING1 gene expression significantly correlated with resistance to vincristine, suggesting that p33ING1 mRNA levels may be used to predict the chemosensitivity of brain tumor cells to vincristine.
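The prognostic claims reviewed in this section (for example, nuclear ING1b or ING3 staining versus survival, and "independent prognostic factor" statements) rest on standard survival statistics: Kaplan-Meier curves compared with a log-rank test, plus Cox regression when independence from other covariates is asserted. The sketch below illustrates that workflow in Python with the lifelines library; the dataset, column names and group labels are hypothetical and are not taken from the cited studies.

```python
# Minimal sketch of the survival statistics behind the biomarker studies cited above:
# a Kaplan-Meier comparison of patients grouped by nuclear ING staining and a log-rank
# test. The data and column names are hypothetical, not taken from the cited papers.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "months":  [12, 34, 8, 60, 45, 22, 15, 50, 9, 40],   # follow-up time
    "event":   [1, 0, 1, 0, 0, 1, 1, 0, 1, 0],           # 1 = death, 0 = censored
    "nuclear": [0, 1, 0, 1, 1, 0, 0, 1, 0, 1],           # 1 = strong nuclear staining retained
})

km = KaplanMeierFitter()
for label, group in df.groupby("nuclear"):
    km.fit(group["months"], group["event"], label=f"nuclear={label}")
    print(label, km.median_survival_time_)

lost = df[df["nuclear"] == 0]
kept = df[df["nuclear"] == 1]
res = logrank_test(lost["months"], kept["months"],
                   event_observed_A=lost["event"], event_observed_B=kept["event"])
print("log-rank p =", res.p_value)
```

A multivariate claim such as "independent prognostic factor" would additionally require a Cox proportional hazards model (for example, lifelines' CoxPHFitter) fitted with the other clinicopathological covariates included.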
The tumor suppressor ING1 shares many biological functions with p53, including cell cycle arrest, DNA repair, apoptosis and chemosensitivity. To investigate whether the p33ING1 isoform is also involved in chemosensitivity, Cheung et al. overexpressed p33ING1 in melanoma cells and examined cell death after treatment with camptothecin. Results from the survival assay and flow cytometry analysis showed no significant difference among cells transfected with vector, p33ING1 or antisense p33ING1, indicating that p33ING1 does not enhance camptothecin-induced cell death in melanoma cells. Moreover, co-transfection of the p33ING1 and p53 constructs also had no effect on the frequency of cell death. Thus, the influence of ING1 expression on chemosensitivity may differ depending on the cancer type [109]. In another work, down-regulation of ING1 in the p53-deficient glioblastoma cell line LN229 increased apoptosis following treatment with cisplatin, indicating that reduced ING1 expression may predict the sensitivity of cancer cells to chemotherapy independent of their p53 status [110]. ING4 has similarly been reported to increase cell death upon exposure to some DNA-damaging agents, such as etoposide and doxorubicin, implying that ING4 could enhance chemosensitivity to certain DNA-damaging agents in HepG2 cells [111]. In another study, the chemopreventive agent curcumin (diferuloylmethane) induced ING4 expression during cell cycle arrest in a p53-dependent manner in glioma (U251) cells [112]. Therefore, ING4 has been suggested to have a possible role in the signaling pathways of chemotherapeutic agents.
Applications of the ING family for gene therapy
Cancer still poses a great threat to human life, and classical treatment modalities have so far failed to eradicate it. Developments in human genome technology and progress in our knowledge of the genes have provided alternative methods, such as gene therapy, to cure this fatal disease. Currently, researchers are working on several basic approaches to treating cancer with gene therapy. Some of these approaches target healthy cells, i.e. immune system cells, to enhance their ability to fight cancer. Other approaches directly involve the cancer cells, to destroy them or at least to stop their growth. The latter method usually involves restoration of tumor suppressor genes. In tumor cells, ING transcript levels are now known to be often downregulated, although mutations are very rare. However, as explained in the sections above, it is now known that inactivation of the ING family genes at the genetic and epigenetic levels has a major role in the carcinogenesis of various neoplasms. Considering the involvement of the ING tumor suppressors in many cancer types, the ING family genes can be regarded as potential targets for molecular therapy in human cancer. However, only a few preclinical studies exist to evaluate this potential. Thus, this section will give only an outline and possible speculations on the use of these genes in cancer therapy. Regarding gene therapy with ING family genes, a few in vitro studies have been reported in the literature. In 1999, the introduction of the ING1 gene using virus vectors was reported as a pioneering and promising approach for the treatment of brain tumors [113]. Although adenovirus-mediated introduction of the isolated ING1 transcript alone inhibited the growth of glioblastoma cells, combined transduction of p33ING1 and p53 synergistically enhanced apoptosis in these cells [113], suggesting that ING1 may function as a proapoptotic factor as well as enhancing the effect of p53.
Another study showed similar findings and supported the cooperative role of ING1 and p53 in esophageal cancer [114]. Co-introduction of ING1 and p53 induced more cell death than the single use of either gene transcript in esophageal carcinoma cells. Thus, a synergistic effect between p33ING1 and p53 in the induction of apoptosis has been suggested for two different human cancers, i.e. esophageal carcinoma and glioblastoma. Considering these two in vitro studies, combined gene therapy with one or more ING family members, with or without p53, has emerged as a promising alternative therapy for cases in which single use of p53 gene therapy fails. Another approach in gene therapy could be potentiation of the introduced gene. In fact, one study showed that one of the ING1 splicing isoforms, p47ING1a, was differentially upregulated in response to cisplatin in human glioblastoma cells (LN229) that express ING1 proteins and harbor mutated TP53, which might represent a response to protect DNA from this DNA-damaging agent. Thus, ING1 down-regulation may sensitize glioblastoma cells with deficient p53 to treatment with cisplatin. It was concluded that the p53-independent ING expression level might predict the relative sensitivity to treatment with cisplatin and HDAC inhibitors in glioblastoma. These studies suggest that molecular therapy with ING1 could be combined with chemotherapeutics in a subset of human cancers. Interestingly, some ING family members have additional tumor-suppressing functions, such as the anti-angiogenic activity of ING4. Thus, future studies using other members, alone or in combination, for gene therapy should provide more successful and promising results. In fact, it has been shown that ING4 gene therapy may be effective in human lung carcinoma as a novel anti-invasive and anti-metastatic agent [115]. Adenovirus-mediated ING4 expression suppressed tumor growth and cell invasiveness in A549 lung cancer cells, suggesting that ING4, as a potent tumor-suppressing agent, presents great therapeutic potential. Another interesting study showed that ING4 inhibited MMP-2 and MMP-9 expression in melanoma cells, which may contribute to the suppression of melanoma cell invasion [56]. That study demonstrated that overexpression of ING4 significantly decreased melanoma cell invasion by 43% and suppressed cell migration by 63%. Since degradation of the basement membrane and extracellular matrix (ECM) is the first step in the invasion and metastasis of malignant tumors, down-regulation of MMP-2 and MMP-9 expression with Ad-ING4 may be a potential means of suppressing the degradation of ECM and basement membrane components and thus metastasis. In this respect, the association of ING4 with the MMP pathway may open a new avenue and offer novel opportunities for the molecular therapy of cancer. In another recent study, Xie et al. [116] demonstrated that Ad-ING4-mediated transfection of PANC-1 human pancreatic carcinoma cells inhibited cell growth, altered the cell cycle with S-phase reduction and G2/M phase arrest, induced apoptosis, and downregulated interleukin (IL)-6 and IL-8 expression in the transfected tumor cells. In athymic mice bearing PANC-1 human pancreatic tumors, intratumoral injections of Ad-ING4 suppressed tumor growth, downregulated CD34 expression and reduced tumor microvessel formation. Therefore, this study provided a framework for the future clinical application of Ad-ING4 in gene therapy of human pancreatic and other carcinomas.
In conclusion, these reports suggest that the transfer or forced expression of ING4 into cancer cells by gene therapy also targets its related molecules, such as MMPs. Thus, combining ING4 gene therapy with chemicals that inhibit MMPs could be a promising treatment method for various cancer types. In this respect, the possible applications of each member of the ING family for gene therapy should be tested. A summary of ING gene alterations, their use as possible biomarkers, and the consequences of gene restoration in cancer is also provided.
Future aspects
Over a decade of research on the ING family genes has revealed that the ING genes are involved in various functions, from chromatin remodeling to cell cycle suppression and apoptosis. Moreover, the ING family genes cooperate with the major tumor suppressor p53 and form complexes with HAT and HDAC. Alterations of these genes occur commonly in many cancer types. Recent studies also suggest that allelic deletion or down-regulation of mRNA expression, as well as changes in the subcellular localization of the encoded proteins, is likely to be usable as a prognostic or predictive marker in human cancer. Today, many cancer types, including head and neck cancer, are treated on the basis of clinical staging and findings such as lymph node involvement and TNM stage. However, these clinical markers are often not sufficient to follow tumor behavior or the patient's response to therapy. Thus, new methods are needed to overcome these shortcomings, to obtain a better response to therapy, and to predict which therapeutic method is best for each patient. At this point, the involvement of ING genes in p53 tumor suppressor pathways and the crosstalk between the variants of a single ING gene need to be clarified before INGs can be used as diagnostic biomarkers. Progress in our knowledge of the functions of the ING family genes, as well as of their relationships with p53 and other, still unknown molecules, will elucidate their roles in the development of human cancers, which will result in their use in cancer diagnostics as well as therapy. Cancer today is still one of the most dangerous diseases for human life. So far, surgery and chemoradiotherapy have been the major therapies for cancer. A major difficulty in the treatment of cancer is the inefficiency of chemoradiotherapy, since each person responds differently to the therapy. So far, clinical staging or findings have been used to plan cancer treatment. However, this is not enough, since many patients are resistant to these therapies and there is currently no way to know in advance how effective these methods will be. Moreover, these treatment modalities are not specific and show high toxicities. Recent developments in the human genome and its technology provide novel methods for the prediction of therapy response and tumor behavior, as well as tumor-targeted specific therapeutic methods. Thus, although many molecular biomarkers for the prediction of tumor behavior are still being tested and gene therapy trials are ongoing, some of these methods are likely to be in routine clinical use within five years. For example, LOH at some TSG loci, or the expression profiles or mutation status of single or multiple genes, could direct therapy, and a success rate approaching 100% could be achieved for each patient, since treatment will be individualized based on the use of multiple molecular biomarkers. Some of these markers could be developed based on the
v3-fos-license
2024-05-19T15:26:48.548Z
2024-05-01T00:00:00.000
269866095
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://assets.cureus.com/uploads/case_report/pdf/253169/20240516-24590-ht2b4i.pdf", "pdf_hash": "2bc58d7b770db589ca1a52fde476e2cf2a86844c", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42173", "s2fieldsofstudy": [ "Medicine" ], "sha1": "815020241a865a3ada13d336b8e6bb0fcea012ef", "year": 2024 }
pes2o/s2orc
A Case of Systemic Chemotherapy With Paclitaxel/Cisplatin Followed by Wide Local Vulvectomy in Pelvic Lymph Nodes-Related Stage IVB Vulvar Cancer Multimodality treatments, including chemotherapy, radiation, and surgery, have been evaluated to reduce the extent of resection and morbidity in patients with advanced vulvar cancer. Here, we report the case of a 55-year-old woman diagnosed with advanced vulvar cancer with inguinal and pelvic lymph node metastasis. She exhibited cancerous labia, which were entirely covered with ulcerated and exophytic lesions of squamous cell carcinoma, and underwent systemic chemotherapy consisting of combined paclitaxel-cisplatin. After eight cycles of this regimen, the tumors had nearly regressed, and we performed a wide local vulvectomy with a plastic musculocutaneous flap. Pathological examination revealed no residual carcinoma in the excised labia, indicating that the chemotherapy elicited a pathological complete response. The paclitaxel-cisplatin regimen may provide sufficient efficacy for selected patients with stage IVB vulvar cancer. In addition, surgical strategies should be tailored to avoid complications associated with extensive surgery and more emphasis should be placed on the patient’s expected quality of life. Introduction Vulvar cancer is a rare disease that accounts for 4% of all gynecological malignancies and occurs in 0.85/100,000 women per year worldwide (approximately 44,000 total new cases were estimated in 2018) [1,2].The 1-year and 5-year survival rates of advanced-stage IVB vulvar cancer were estimated at 44.7% and 18.3%, respectively [3].Cancer remains a disease of older adults; however, recent studies indicate that the incidence is rising in younger women, most likely because of exposure to human papillomavirus (HPV), whereas older patients tend to be HPV-negative [2]. Treatment recommendations for vulvar cancer have been published in several guidelines, including the International Federation of Gynecology and Obstetrics (FIGO) staging system [3] and the Japan Society of Gynecologic Oncology [4].Treatment for vulvar cancer typically includes resection, radiation, chemotherapy, and palliative therapy, which varies by disease stage.However, considering the rarity of the disease, performing prospective and randomized clinical trials is challenging.Thus, obtaining high-quality data to determine the preferred chemotherapeutic drug regimen for patients with advanced-stage cancer is difficult.Moreover, current surgical strategies have evolved the paradigm to promote less extensive excision as opposed to radical vulvectomy [5]. Here, we described a successful case of stage IVB vulvar cancer with pelvic node metastasis treated with combined paclitaxel-cisplatin (TP) chemotherapy followed by wide local vulvectomy, which resulted in a favorable response. 
Case Presentation
A 55-year-old Japanese woman, gravida 5 para 3, presented with a vulvar mass. The patient also had type 2 diabetes mellitus with a hemoglobin A1c level of 10.3%, which required insulin therapy. In addition, she had been given medication for condyloma acuminatum at 40 years of age but had left the condition untreated. Physical examination initially revealed that her labia were covered entirely with ulcerated and exophytic lesions spreading close to the meatus urethra (Figure 1A). Bilateral inguinal lymph nodes were clinically palpable. A biopsy revealed invasive squamous cell carcinoma of the vulva with non-block-type p16 immunostaining (Figure 2A and Figure 2C). 18F-fluorodeoxyglucose positron emission tomography and computed tomography (FDG-PET/CT) imaging showed a high level of FDG accumulation (SUVmax ≤10.90) in the bilateral inguinal lymph nodes and a moderate level of FDG accumulation (SUVmax ≤7.17) in lymph nodes of the bilateral obturator and iliac regions (Figure 3A), which confirmed a diagnosis of International Federation of Gynecology and Obstetrics (FIGO) stage IVB vulvar cancer. No tumor involvement was observed beyond the pelvis, including the lung, liver, or elsewhere. Considering her satisfactory performance status (PS), informed consent for chemotherapy was provided by the patient, and systemic chemotherapy consisting of 175 mg/m2 of paclitaxel and 50 mg/m2 of cisplatin once every three weeks was initiated.
FIGURE 1: Macroscopic appearance of vulval lesions. 1A shows the initial finding of the vulval lesions, which regressed after six cycles of systemic TP chemotherapy, as shown in 1B. The excised specimen at the wide local vulvectomy and the wound appearance two months after surgery are presented in 1C and 1D, respectively.
The vulvar mass showed marked regression with progressive chemotherapy (Figure 1B). Biopsy of her labia after six cycles of chemotherapy revealed residual malignant cells on the right side but only vulvar intraepithelial neoplasia (VIN) 1 on the left side. FDG-PET/CT imaging revealed no FDG accumulation in the intrapelvic or inguinal lymph nodes (Figure 3B). The patient experienced grade 1-2 chemotherapy-induced peripheral neuropathy according to the Common Terminology Criteria for Adverse Events (CTCAE) version 5.0 and took pregabalin, but her PS remained 1. Consequently, we decided to perform palliative surgery for the remaining vulvar cancer. She withheld assent to lymphadenectomy for fear of developing lymphedema. After two additional cycles of TP chemotherapy, the patient underwent a wide local vulvectomy with a plastic musculocutaneous flap (Figure 1C). The surgical procedure was performed as follows: the tumors were removed 2 cm beyond the edge of the tumorous lesion, and the meatus urethra and anus were preserved. The gracilis muscle flap was used to reconstruct the left side of the vulva. Two weeks after surgery, the right side of the wound, which had not been reconstructed with a flap, opened; we therefore continued to irrigate the dehiscence. Two months after surgery, the wound was well adapted and the patient was able to urinate and defecate normally (Figure 1D).
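The TP regimen above is dosed per square meter of body surface area (BSA). As a purely illustrative arithmetic sketch, not a dosing recommendation, the absolute per-cycle doses would be derived roughly as below; the report does not state the patient's height, weight or BSA, so the values used here are hypothetical, and the Mosteller formula is only one of several BSA estimates in clinical use.

```python
# Illustrative arithmetic only: converting the per-square-meter doses quoted above into
# absolute per-cycle doses requires the patient's body surface area (BSA), which the
# report does not state. Height and weight below are hypothetical placeholder values.
from math import sqrt

def mosteller_bsa(height_cm: float, weight_kg: float) -> float:
    """Mosteller formula: BSA (m^2) = sqrt(height_cm * weight_kg / 3600)."""
    return sqrt(height_cm * weight_kg / 3600.0)

bsa = mosteller_bsa(155.0, 55.0)      # assumed values, not from the case report
paclitaxel_mg = 175.0 * bsa           # 175 mg/m2 per cycle
cisplatin_mg = 50.0 * bsa             # 50 mg/m2 per cycle
print(f"BSA ~ {bsa:.2f} m2 -> paclitaxel ~ {paclitaxel_mg:.0f} mg, cisplatin ~ {cisplatin_mg:.0f} mg per cycle")
```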
Examination of the resected labia showed VIN with non-block-type p16 immunostaining (Figure 2B and Figure 2D) and revealed no remaining viable malignant cells and adequate surgical margin.The patient underwent physical examination and laboratory testing once in three months.In addition, chest/abdomen/pelvis CT imaging and annual cervical/vaginal cytology tests were conducted once in six months to evaluate nodal status and recurrence, but no cancer relapse occurred one year after surgery without adjuvant therapy. Discussion We report a successful case of a patient with stage IVB vulvar cancer who underwent TP chemotherapy followed by a wide local vulvectomy, which resulted in a favorable response.Systemic chemotherapy is an effective therapeutic strategy for shrinking cancer and avoiding complications associated with aggressive surgery for stage IVB vulvar cancer with pelvic lymph node metastasis.Less extensive surgery for patients with advanced-stage disease can improve organ preservation and maintain their quality of life (QOL). Typically, the concurrent use of external beam radiotherapy with chemotherapy offers a chance of cure in women with advanced-stage or large-sized tumors [6]; however, it is often associated with high morbidity (e.g., fibrosis, stenosis, and vulvovaginal atrophy).Compared with concurrent chemoradiation, systemic chemotherapy offers the advantages of local morbidity reduction and treatment of occult or distant diseases.In previous studies by Wagenaar et al. [7] and Aragona et al. [8], systemic chemotherapy resulted in >50% response rates and increased surgical feasibility for initially inoperable patients.These studies used combined bleomycin, methotrexate, and lomustine or cisplatin-based chemotherapy; however, currently, the National Comprehensive Cancer Network recommends TP chemotherapy for advanced or metastatic vulvar cancer [9].Moreover, Raspagliesi et al. reported that six women with stage III/IV vulvar cancer who received TP chemotherapy experienced a 66% response rate [10].Our present case experienced a good clinical response after six cycles of TP chemotherapy and surgical resection.After a total of eight cycles of chemotherapy, a favorable response without severe chemotherapy-related toxicity was achieved.Recent studies have evaluated the addition of bevacizumab as an adjunctive chemotherapy.Furthermore, a phase 2 study of pembrolizumab, an anti-programmed cell death 1 monoclonal antibody, resulted in an objective response rate of 10.9% in patients with advanced vulvar cancer [11]. 
We believe that the final diagnosis is established by histological examination of lymph node specimens as well as of the primary tumor. When metastatic involvement of lymph nodes and/or advanced disease is suspected, whole-body CT with intravenous contrast or FDG-PET/CT should be performed to exclude pelvic lymph node involvement and the presence of other distant metastases. Several guidelines recommend that suspicious inguinal nodes be assessed by ultrasound-guided fine-needle aspiration or core needle biopsy if this would alter the primary treatment. In addition, equivocal distant metastases should be biopsied, whenever possible, to analyze the suspicious metastatic lesions. In the present case, we made a diagnosis of inguinal and pelvic lymph node metastasis on the basis of the physical examination of her inguinal mass and the findings of moderate- to high-level FDG accumulation, although we appreciated the significance of lymph node sampling for staging. Unfortunately, the attempt to identify the metastatic pelvic lymph nodes by CT-guided biopsy was abandoned because it would have been invasive and technically challenging owing to the localization of the lymph nodes. In the present case, the metastatic lymph nodes completely regressed following TP chemotherapy, whereas residual FDG accumulation was located exclusively within the labium. Therefore, we performed wide local excision for the remaining vulvar cancer. Because the scarring lesion of the labium was located laterally and widely on the left side, the wound was reconstructed with a left gracilis muscle flap by the plastic surgeons. The gracilis myocutaneous flap is useful in cases of largely denuded defects in the perineum resulting from radical vulvectomy or perineal surgery, in which primary closure may result in postoperative dehiscence of the wound incision [12]. Our patient decided not to undergo lymphadenectomy, being deeply concerned about the development of lymphedema. A Japanese nationwide survey found that inguinofemoral lymphadenectomy was performed in approximately 60% of initially operable patients with vulvar cancer [4]; however, it was found to be associated with serious adverse effects, so the risks and benefits of the intervention need to be fully explained to patients. Because chemotherapy can effectively shrink the tumor and secure a sufficient margin to remove the tumor away from the urethra and anus, tailored surgical strategies for the preservation of urination and defecation can be adopted. For patients with early vulvar cancer, individualized surgical treatments have been reported to be safe and effective [13]; however, less extensive surgery after systemic chemotherapy for locally advanced vulvar cancer may give insufficient results in terms of long-term prognosis. Therefore, long-term surveillance is needed. The issue that needs to be discussed is whether radiation should have been offered in this case or not. Radiation to the pelvis is commonly administered with primary or adjuvant intent for many gynecological cancers. For patients with stage IVB vulvar cancer presenting in unfavorable condition, palliative radiotherapy would be considered an option for primary treatment; however, we suggested systemic chemotherapy for the initial treatment, considering her satisfactory PS. In addition, pathological examination of the wide local vulvectomy specimen revealed no malignant cells remaining in the excised tissue, and therefore we conducted surveillance without performing adjuvant radiotherapy.
Conclusions
Our present case was diagnosed with advanced vulvar cancer with inguinal and pelvic lymph node metastasis. The patient underwent TP chemotherapy followed by surgery, which resulted in a favorable response. Systemic chemotherapy consisting of a TP regimen for locally advanced vulvar cancer is beneficial for tumor shrinkage for selected patients with stage IVB vulvar cancer. Physicians should individualize surgical strategies for improving organ preservation and reducing complications associated with extensive surgery and emphasize the patient's QOL.
FIGURE 2: Histopathology findings of vulval lesions. Histopathology findings in H&E and p16 immunostaining sections. 2A shows invasive SCC of the initial cancerous lesion of the labia, and 2B shows the excised specimen, which is composed only of VIN3 after a total of eight cycles of systemic TP chemotherapy. As shown in 2C and 2D, both the invasive SCC and VIN specimens show non-block-type p16 immunostaining, indicating HPV-independent lesions. SCC: squamous cell carcinoma; VIN: vulvar intraepithelial neoplasia
FIGURE 3: FDG-PET/CT images at initial diagnosis and after systemic TP chemotherapy. FDG-PET/CT images at initial diagnosis are shown in 3A and after systemic TP chemotherapy in 3B. The yellow-colored arrowheads indicate metastatic pelvic lymph nodes with moderate accumulation of FDG, and these abnormal accumulations disappeared after chemotherapy.
v3-fos-license
2018-12-11T19:49:20.660Z
2012-11-20T00:00:00.000
55578766
{ "extfieldsofstudy": [ "Biology" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "http://academicjournals.org/journal/AJB/article-full-text-pdf/24FE56A30852", "pdf_hash": "e9e8bdedd6dd6f29acfb7529f928925310064f89", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42174", "s2fieldsofstudy": [ "Agricultural And Food Sciences", "Environmental Science" ], "sha1": "95e96bc1cc4d03b72ac332da51aecf963772aa83", "year": 2012 }
pes2o/s2orc
Effect of eucalyptus (Eucalyptus camaldulensis) and maize (Zea mays) litter on growth, development, mycorrhizal colonization and root nodulation of Arachis hypogaea
Farmers cultivate groundnut in association with eucalyptus plantations to increase their incomes. However, eucalyptus plantations produce large amounts of litter, whose impact on groundnut has not yet been clearly elucidated. To investigate the effect of litter accumulation on the growth and development of groundnut and on its root infection by arbuscular mycorrhizal fungi (AMF) and rhizobia, a greenhouse experiment was performed. The effect of eucalyptus litter was compared to that of maize litter at three amendment levels (0, 1 and 5%). Chemical analysis showed that eucalyptus litter differed from maize litter essentially in its higher polyphenol content and lower pH. At the high amendment level (5%), root nodulation and mycorrhizal colonization were significantly reduced with eucalyptus litter, whereas no significant differences were observed with maize litter. In addition, groundnut growth, number of flowers per plant, pod yield and leaf mineral contents (N and C) were significantly lower for plants grown in soil highly amended with eucalyptus litter. These plants showed a deficiency of chlorophyll in the leaves and were less vigorous compared to the treatments without amendment and those amended at the 1% level. For all parameters measured, plants grown in lowly amended soil (1%) and plants grown in the control treatment did not differ significantly. However, the drought of the 1970s considerably decreased farmers' incomes. In order to offset this loss of income, farmers chose eucalyptus as an alternative crop on their arable land. Indeed, eucalyptus wood is an important source of building material (Eldridge et al., 1993), and the sale of eucalyptus poles and products has the potential to raise farm incomes, reducing poverty (Anonymous, 2010). Therefore, groundnut, which is historically the dominant crop, is now found intercropped within Eucalyptus camaldulensis plantations. This agroforestry system, although highly recommended for indigenous species, may have unforeseen consequences. Eucalyptus, a non-edible industrial crop, occupies agricultural land intended for food crop cultivation and may negatively affect native plant species (including crops). Also, eucalyptus can compete with underlying crops for light, water and soil nutrients (Onyewotu et al., 1994; Pérez Bidegain et al., 2001) or affect them by changing the soil pH (Kubmarawa et al., 2008; Mubarak et al., 2011). It is well established that legumes such as groundnut depend strongly on symbioses for growth and production (Lindemann and Glover, 2003; Piotrowski et al., 2008; Vieira et al., 2010). Arbuscular mycorrhizal fungi improve plant mineral nutrition, especially of phosphorus and nitrogen (Smith and Read, 2008; Javaid, 2009), while rhizobia improve plant nitrogen nutrition (van der Heijden et al., 2006). A tree-crop association is beneficial and sustainable if the positive effects on tree productivity and sustainability are greater than the adverse effects (reduction of cultivated area, shade, competition, allelopathy). So far, little is known about the effects of eucalyptus on groundnut (growth and yield) and on its root symbiotic partners (AMF and rhizobia), particularly in Senegal. To address this knowledge gap, we studied the impact of two levels of eucalyptus litter, in comparison to maize litter, on groundnut development and production in a greenhouse experiment.
This experiment will contribute to an understanding of the effect of litter accumulation on groundnut production under greenhouse conditions. In addition, the findings from this study could be useful for predicting the potential hazards of the eucalyptus-groundnut association.
Chemical analysis of litters
Eucalyptus litter was collected under the shade of old eucalyptus trees (12 years old) and was composed of dead leaves, bark, fruits, twigs and seeds, while maize litter consisted of crop residues (leaves and stems). Before chemical analysis, the litters were ground and sieved (2 mm). Total C, total N and total P in the litters were measured in the LAMA laboratory (certified ISO 9001, version 2008; Institut de Recherche pour le Développement, Dakar, Senegal; www.lama.ird.sn). Phenolic compounds were extracted by soaking 1 g of powder in 100 ml of 80% (v/v) acetone. The extraction was performed under ultrasound for 30 min at 4°C to prevent the action of polyphenol oxidase, which can degrade phenolic compounds. The phenolic content was determined following the Folin-Ciocalteu method, using gallic acid as the standard (Singleton and Rossi, 1965). The absorbance was read with an ultraviolet (UV)-visible spectrophotometer (Ultrospec 3000, Pharmacia Biotech, France) at λ = 760 nm. Results were expressed as mg/g gallic acid equivalent.
Greenhouse experiment
The litter effects on groundnut were determined in a pot experiment using an unsterilized soil collected from Sangalkam, Senegal (14°46'52''N, 17°13'40''W). The soil physicochemical characteristics were as follows: pH (H2O), 7.02; clay, 8.7%; silt, 5.80%; sand, 88.80%; carbon, 0.30%; total nitrogen, 0.02%; soluble phosphorus, 2.1 mg kg-1; and total phosphorus, 41.4 mg kg-1. Each litter was separately mixed with the soil at two doses: 5% (w of litter/w of soil) and 1% (w/w). For each soil-litter mixture, 500 g was placed into a 500 ml polyethene pot. Five treatments were obtained: two with eucalyptus litter (EH, 5%; EL, 1%), two with maize litter (MH, 5%; ML, 1%) and one without litter amendment (T). ML and EL were considered low-amendment treatments, while EH and MH were considered high-amendment treatments. Six replicates were prepared for each treatment and arranged in a randomized complete block design. Two groundnut seeds (variety 55-437, an early 90-day variety) were sown in each pot, and one week later thinning was performed to maintain one plant per pot. Plants were grown under natural light (daylight approximately 12 h, mean daytime temperature 35°C) and watered daily with tap water for three months.
Flowering survey
The onset of flowering and the number of new flowers were documented every two days for 18 days, the duration of the flowering process (Catan and Fleury, 1998).
Leaves chlorophyll content
Forty-five days after sowing, the same weight of leaves (2 g) was harvested from the plants of each treatment and the leaf chlorophyll content was assessed using the method described by Arnon et al. (1949). A weight of 100 mg of ground fresh leaves was suspended in an 80% acetone buffer (80 ml of acetone made up to 100 ml with 2.5 mM sodium phosphate buffer (pH 7.8)) and the mixture was incubated at 4°C overnight in the dark. The supernatant was withdrawn after centrifugation (10,000 g; rotor Nr 12154, Sigma 3K15, USA) and the absorbance of the aqueous extract was recorded at 662 nm with a spectrophotometer (Ultrospec 3000, Pharmacia Biotech, France).
Total chlorophyll content was determined by the formula Chl = A662 × 27.8 (mg chlorophyll per liter per g of fresh material), where A662 is the absorbance at 662 nm (Arnon et al., 1949).
Mycorrhizal infection and nodulation
At the end of the experiment (after three months), the plants were harvested and AM colonization was assessed according to the method of Phillips and Hayman (1970). Fine roots were collected from the plants, washed with tap water, cleared in 10% KOH and stained with 0.05% trypan blue. They were subsequently placed on slides for microscopic observation at 250× magnification (Brundett et al., 1996). Five slides, each with ten randomly selected stained roots (approximately 1.5 cm long root pieces), were prepared for each treatment. Mycorrhizal colonization (intensity of root colonization) and the number and dry weight (60°C, one week) of nodules were determined.
Shoots, roots dry matter and nutrient content in shoots
Shoots and roots of each plant were separated, dried (60°C, one week) and weighed. After drying, the shoots were ground and 1 g of powder from each plant was ashed (500°C), digested in 2 ml of 6 N HCl and 10 ml of HNO3, and then analyzed by colorimetry for P (John, 1970). Total nitrogen and carbon were measured by dry combustion with a CHN analyzer (LECO Corporation, St. Joseph, MI, USA).
Statistical analysis
All measured variables were subjected to a one-way analysis of variance (ANOVA) to assess differences between the treatments. Principal component analysis (PCA) was used to highlight the relationships between treatments and variables. Statistical analyses were performed using Xlstat 2010 software for the ANOVA and R software (version R-2.13.0) for the PCA.
Discrimination of treatments based on plant growth parameters (principal component analysis)
The horizontal axis of the PCA (PCA 1) was strongly correlated with shoot and root dry matter, groundnut pod yield, total C, total N and chlorophyll content, while the vertical axis (PCA 2) was correlated with shoot phosphorus content (P). This PCA, explaining 83.74% of the variability on the first two factors, discriminated the treatments according to the type of litter (maize versus eucalyptus) and the quantity added to the soil (high versus low) (Figure 1). For instance, treatment EH was relatively far from treatment MH on PCA 1 (which explained 57.46% of the variability), whereas treatment MH was relatively far from treatment ML on PCA 2. Indeed, the plants raised on the soil highly amended with eucalyptus litter (EH) had the highest levels of total phosphorus in the shoots, the lowest growth and the lowest pod yields (Table 1). In contrast, the plants raised on treatments MH, ML and T had the highest C and N contents in the shoots. The plants from treatment EL displayed an intermediate position between EH and the other treatments. The control, ML and MH treatments presented higher pod yield and higher nodule, root and shoot dry matter. Also, more root colonization was recorded for the control, ML and MH treatments than for the treatments amended with eucalyptus litter (EL and EH) (Figure 1).
Effect of litters on growth, leaf chlorophyll and shoot mineral content of Arachis hypogaea
The plants grown in treatments T, MH and ML flowered earlier (35, 35 and 37 days after sowing, respectively), followed by the plants grown in treatment EL (41 days after sowing) and the plants grown in treatment EH (49 days after sowing).
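As a minimal worked illustration of the calculation and analysis steps described in the methods above (Arnon's conversion of A662 readings to total chlorophyll, followed by a one-way ANOVA across the five treatments), the sketch below uses Python with scipy; the study itself used Xlstat and R, and every numeric value shown here is hypothetical.

```python
# Minimal sketch of the calculation and analysis steps described in the methods above:
# Arnon's conversion of A662 readings to total chlorophyll, then a one-way ANOVA across
# the five treatments. The study used Xlstat and R; scipy is used here only for
# illustration, and every numeric value below is hypothetical.
from scipy import stats

def total_chlorophyll(a662: float) -> float:
    """Chl (mg per L per g fresh material) = A662 * 27.8 (Arnon, 1949)."""
    return a662 * 27.8

print(total_chlorophyll(0.42))  # e.g. A662 = 0.42 -> ~11.7

# Hypothetical chlorophyll values, six replicates per treatment
treatments = {
    "T":  [11.5, 12.0, 11.8, 12.3, 11.9, 12.1],
    "ML": [11.7, 11.9, 12.2, 11.6, 12.0, 11.8],
    "MH": [11.4, 11.8, 12.1, 11.9, 11.7, 12.0],
    "EL": [10.9, 11.2, 11.5, 11.0, 11.3, 11.1],
    "EH": [6.2, 6.8, 7.1, 6.5, 6.9, 6.4],
}
f_stat, p_value = stats.f_oneway(*treatments.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")
```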
Compared with these earlier-flowering treatments, the plants grown in treatment EL displayed a six-day delay in flowering and those grown in treatment EH a 14-day delay. Moreover, the plants grown in treatment T produced more flowers than the other plants during the measurement period (Figure 2). In contrast, the plants of treatment EH had yellow leaves and less vegetative development than the other treatments (data not shown) for the duration of the experiment. This is supported by the significantly lower chlorophyll content of the EH-treated plants compared to the others (Table 2). Chlorophyll content in leaves was positively and very significantly correlated with pod yield, shoot dry weight and root dry weight (0.68***, 0.57** and 0.47**, respectively) (Table 3), suggesting that yield and good plant development could be related to leaf chlorophyll content. N and C contents in Arachis hypogaea shoots were significantly lower in plants grown at the 5% amendment level of eucalyptus litter than in the other treatments (Table 2). No significant difference was found between the control and the soils lowly amended with the different residues. However, it is noteworthy that the highest shoot P contents were found in the 5% treatments (EH and MH, as already shown in the PCA). Significant positive correlations were recorded between leaf N and C contents and chlorophyll content (0.90*** and 0.55**). These elements were also significantly correlated with AM colonization (0.65*** and 0.55***, respectively) and nodule dry weight (0.75** and 0.86***) (Table 3), suggesting an improvement of mineral nutrition by these symbioses.
Mycorrhizal infection and nodulation
Mycorrhizal infection was greatly reduced in the EH treatment. Nodulation (number and dry weight of nodules) was also delayed in this treatment (Table 1). However, mycorrhization and nodulation were not significantly different among the other treatments (EL, ML and MH). Mycorrhizal intensity was positively and significantly correlated with pod yield, shoot dry weight and root dry weight (0.73***, 0.60*** and 0.53**, respectively). In addition, nodule weight was positively correlated with pod yield, shoot dry weight and root dry weight (0.37*, 0.85*** and 0.38*, respectively) (Table 3), suggesting that mycorrhizal infection and nodulation were inhibited by the high level of eucalyptus litter.
Chemical characteristics of the litter materials
Carbon and total polyphenol contents were higher in eucalyptus litter than in maize litter (Table 4). In addition, the pH was lower for eucalyptus litter. However, there were no significant differences in N and P contents between these two litters (Table 1).
Table 3. EH and EL are, respectively, soil highly (5%) and lowly (1%) amended with eucalyptus litter; MH and ML are soil highly (5%) and lowly (1%) amended with maize litter; T, control without amendment. Data in the same line followed by the same letter are not significantly different according to the one-way analysis of variance (p < 0.05).
The litter C/N ratios were almost identical. The amount of phenols added in each treatment was determined (data not shown) and the correlations between the amounts of phenols in the soil and the growth variables were established (Table 5).
Similarly, the correlations between these variables and the soil pH after amendment (data not shown) were determined (Table 5). Polyphenols were negatively linked to the growth variables (except for P), while most of these variables were positively correlated with pH (Table 5). Soil pH remained acidic in the treatments amended with eucalyptus litter, while the maize-amended treatments and the unamended control were neutral (Table 6). The differences in polyphenols and pH will therefore guide the discussion of the results.
DISCUSSION
An amendment is an addition of organic matter to the soil. The chemical quality of this organic matter strongly influences plant growth. Our experiment shows that, at the high dose, eucalyptus litter reduced flower production (and delayed flowering), growth and production, probably through its acidity and its very high level of phenolic compounds (as evidenced by the negative correlations between the growth variables and these two parameters). Soil acidity decreases root growth and the volume of soil explored by the roots, and therefore decreases the mineral absorption available for plant growth. Many authors (Koyama et al., 2001; Hopkinsa et al., 2004; Pavlovkin et al., 2009) have shown that plants exposed to low-pH stress are normally subjected to metal toxicity and hence to a decrease in root growth and total biomass. Acidity also significantly reduces mineral absorption by plants (James and Nelson, 1981). The low total C and N contents in the shoots of the EH-treated plants seem to be in agreement with this hypothesis. The lowest chlorophyll content in leaves and the flowering retardation of the plants in treatment EH compared with the other treatments (ML, MH, EL and T) are probably a consequence of this nutritional imbalance.
Many publications have already shown that mycorrhizal symbiosis increases nitrogen uptake from the soil (Barea et al., 1991), plant fitness and nutrient turnover (Jeffries et al., 2003). However, at high eucalyptus litter amendment (5%), our results showed that mycorrhizal formation and mineral leaves contents were strongly reduced (low AM colonization, low N and C contents, and cancel nodulation). According to Lehto (1994), soil pH affects the ability of roots to grow or ability of mycorrhizal fungi to colonize roots and to take up nutrients. Also in our study, the high polyphenol content in eucalyptus organic matter could accentuate the negative effect by reducing germination, hyphal extension (Cantor et al., 2011) or killing symbiotic, partner of A. hypogaea, or preventing the mechanism to recognition in symbiotic partners. Callaway et al. (2008) had already suggested that potential allelopathic effects of exotic species (Alliaria petiolata) might be due to direct inhibition of plant seedlings and fungus before the formation of symbiosis. It has already been well established that some alien species negatively impact (inhibit, delete, reduce abun-dance and performance of N-fixing microbes) on nodulation through their organic residues or aqueous root extract (Faye et al., 2009;Sanon et al., 2009). Poore and Fries, 1985 found that germination and growth of associated species was inhibited by extracts of eucalyptus leaves. Several physiological reasons have been attributed to this phenomenon including: inhibition of infection of leguminous roots by nodule bacteria leading to decreasing nodule formation; inhibition of nitrogenase enzyme activity in the nodule due to modification of the nitrogenase iron protein and decrease in the supply of photosynthates to the rhizobium due to the poor supply of major nutrients, such as P (Bolan et al., 2003). Our results on groundnuts are not necessarily generalizable to all plants. In fact some plants not symbiotically dependent or having co-evolved with eucalyptus may not be affected by eucalyptus litter (Alliaume et al., 2010). In addition, some authors (Sarkar et al., 2010) have already observed positive impact of eucalyptus amendment on the growth of red amaranth. Phosphorus, the major nutrient most needed by groundnuts has a vital role in energy storage, root development and early maturity of crops. The highest phosphorus concentration observed in plants amended with 5% of eucalyptus organic matter seems to be linked to the lack of groundnuts pods. This allows us to conclude that, in other treatments, great part of phosphorus is used for groundnuts pods production. This result is in accordance with previous studies which showed that, in Senegal 65% of the phosphorus taken up by the crops is stored in pods and hence removed from the field at harvest (Schilling et al., 1996). The young eucalyptus is associated with AMF while adult plants are associated with ectomycorrhizal fungi (ECMF) (Malajczuk et al., 1982). Our study allows us to formulate a hypothesis on the mycorrhizal successional allegedly linked to the accumulation of organic matter, specifically poplyphenols associated with the organic matter. In fact, it was already demonstrated some ECMF could detoxify phenolics while AMF cannot and were inhibited with increasing soil polyphenols concentration. Therefore phenols increasing reduce AM roots colonization in aid of ECM colonization (Piotrowski et al., 2008). 
Conclusion
Maize and eucalyptus litters are differentiated mainly by their pH and polyphenol content. This work shows that a high dose of eucalyptus litter has depressive effects on the growth and yield of groundnut. Mycorrhizal colonization and root nodule formation were also strongly reduced for plants grown in soil highly amended with eucalyptus litter. This work supports the idea that planting A. hypogaea in association with eucalyptus plantations may work against production in the long term, owing to the accumulation of eucalyptus residues. In future studies, it will be necessary to clarify whether the adverse effect is due to the action of pH, of the polyphenols, or of the synergy of both. At either concentration, maize litter showed no depressive effect on groundnut growth and production.
v3-fos-license
2017-08-27T17:02:41.001Z
2016-09-08T00:00:00.000
1092045
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://ijponline.biomedcentral.com/track/pdf/10.1186/s13052-016-0292-1", "pdf_hash": "83bc22238f26f6d2a75a94cd5260b22c3c5d5383", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42175", "s2fieldsofstudy": [ "Medicine" ], "sha1": "83bc22238f26f6d2a75a94cd5260b22c3c5d5383", "year": 2016 }
pes2o/s2orc
Kawasaki disease associated with Mycoplasma pneumoniae Background Kawasaki disease (KD) is an illness of unknown etiology that mostly occurs in children under 5 years of age and is the leading cause of acquired heart disease worldwide. Mycoplasma pneumoniae (MP) is one of the likely causative agents of KD. However, the etiologic effect of MP in KD has not been fully recognized. Methods We prospectively analyzed the clinical records of 450 patients with KD hospitalized in the Children's Hospital of Soochow University from 2012 to 2014. Using medical records, we retrospectively identified patients with lower respiratory tract infection (non-KD group). Results Of the 450 KD patients, 62 (13.8 %) were MP positive. The median age of the MP+KD+ group was significantly higher than that of the MP-KD+ group (25 vs 14.5 months, P < 0.01). The MP+KD+ group had higher levels of ESR, N% and CRP than the MP-KD+ group. Respiratory disorders were more frequent in the MP+KD+ group than in the MP-KD+ group (P < 0.05). No statistically significant difference in non-responders or coronary artery lesions was found between the groups. Conclusions MP infections are found in an important proportion of KD patients (13.8 % in our series). MP infection tended to occur in older children and was associated with a higher rate of respiratory tract involvement in patients with KD. No statistically significant difference in non-responders or coronary artery lesions was found between the MP+ and MP- KD patients. On admission, after obtaining informed consent from the parents, clinical-epidemiological information was collected by pediatricians. Using medical records, we retrospectively identified patients with lower respiratory tract infection (LRTI) (non-KD group) from the pulmonology ward. All experiments were performed following the relevant guidelines and regulations of Soochow University. The study was approved by the Medical Ethics Committee of Soochow University. Data collection Data such as age, sex, season of onset, fever duration before diagnosis, length of hospital stay, KD-related clinical manifestations and other systemic involvements, white blood cell (WBC) count, platelet (PLT) count, neutrophil proportion (N%), C-reactive protein (CRP), erythrocyte sedimentation rate (ESR), MP antibody, MP-DNA by PCR and echocardiographic findings within 1 month of onset were collected and further analyzed. WBC, PLT, N%, CRP and ESR were tested within 24 h after admission. MP-DNA detection and evaluation Nasopharyngeal secretions were collected from each study participant within 24 h after admission by a lab technician, as previously described [9]. Briefly, an aseptic plastic sputum catheter was inserted into the nostril to a depth of about 7-8 cm until reaching the pharynx. Approximately 2 ml of nasopharyngeal secretions was collected by applying negative pressure. The sample was mixed with 4-8 ml PBS and centrifuged for 10 min at 300-500 rpm. The supernatant was discarded and the pellet was mixed with 4-8 ml PBS and centrifuged for an additional 10 min. The pellet was stored at -80 °C until testing began. DNA lysate (Shanghai Shenyou Biotechnology Company, Shanghai, China) was added to the sputum pellet following washing with PBS. The sample was heated at 95 °C for 10 min and centrifuged for 5 min at 12,000 rpm, and the supernatant was then collected. After extracting the DNA from the sputum specimen, MP DNA was detected by fluorescent real-time PCR (BIO-RAD iCycler, USA).
The cycling temperature settings were 93 °C, 2 min; then 93 °C, 45 s and 55 °C, 60 s for 10 cycles; then 93 °C, 30 s and 55 °C, 45 s for 30 cycles. The fluorescence collection point was set at the 55 °C, 45 s step. The Ct value was used to quantify the fluorescence quantitative PCR results. The following primers were used: The probe binding sequence was located between the upstream and downstream primers. The fluorescent reporter dye at the 5′ end of the probe was 6-carboxyfluorescein, and the quencher at the 3′ end of the probe was 6-carboxytetramethylrhodamine. The primers and probe were purchased from Guangzhou Daan Gene Ltd. (Guangzhou, China). An MP-negative sample was defined as having an amplification curve that was not S-shaped or a Ct value ≥30; both results indicated that the MP DNA content was below the detection limit. A positive MP sample was defined as having an S-shaped amplification curve and a Ct value <30. Serological analysis for MP Paired serum samples were collected on admission and at least 1 week later. IgM antibodies were measured using the Serion ELISA Mycoplasma pneumoniae IgM kit (Institut Virion/Serion GmbH, Würzburg, Germany) with the test cut-off set at 0.5 × the mean optical density value of the kit control serum, as indicated in the insert. The assay was considered positive if IgM ≥1.1 U/ml according to the manufacturer's instructions. Diagnosis of MP infection Diagnosis of MP infection was based on serology and PCR findings. Both the presence of IgM antibodies and positive PCR results were used as sufficient criteria for current MP infection. Definition of KD KD was defined by the presence of ≥5 days of fever and ≥4 of the 5 principal clinical features for KD according to the American Heart Association [1]. These clinical features included (1) bilateral non-exudative conjunctival injection; (2) oral mucosal changes, such as erythema of the lips or strawberry tongue; (3) changes in the extremities, such as edema, erythema and desquamation; (4) polymorphous rash; and (5) cervical lymphadenopathy of ≥1.5 cm. Patients with only two or three principal clinical features of KD, in addition to fever, were considered to have incomplete KD when CAL was confirmed. CAL (on echocardiography) was defined as an internal lumen diameter ≥3 mm in children <5 years of age or ≥4 mm in children >5 years of age. Coronary artery aneurysm (CAA) was defined as a segmental internal diameter of any segment ≥1.5 times that of an adjacent segment. Giant coronary aneurysm referred to a segmental internal diameter ≥8 mm [10]. Non-responders were defined as patients with persistent or recrudescent fever ≥36 h after the initial IVIG infusion. Statistical analysis We used n (%) for categorical variables and median (quartiles) for continuous variables with a non-normal distribution, or mean and standard deviation (SD) for those with a normal distribution. We assessed differences in categorical variables with the χ2 test. We calculated 95 % confidence intervals (95 % CI) for differences in medians with an exact test. Logistic regression analysis was performed to identify clinical characteristics and laboratory parameters associated with MP infection. SPSS (version 22.0) software was used for all statistical analyses. Clinical features and laboratory parameters of the patient groups A total of 483 patients were diagnosed with KD. Thirty-three patients were excluded because of incomplete data. The remaining 450 patients were enrolled in this study. The age of onset ranged from 2 to 129 months, with a median of 17 months.
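To make the case definitions above concrete, the sketch below encodes them as simple predicate functions. This is purely illustrative and not code from the paper; the function names are ours, and the reading that a current MP infection requires both a positive IgM and a positive PCR result is our interpretation of the Methods.

```python
# Illustrative encoding of the study definitions (not from the paper).

def mp_pcr_positive(ct_value: float, s_shaped_curve: bool) -> bool:
    """PCR positive: S-shaped amplification curve and Ct < 30."""
    return s_shaped_curve and ct_value < 30

def mp_igm_positive(igm_u_per_ml: float) -> bool:
    """Serology positive: IgM >= 1.1 U/ml, per the kit instructions."""
    return igm_u_per_ml >= 1.1

def current_mp_infection(ct_value: float, s_shaped_curve: bool, igm_u_per_ml: float) -> bool:
    # Assumption: both serology and PCR must be positive.
    return mp_pcr_positive(ct_value, s_shaped_curve) and mp_igm_positive(igm_u_per_ml)

def coronary_artery_lesion(lumen_diameter_mm: float, age_years: float) -> bool:
    """CAL on echocardiography: >= 3 mm under 5 years of age, >= 4 mm above."""
    return lumen_diameter_mm >= (3.0 if age_years < 5 else 4.0)

def kd_category(fever_days: int, n_principal_features: int, cal_confirmed: bool) -> str:
    """Complete KD: >= 5 days of fever and >= 4 of 5 principal features.
    Incomplete KD: fever plus 2-3 features when CAL is confirmed
    (applying the 5-day fever threshold here is our assumption)."""
    if fever_days >= 5 and n_principal_features >= 4:
        return "complete KD"
    if fever_days >= 5 and 2 <= n_principal_features <= 3 and cal_confirmed:
        return "incomplete KD"
    return "criteria not met"

print(kd_category(fever_days=6, n_principal_features=4, cal_confirmed=False))  # complete KD
```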
The male to female ratio was 1.86:1. The KD diagnostic criteria included fever in 100 % of the patients, rash in 76.2 %, conjunctival injection in 84.2 %, changes in the extremities in 77.6 %, cervical lymphadenopathy in 61.1 % and mucosal changes in 88 %. Of the 450 KD patients, 62 (13.8 %) were MP positive. The MP+KD+ group consisted of 62 cases, while the MP-KD+ group consisted of 388 cases. The clinical features and laboratory parameters of the two groups are shown in Table 1. The median age of the MP+KD+ group was significantly higher than that of the MP-KD+ group (25 vs 14.5 months, P < 0.01). The MP+KD+ group had higher levels of ESR, N% and CRP than the MP-KD+ group, with significant differences after logistic regression analysis. A total of 6354 patients were included in the non-KD group, of whom 1302 (20.5 %) were MP positive. The median age of these 1302 children was 35 months, and the male to female ratio was 1.46:1. The median age of patients with MP infection in the KD group was significantly younger than that in the non-KD group (25 vs 35 months, P < 0.001). The clinical features and laboratory parameters of the two groups are shown in Table 2. Epidemiology of MP infection in KD and non-KD patients The MP infection rate in KD patients increased with age, with a statistically significant age distribution (P < 0.001). Fewer children younger than 1 year old had MP infection (1.3 %) than those older than 5 years (28 %) (P < 0.001) (Fig. 1a), indicating that older children with KD were more prone to MP infection. The age distribution of MP infection in the KD group was similar to that in the non-KD group (Fig. 1b). The seasons in the Suzhou area of China were defined as spring (March-May), summer (June-August), autumn (September-November), and winter (December-February). During the 3-year period, seasonal variation of MP infection was observed in the non-KD group: MP was detected throughout the year, with an epidemic peak observed each year in summer. In the KD group, the highest proportion of KD onset occurred during March through July (53 %), with a peak in May. However, no seasonal variation of MP infection was observed in KD patients (Fig. 2). Response to treatment in KD patients A total dose of 2 g/kg of intravenous immunoglobulin (IVIG) was administered in 432 patients, mainly between the 5th and 9th day after disease onset (98.5 %). Among the patients who did not receive IVIG (because of an initially missed diagnosis or a low IgA level), 7 belonged to the MP+KD+ group and 11 belonged to the MP-KD+ group. In addition, high-dose aspirin (30-50 mg/kg) was used during the febrile period, and thereafter a low dose of 3-5 mg/kg was administered until the patient had been afebrile for 3 days. Non-responders were administered a second dose of 2 g/kg of IVIG. Intravenous pulse methylprednisolone Length of hospitalization and coronary artery lesion in KD patients We compared the length of hospitalization of the MP+ and MP- KD patients. Overall, 92.1 % of the patients were discharged within 15 days of hospitalization. No significant difference was found in the length of hospitalization between the MP+ and MP- KD patients. In addition, no significant difference was found in the length of hospitalization between patients treated with and without azithromycin (10 [9, 12] vs 10 [8, 12] days, respectively, P > 0.05). CAL was found in 108 (24 %) of the patients, and most of these abnormalities were dilation only (88.0 %). No difference in CAL was found between the two groups. These findings are shown in Table 3.
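As a quick, hedged cross-check of the positivity rates reported above (62/450 in the KD group versus 1302/6354 in the non-KD group), the snippet below recomputes the percentages and a chi-square comparison with SciPy; the paper itself used SPSS, so this is a stand-in rather than the authors' analysis.

```python
# Recomputing the reported MP positivity rates and a chi-square comparison
# between the KD and non-KD groups, using only counts quoted in the text.
from scipy.stats import chi2_contingency

kd_pos, kd_total = 62, 450            # KD group: MP positive / total
nonkd_pos, nonkd_total = 1302, 6354   # non-KD LRTI group: MP positive / total

table = [
    [kd_pos, kd_total - kd_pos],
    [nonkd_pos, nonkd_total - nonkd_pos],
]
chi2, p, dof, _ = chi2_contingency(table)

print(f"MP+ rate, KD group:     {kd_pos / kd_total:.1%}")        # ~13.8%
print(f"MP+ rate, non-KD group: {nonkd_pos / nonkd_total:.1%}")  # ~20.5%
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3g}")
```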
Other systemic involvement in KD patients Many patients with KD presented with other systemic involvements besides CAL. These results are shown in Table 4. Respiratory and gastrointestinal disorders were the most commonly seen in both groups (48.21 % and 37.97 %, respectively). Respiratory disorders mainly included rhinorrhea, sore throat, cough, sputum and wheeze. Gastrointestinal disorders mainly included anorexia, vomiting, diarrhea and abdominal pain. Urinary system involvement included urethritis and meatitis. Respiratory disorders were more frequent in the MP+KD+ group than in the MP-KD+ group (P < 0.05). On the other hand, there was no difference in the rates of gastrointestinal tract and urinary system involvement, aseptic meningitis or hepatic dysfunction. Other complications, such as parotitis, arthritis, myocarditis and arrhythmia, were seen in a few cases. Discussion The etiology of KD remains unclear despite numerous studies. Many case reports have found that infectious agents might play an important role [8, 11-15]. The mechanism by which infectious agents are associated with KD is not fully understood. Abe et al. [16] suggested that superantigen-mediated T-cell activation might be important in the pathophysiology of KD. Another species of the Mycoplasma genus, Mycoplasma arthritidis, has been shown to produce a superantigen, suggesting the possibility that other Mycoplasma species, like MP, may do likewise [17]. MP is one of the most frequently reported organisms linked to KD [7,8], but the clinical characteristics of MP-infected KD have not been thoroughly examined. As far as we know, this is by far the largest study to determine the role of MP infection in KD. In our study, MP+KD+ patients were much older than MP-KD+ patients, which is in accordance with Lee MN et al. [8]. In their research, they found that the MP group was significantly older than the non-MP group among KD patients (5.5 ± 3.5 vs 2.8 ± 2.2 years). In our study, we also compared the median age of patients with MP infection in the KD and non-KD groups. Patients with MP infection in the KD group were significantly younger than those in the non-KD group (25 vs 35 months, P < 0.001), although the age distribution of patients with MP infection was similar between the KD and non-KD groups. Seasonal peaks of MP infection in LRTI have been reported in our previous studies: outbreaks began in the summer months and peaked during August and September in Suzhou [9,18]. The seasonal distribution in our non-KD group is in line with our previous studies. Interestingly, however, no seasonal peak of MP infection in KD was observed in our study. This implies that outbreaks of MP infection in LRTI may not increase the incidence of MP-associated KD. There was no significant difference in the incidence of CAL in our study. Lee et al. also found no difference in either the left or right coronary artery between MP+ and MP- KD patients when analyzing 12 KD patients and 42 controls [8]. Although MP is an important causative agent that mainly resides in the respiratory tract, it may disseminate systemically to peripheral blood mononuclear cells and localize in arteries, where it may infect endothelial cells, vascular smooth muscle cells and monocytes/macrophages, leading to vascular changes. Momiyama et al. previously reported that an elevated level of MP antibody was associated with coronary artery disease [19], suggesting a close relationship between MP and vascular changes.
Unfortunately, there was no difference in CAL or CAA between the MP+ and MP- KD patients in our study, but this needs to be verified in larger study samples. KD itself can lead to respiratory, gastrointestinal, urinary, hepatic and nervous system disorders [1]. In our study, we found that respiratory involvement was more common in the MP+KD+ group than in the MP-KD+ group. Respiratory involvement in KD might be a consequence of vessel inflammation with increased vascular permeability and perivascular edematous changes [20]. It is well established that MP infections involve both the upper and lower respiratory tract, which could partly explain why respiratory tract involvement was observed frequently in the MP+KD+ patients. In addition, MP is also reported to be responsible for non-respiratory tract manifestations, including neurological, hepatic and cardiac diseases [21-23]. However, there was no statistical difference in non-respiratory manifestations between the MP+ and MP- KD patients. This might indicate that MP plays a minor role in these systems in KD patients. In MP infection, it is difficult to set up criteria for a "gold standard" to detect acute infections. Culture isolation is 100 % specific but too slow to be of timely diagnostic value, and there is no universally agreed upon gold-standard serological assay for the detection of antibodies to MP. Paired serology (≥4-fold rise in IgG titer by complement fixation tests) remains the mainstay for diagnosing MP [24]. However, our patients were treated with high-dose IVIG, which contains substantial amounts of MP antibody; such passive immunization would greatly increase MP-IgG. Based on this, in our study we detected MP-IgM, instead of IgG, to diagnose MP infection. On the other hand, specific IgM antibody can persist for up to a year after infection in some patients, but the pathogen is detected less frequently during the later stages of the disease. In our study, both the presence of IgM antibodies and positive PCR results were used as sufficient criteria for current MP infection. We therefore believe that our definition of MP infection is highly accurate in determining MP infection in KD patients. There are some limitations in the present study. First, the size of the study population was relatively small; therefore, more studies are needed in order to firmly establish the relationship between simple MP infection and KD. In addition, overdiagnosis and underdiagnosis Conclusion We demonstrated that 13.8 % of patients had MP infection at the time of KD diagnosis. MP infection in KD patients tended to occur in older children and was associated with a higher rate of respiratory tract involvement. No statistically significant difference in non-responders or coronary artery lesions was found between the MP+ and MP- KD patients.
v3-fos-license
2024-06-21T15:07:11.250Z
2024-06-01T00:00:00.000
270635725
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2075-4418/14/12/1302/pdf?version=1718806651", "pdf_hash": "0bc66004ccaa498ba138ea9b06e028608634a440", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42176", "s2fieldsofstudy": [ "Medicine" ], "sha1": "767c8a89eb90281b79f70cbdf645985b22501ad6", "year": 2024 }
pes2o/s2orc
Long-Term Results of Surgical Treatment for Popliteal Artery Entrapment Syndrome Introduction: Popliteal artery entrapment syndrome (PAES) is a rare disease of the lower limbs, mainly affecting young patients, due to extrinsic compression of the neurovascular bundle at the popliteal fossa. The aim of this study was to describe our experience over a median 15-year follow-up period. Methods: Patients treated for PAES in our institution from 1979 to 2024 were included. Preoperative, intraoperative, and postoperative data were analyzed. Results: A total of 47 patients, with a total of 78 limbs, were treated. Duplex ultrasound with active maneuvers was performed in all limbs (100%). Angiography was performed in almost all patients (97.4%), computed tomography angiography in 56 (71.8%), and magnetic resonance angiography in 22 (28.2%). Concerning surgical treatment, musculotendinous section was performed in 60 limbs (76.9%), and autologous venous bypass was achieved in 18 limbs (23.1%). The rates for freedom from target lesion revascularization (meaning that no significant stenosis or occlusion during follow-up required revascularization) and 15-year primary patency were 92.4% and 98%, respectively. Conclusion: Long-term results of surgical treatment for PAES seem to be very satisfying. Myotomy with or without arterial reconstruction using venous bypass can lead to good patency at 15 years of follow-up. Introduction Popliteal entrapment syndrome (PES) is an uncommon disease of the lower limbs, mainly affecting athletes or military personnel and young patients without any atherosclerotic risk factors. It results mainly from compression of the neurovascular bundle at the popliteal fossa by extrinsic popliteal musculotendinous structures. The syndrome can be classified as either anatomical or functional, and six different types are described according to the PVE Forum classification (Table 1) [1].
PVE Forum classification of compressing structures causing popliteal entrapment:
Type I: Popliteal artery running medially to the medial head of gastrocnemius
Type II: Medial head of gastrocnemius laterally attached
Type III: Accessory slip of gastrocnemius
Type IV: Popliteal artery passing below the popliteus muscle
Type V: Primary venous entrapment
Type VI: Variants
Type F: Functional entrapment
Anatomical PES is caused by vascular compression due to anatomic anomalies that are frequently present from birth and develop over time. In functional PES, on the other hand, the compression is caused by dynamic factors, such as contraction of hypertrophied muscles during physical activity. The type of popliteal structure involved determines the different clinical scenarios. Venous compression is primarily responsible for calf swelling, discoloration, and paresthesia. Arterial compression, meanwhile, is primarily responsible for exercise intolerance; lifestyle-limiting claudication; less commonly, post-stenotic aneurysm formation; and, in the worst cases, limb-threatening ischemia. Our study focuses on popliteal artery entrapment syndrome (PAES), in which, in most cases, the vascular injury or mechanical compression caused by the aberrant interaction between the popliteal artery and the musculotendinous structures in the popliteal fossa requires surgical correction. Therefore, the aim of this study was to describe our institutional experience in terms of diagnosis, surgical management, and long-term results of PAES over a 15-year follow-up period.
Pathophysiology The popliteal neurovascular bundle normally passes between the medial and lateral heads of the gastrocnemius muscle. Regarding PAES, its pathogenic process has an embryological basis related to the development of the popliteal artery and the surrounding musculature. The proximal part of the popliteal artery originates as a continuation of the superficial femoral artery, which itself originates from the fusion of the ischiatic artery and the femoral artery. Meanwhile, the union of the anterior tibial artery and the tibioperoneal trunk forms the distal part, which occurs prior to the migration of the head portion of the medial part of the gastrocnemius into the medial location of the popliteal fossa. The most commonly encountered muscular variation in PAES is that of the medial head of the gastrocnemius muscle. In humans, the muscle mass that is to make up the future medial head of the gastrocnemius muscle migrates across the popliteal fossa from its original lateral position. The early formation of the distal portion of the popliteal artery and the delayed migration of the proximal head of the medial gastrocnemius muscle cause the first and second types of compressive mechanisms [2]. In the absence of anatomical abnormalities, hypertrophy of the medial and lateral portions of the gastrocnemius muscles intermittently causes compression of the popliteal artery during plantar flexion, resulting in the sixth type of this syndrome, known as the functional subtype. Incidence The exact incidence of PES is not clear. Clinical studies report an incidence of 0.17%, while post-mortem studies suggest an incidence of 3.5% [1-3]. In a similar study performed by the Popliteal Vascular Entrapment Forum, an incidence of 4.3% was recorded in 162 limbs [3]. Most patients affected by PES are young (in their early 30s) males (83%) [4]. Hypertrophy of the medial gastrocnemius muscle observed in athletes has been widely linked to functional PAES, with approximately 60% of reported cases affecting young athletes under 30 years [5]. Nevertheless, most cases of functional PAES are not diagnosed. The diagnosis remains challenging because dynamic imaging is needed, as imaging examinations at rest may be normal [6]. Signs and Symptoms The clinical characteristics of anatomical PAES vary according to type, and a grading classification of the clinical severity of popliteal artery entrapment syndrome has been proposed. PAES may be asymptomatic or may present with intermittent claudication. In rare cases, it can also present with cold feet and the absence of a distal pulse, with acute limb-threatening ischemia being the most serious and critical manifestation. Repetitive trauma to the popliteal artery from focal impingement results in chronic inflammation, followed by occlusive disease or aneurysm formation. However, this type is underestimated because only dynamic maneuvers can detect it. Nevertheless, to avoid disease progression and its associated critical complications, surgeons and physicians should not underestimate a clinical suspicion of this pathology [7]. Diagnosis A specific diagnostic protocol for suspected PAES has not yet reached a worldwide consensus. Many studies published in the literature report, in the first instance, bedside noninvasive clinical tests, such as the ankle-brachial index (ABI) and duplex ultrasound [8].
However, these tests should be complemented by provocative maneuvers. Typically, a decrease in oscillometric deflections is observed when the gastrocnemius muscle is actively contracted by plantar flexion or overstretched by passive dorsiflexion of the foot. If the initial evaluation is indicative of PAES, cross-sectional imaging is suggested. Computed tomography angiography (CTA) and magnetic resonance angiography (MRA) delineate the underlying musculotendinous anatomical abnormalities. MRA is regarded as the "gold standard" for the identification of abnormal popliteal fossa myofascial anatomy related to PAES Types I-VI because of its high sensitivity [4,9]. However, to confirm the diagnosis, digital subtraction angiography is necessary in most cases. Most often, a medial shift of the artery may be observed, or a focal stenosis or, in rare cases, post-stenotic aneurysm formation. Last but not least, run-off vessel evaluation is mandatory in those cases in which a reconstruction is needed. Surgical Treatment Endovascular treatment is the standard of care for revascularization in patients affected by atherosclerotic peripheral artery disease, whereas surgery is considered the gold standard for PAES. Surgical management includes the release of any fibrous structures and the section of the muscular structures compressing the artery. Most often, the medial head of the gastrocnemius muscle is involved, and its section is mandatory. Hypertrophy of the soleus and plantaris muscles or of the lateral head of the gastrocnemius requires their resection. In advanced cases with arterial damage, such as arterial stenosis or occlusion, revascularization surgery is needed. Percutaneous transluminal angioplasty (PTA) may be performed. However, avoiding the placement of any stent in this anatomic district is strongly recommended because stent fracture and occlusion are common. Since most patients are active young people, long-term patency of the revascularization procedure should be ensured; thus, venous bypass grafting is a valid option. Materials and Methods Data from all patients treated for PES in our institution from 1979 to 2024 were retrospectively collected and prospectively analyzed. Patients treated for venous PES were excluded from this study; only patients affected by PAES were included. Informed consent was obtained, while ethical approval was waived due to the retrospective nature of the study, in accordance with our Ethics Committee. Preoperative, intraoperative, and postoperative data were extracted and entered into a dedicated database. The preoperative risk factors considered for analysis were sex, smoking habit, heart function, diabetes, Chronic Obstructive Pulmonary Disease (COPD), Chronic Kidney Disease (CKD), and sports activity. Each patient was asked whether a previously incorrect diagnosis had been suspected or whether a previous surgery had been performed for the same symptoms. Patients were classified as class 0 if they were asymptomatic, class 1 if they were affected by paresthesia and cold foot, class 2 if they were affected by intermittent claudication after more than 100 m, class 3 if they were affected by intermittent claudication after less than 100 m, class 4 if they were affected by rest pain, and class 5 if distal gangrene or necrosis was present. All patients were preoperatively assessed with continuous-wave (CW) Doppler, with active maneuvers such as plantarflexion against resistance, to screen for the entrapment syndrome.
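A minimal sketch of the clinical severity grading described above is given below, assuming the classes are checked from most to least severe; the function and argument names are ours, not the authors'.

```python
# Illustrative encoding of the class 0-5 grading used in the Methods.
from typing import Optional

def paes_clinical_class(gangrene_or_necrosis: bool = False,
                        rest_pain: bool = False,
                        claudication_distance_m: Optional[float] = None,
                        paresthesia_cold_foot: bool = False) -> int:
    if gangrene_or_necrosis:
        return 5                      # distal gangrene or necrosis
    if rest_pain:
        return 4                      # rest pain
    if claudication_distance_m is not None:
        return 3 if claudication_distance_m < 100 else 2  # claudication <100 m vs >100 m
    if paresthesia_cold_foot:
        return 1                      # paresthesia and cold foot
    return 0                          # asymptomatic

print(paes_clinical_class(claudication_distance_m=80))  # -> 3
```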
To assess for arterial compression under duplex ultrasound, a popliteal artery scan is carried out on a supine patient while the lower leg muscles are relaxed and in plantar flexion. The patient then plantar-flexes the ankle against resistance, which in some individuals demonstrates complete arterial occlusion. Patients were then submitted to digital subtraction angiography (DSA) with dynamic maneuvers to confirm the diagnosis and, beyond positivity on the dynamic maneuvers, to evaluate the popliteal artery status and the distal vessel run-off. The popliteal artery status was evaluated as normal, altered, occluded, aneurysmal, or stenotic. Run-off vessels were classified based on the number of patent vessels, with localized, severe, or diffuse atherosclerosis with occlusion of one to two vessels. At the last examination, a CTA or an MRA was performed to confirm the presence of abnormal structures compromising the popliteal fossa. Patients were stratified into six different subtypes, according to the PVE Forum classification. The surgical approach was chosen between medial and posterior depending on the extension of the arterial pathology and the type of myofascial structures involved. After exposure of the popliteal fossa neurovascular bundle, the structure compressing the vessel was identified and sectioned. In cases of vascular dilatation or occlusion, the aneurysm was resected and bypass grafting was performed. Whenever the distal popliteal artery was not patent, a distal bypass was performed. The main conduit used was the great saphenous vein, in an inverted fashion. Every patient was enrolled in our dedicated follow-up, including physical examination and duplex ultrasound at 6 months, 1 year, and annually thereafter. Postoperative complications, reinterventions, or symptom reappearance were recorded at 30 days, 6 months, and annually thereafter. The primary endpoint was symptom regression, and the secondary outcome was primary patency of the vascular conduit in cases of vascular reconstruction. Statistical Analysis Comparative analyses were executed using the χ2 test and Fisher's exact test, contingent upon the data. IBM SPSS Statistics for Windows v.25 (IBM Corp., Armonk, NY, USA) was the tool of choice for statistical analysis. Continuous variables were expressed as means, while categorical variables were represented as percentages. A p-value of ≤0.05 was the threshold for statistical significance. Patients' Characteristics From 1979 to 2024, a total of 76 patients and 118 limbs affected by PES were identified. A total of 40 limbs were treated for venous PES and were excluded from this study. We thus identified 47 patients affected by popliteal artery entrapment syndrome at our academic institution between 1979 and 2024, and a total of 78 limbs were treated for PAES. A total of 31 patients (39.7%) had a history of contralateral PAES. The median age of the patients was 34 years (range, 14-62), and most were male (77.2%). The majority were athletes, with only 6 (12.76%) claiming not to play sports. Surgical Treatment All 78 limbs were surgically treated. The preferred approach was the posterior approach (61 limbs, 78.2%), whereas 17 (21.7%) received the medial approach. Musculotendinous section (MTS) was performed in 60 limbs (76.9%). In the remaining 18 limbs, MTS was not performed; instead, a release of the musculotendinous structures was carried out.
MTS associated with PTA was performed in 2 limbs (2.5%) (Table 4). In 4 limbs, preoperative fibrinolysis with urokinase was performed to restore distal outflow. Autologous venous bypass grafting was performed in 18 limbs (23.1%), with only 1 limb receiving a polytetrafluoroethylene (PTFE) graft. The preferred autologous vein graft was the great saphenous vein. During the mean long-term postoperative follow-up of 181 months (range, 28-480 months), only 1 patient, who had received an autologous venous bypass graft (although poor run-off vessels were found preoperatively), underwent transmetatarsal amputation 13 months after reconstruction due to occlusion of the graft. Repeat reconstruction due to occlusion of the venous bypass graft was performed in 2 limbs, 1 at 7 months after the first intervention and 1 at 21 months. At a mean follow-up of 181 months, the rates for freedom from target lesion revascularization (TLR) and 15-year primary patency for the surgical treatment of PAES were 92.4% and 98%, respectively. Patients treated with MTS had a 15-year patency rate of 98%, whereas those treated with reconstruction had a 71% patency rate at 15-year follow-up (p < 0.001). Discussion This study reports one of the largest series of PAES with the longest follow-up reported in the literature, to the best of the authors' knowledge. Since the incidence of this syndrome is quite low, our center has had the privilege of being a national referral center for PAES, with a large series collected in a prospective database [10,11]. As is well known, most patients affected by PAES are young men and, in some cases, children; in our experience, there were 5 pediatric patients. The most common symptom of PAES in the pediatric age group was claudication, but unfortunately acute limb ischemia (ALI) was also common [12]. The median percentage of ALI was 11% in a recent meta-analysis on PAES that included primarily adult patients [13]. Since PAES is a progressive disorder, popliteal artery microtrauma and compression exerted by muscular or tendinous abnormalities can lead to arterial damage, with thrombosis or aneurysmal degeneration [14]. This process usually has a chronic evolution, allowing a collateral network to develop that can compensate in cases of complete occlusion.
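As a rough, illustrative re-check of the patency comparison reported above, the snippet below runs Fisher's exact test on 2x2 counts back-calculated from the reported group sizes and percentages (60 MTS limbs at about 98% versus 18 reconstructed limbs at about 71%). The study's own analysis was presumably time-to-event based, so this crude cross-sectional comparison is only an approximation, not the authors' statistics.

```python
# Hedged approximation of the 15-year patency comparison (counts inferred
# from reported percentages, not taken from the paper's raw data).
from scipy.stats import fisher_exact

mts_patent, mts_total = 59, 60        # ~98% patency after MTS alone
recon_patent, recon_total = 13, 18    # ~71% patency after reconstruction

table = [
    [mts_patent, mts_total - mts_patent],
    [recon_patent, recon_total - recon_patent],
]
odds_ratio, p = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.1f}, p = {p:.4f}")
```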
For the same reason, early diagnosis is essential to prevent limb ischemia and irreversible arterial damage [13,14], and it is critical to receive treatment as soon as possible to avoid major complications and the possibility of amputation. However, it is important to rule out other potential causes of ALI in pediatric patients, including premature accelerated atherosclerosis, microemboli, Takayasu's arteritis, collagen vascular disease, coagulopathy, and the presence of cystic adventitial disease (CAD). Notably, CAD is a rare condition, accounting for about 0.1% of vascular diseases [15]. It is characterized by the development of a cystic mass in the subadventitial layer of the vessel. The pathognomonic sign of this condition is the visualization of a thin, echogenic line separating the lumen of the vessel from the cyst, with the narrowed lumen presenting an ultrasonic scimitar sign on DUS imaging. In most cases, the neurovascular evaluation is normal. Passive flexion of the knee, however, can result in reduced distal pulses. This differs from PAES, where active plantar flexion or passive dorsiflexion maneuvers contracting the gastrocnemius muscle always reduce pedal pulses. Consequently, it is possible that physical examination and diagnostic tests will not be able to differentiate between CAD and PAES; therefore, surgical planning should account for the possibility of encountering either condition. Settembre et al. [13] reported a literature review of PAES in children that included 18 cases of ALI; complications occurred in four children, but full recovery was obtained in all cases and no major amputations were registered. In our experience, all 5 pediatric patients were successfully treated, without sequelae and with complete symptom regression. PAES is frequently under-reported or misdiagnosed. Some evidence in the literature reports an average delay of 2 years before a PAES diagnosis is received; other patients experienced symptoms for almost 10 years before the cause was identified. In our series, 10 patients with exertional pain received a diagnosis of chronic compartment syndrome before coming to our attention, delaying the proper diagnosis of PAES. Non-invasive imaging modalities associated with a meticulous clinical examination, including dorsiflexion and plantarflexion maneuvers that cause popliteal artery compression and distal pulse disappearance, usually lead to an accurate diagnosis. In our series, all patients underwent active maneuvers such as dorsiflexion and plantarflexion, which can compress the popliteal artery and obliterate the blood flow during the examination. These maneuvers are imperative for the diagnosis of PAES at an early stage [16]. Thus, duplex ultrasonography represents a helpful tool to diagnose PAES, especially when associated with provocative maneuvers obtained with calf muscle contraction [17,18]. However, to accurately determine the extent of the entrapment, multiple images are frequently necessary. CTA and MRA examinations can refine the diagnosis, showing in most patients the abnormalities of the popliteal artery and the surrounding tissues. In our experience, invasive techniques such as conventional angiography were used in almost all cases and represent an essential tool for patients who require endovascular treatment alongside surgical treatment. In fact, catheter-based local arterial thrombolysis has been reported in cases of complete artery occlusion, especially in the acute setting [19].
In our series, all patients underwent at least three examinations. The sensitivity for the detection of PAES was 96.2% for DSA with dynamic maneuvers, 54.5% (12 limbs) for MRA, and 51.7% (29 limbs) for CTA. Stearns et al. reported a sensitivity of 88% for MRA, reflecting a very high sensitivity of detection [18]. DSA, on the other hand, is more invasive but necessary, aiding in establishing the diagnosis and detecting arterial abnormalities with 100% sensitivity [20]. The literature has no consensus with regard to the importance of DSA; in fact, in recent times some authors [21-23] have considered catheter-directed angiography an invasive procedure that is no more diagnostic than CTA and MRI. However, in our opinion, DSA associated with active maneuvers is the most informative diagnostic tool in our armamentarium, and it can be combined with preoperative fibrinolysis with urokinase, performed to restore distal outflow, as reported in 4 cases in our experience. Sinha et al., in a literature review of 26 PAES studies, reported bilateral involvement in around 40% of cases. In our series, most patients presented with bilateral PAES, and even in cases of asymptomatic contralateral popliteal entrapment discovered on imaging, surgical treatment is recommended to prevent definitive arterial damage or complications. Concerning surgical treatment, different surgical methods have been described, aiming at releasing the popliteal entrapment, re-establishing normal anatomy, and restoring normal arterial flow. In most cases, in fact, surgical correction requires myotomy or resection of the aberrant musculotendinous structures. In addition, bypass is required in cases of popliteal artery degeneration, with stenosis or occlusion, and post-stenotic dilatation or aneurysm formation. With regard to the surgical technique, our practice recommends the posterior approach because the relationship between the musculotendinous tissues and the popliteal artery, as well as other anatomical features of the popliteal fossa, can generally be better observed using the posterior approach than the medial one. Thus, in our experience, the preferred surgical approach was the posterior: 61 limbs (78.2%) received the posterior approach, whereas 17 (21.7%) received the medial approach, especially in those cases where an extended surgical revascularization was necessary. As reported in a large Japanese retrospective multicenter study, the posterior approach was the main surgical option; nevertheless, due to the low number of cases receiving the medial approach, a comparison between the two approaches was not reported [24]. Autologous venous substitution remains the best conduit to use, as reported in the literature [25], and vascular reconstruction with venous bypass is the most frequent procedure described in the literature [26]. In our experience, autologous venous bypass grafting was achieved in 18 limbs (23.1%), and only 1 limb received a polytetrafluoroethylene (PTFE) graft, due to the unavailability of a venous conduit. Some authors suggest endovascular therapy with thrombolysis as an adjunct to MTS to restore distal outflow. In our series, 4 limbs with poor outflow underwent preoperative thrombolysis with urokinase followed by MTS.
There was a clear difference in arterial patency between arterial reconstruction and myotomy alone. This could be explained not only by deterioration of the popliteal artery quality, but also by the poor run-off of the below-the-knee arteries, caused by distal embolization from arterial or aneurysmal thrombosis. Only a small number of studies have provided a 5-year follow-up with a positive outcome following surgical therapy (84-92%). In our series, the rates for freedom from TLR and 15-year primary patency for the surgical treatment of PAES were 92.4% and 98%, respectively. Patients treated with MTS had a 15-year patency rate of 98%, whereas those treated with reconstruction had a 71% patency rate at 15-year follow-up. Long-term patency was superior when musculotendinous sectioning was performed without vascular reconstruction. Conclusions Long-term results of surgical treatment for PAES seem to be very satisfying, and myotomy with or without arterial reconstruction using venous bypass leads to good patency at 15 years of follow-up. The key to the success of PAES treatment remains an exact and prompt diagnosis, to avoid delays and irreversible arterial damage.
Figure 1. Digital subtraction angiography in a patient affected by PAES with popliteal artery stenosis after dynamic maneuvers. (A) Neutral position; (B) dynamic maneuvers of dorsiflexion of the feet.
Figure 2. CTA of the left lower limb in a patient with PAES. (A) Coronal section of CTA showing anomalous insertion of the medial head of gastrocnemius (arrow) and occluded popliteal artery (dotted arrow). (B) Sagittal volume rendering showing anomalous insertion of the medial head of gastrocnemius (arrow) and occluded popliteal artery (dotted arrow).
Table 1. PVE Forum classification of popliteal entrapment syndrome.
Table 2. Demographic characteristics of patients with popliteal artery entrapment syndrome.
Table 3. Preoperative clinical symptoms and type of imaging used for diagnosis.
Table 4. Types of surgical treatment and long-term patency rate.
v3-fos-license
2023-02-23T14:22:45.018Z
2022-04-06T00:00:00.000
257092994
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/s41598-022-09721-9.pdf", "pdf_hash": "7a6298bcfbbc4b93c8166ed8b4660a52c4cf6653", "pdf_src": "SpringerNature", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42177", "s2fieldsofstudy": [ "Environmental Science", "Biology", "Medicine" ], "sha1": "7a6298bcfbbc4b93c8166ed8b4660a52c4cf6653", "year": 2022 }
pes2o/s2orc
Extensive epigenetic modification with large-scale chromosomal and plasmid recombination characterise the Legionella longbeachae serogroup 1 genome Legionella longbeachae is an environmental bacterium that is the most clinically significant Legionella species in New Zealand (NZ), causing around two-thirds of all notified cases of Legionnaires' disease. Here we report the sequencing and analysis of the geo-temporal genetic diversity of 54 L. longbeachae serogroup 1 (sg1) clinical isolates, derived from cases from around NZ over a 22-year period, including one complete genome and its associated methylome. The 54 sg1 isolates belonged to two main clades that last shared a common ancestor between 95 BCE and 1694 CE. There was diversity at the genome-structural level, with large-scale rearrangements occurring in some regions of the chromosome and evidence of extensive chromosomal and plasmid recombination. This includes the presence of plasmids derived from recombination and horizontal gene transfer between various Legionella species, indicating there has been both intra- and inter-species gene flow. However, because similar plasmids were found among isolates within each clade, plasmid recombination events may pre-empt the emergence of new L. longbeachae strains. Our complete NZ reference genome consisted of a 4.1 Mb chromosome and a 108 kb plasmid. The genome was highly methylated with two known epigenetic modifications, m4C and m6A, occurring in particular sequence motifs within the genome. The isolates shared 3045 core SNPs identified via Gubbins, and belonged to two main clades (Fig. 1). The larger clade consisted of isolates from several regions in NZ and shared a date of common ancestor between the years 604-1755 CE (95% HPD interval), while the smaller clade consisted of isolates from the Canterbury district only, sharing a date of common ancestor between 1904 and 1986 CE (95% HPD interval; S1 Figure). Given the ad hoc nature of our sampling and the strong bias towards the Canterbury region, it is not clear if the smaller clade truly reflects the distribution of these lineages within NZ. The oversampling of Canterbury isolates is primarily a historical consequence stemming from the 1990s, when pneumonia aetiological studies in this region revealed the importance of Legionella as a cause of pneumonia 21,22. As a result, systematic routine testing, including a commitment to culture patient specimens allowing the isolation of clinical strains, was initiated, which subsequently led to a disproportionate number of isolates being derived from the region. Recently it has been shown that when routine, systematic testing similar to that used in Canterbury is performed in other NZ regions, there are several that have higher or similar rates of LD 2. Thus, the higher rates previously observed in Canterbury appear to predominantly reflect the testing regime employed rather than some special or intrinsic quality of the region. Future studies that include more isolates from outside Canterbury will clarify this and potentially identify the pathways that have led to its introduction to NZ. One potential explanation for the splitting of the two clades is that there have been two separate introductions into NZ. L. longbeachae has recently been detected by qPCR on the bark of live pine trees 23, particularly on the species Pinus radiata, which is an important commercial crop and the most common pine tree species grown in NZ 24.
It was first introduced in the 1850s, but the boom in commercial forestry did not begin until 1920-1930. This boom coincides with the date of the common ancestor of clade 2 and many sub-clades of clade 1. This suggests L. longbeachae may have been introduced to NZ with Pinus radiata, followed by relatively rapid evolution and dispersal. The swift evolution and spread of disease-causing strains may be a feature of Legionella. It has also been observed in L. pneumophila, where David et al. 20 showed phylogenetic evidence of rapid dispersion and the emergence of disease-causing strains in man-made environments over the last 100 years. In our plasmid analyses, three additional L. longbeachae plasmids were identified through further sequencing of two of our clinical isolates, B1445CHC and B41211CHC. Isolate B1445CHC was found to contain two plasmids of 73,728 bp and 150,426 bp (pB1445CHC_73k and pB1445CHC_150k), while B41211CHC contained only one plasmid of 76,153 bp (pB41211CHC_76k). Alignment of the NZ L. longbeachae read sets to the Legionella reference plasmids (pNSW150, pB41211CHC_76k, pB1445CHC_73k, pF1157CHC, pB1445CHC_150k and pLELO) demonstrated that some of the NZ isolates contained an exact copy of the reference plasmids investigated, some contained similar plasmids with reads aligning to sections of the reference plasmids, and some contained reads that aligned to sections of more than one reference plasmid (Fig. 1). This illustrates that the Legionella plasmids share a common backbone separated by variable regions and that there is extensive recombination amongst them (S2 Figure). The plasmid results also correlated with the clades (Fig. 1) identified via phylogenetic analysis, suggesting that plasmid recombination events may pre-empt the emergence of new L. longbeachae strains. In our global analysis, the 89 L. longbeachae isolates with available sequence information from the United Kingdom (UK) and NZ were found to share 3219 core SNPs and belonged to multiple small clades. Most of the clades consisted of isolates from a single country, whilst a small number had isolates from both countries (S3 Figure), indicating some recent global transmission. Genetic diversity of L. longbeachae sg1 clinical isolates. Given that there are few complete L. longbeachae genomes available, we chose one of our sg1 isolates as the reference genome to further analyse our other 53 NZ isolates. Isolate F1157CHC was sequenced using Illumina short-read sequencing in the initial comparative dataset and was subsequently sequenced with PacBio long-read sequencing. To visualise the data from the comparison of the 54 samples in the dataset and to show multiple facets of this study simultaneously, an overarching Circos figure (Fig. 2) was generated using the complete PacBio genome of F1157CHC as a backbone. The tracks in the figure are described in detail in the legend. Overall, it can be seen that the regions detected as recombinant by Gubbins are unevenly distributed, with some clusters around the genome (~ 600 kb, ~ 800 kb, ~ 1900-2050 kb) and a large, slightly less dense region (2300-2800 kb). In total, 655 protein-coding genes are at least partially included in these regions of high recombination, and they are not clearly associated with any functional class (S4 Figure). As indicated in the gene rings in Fig. 2, the genome of F1157CHC was functionally annotated and categorized using the amino acid sequences from the NCBI PGAP predictions against the eggNOG-mapper database (v. 2.0).
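One way the per-isolate plasmid content described above could be summarised is by mapping each isolate's read set to the reference plasmids and reporting breadth of coverage. The sketch below does this with pysam on a sorted, indexed BAM; the file and contig names are hypothetical, and this is not the authors' pipeline.

```python
# Hypothetical sketch: fraction of a reference plasmid covered by an
# isolate's mapped reads, given a sorted, indexed BAM from any standard
# short-read mapper. Names are placeholders.
import pysam

def plasmid_breadth_of_coverage(bam_path: str, plasmid_name: str, min_depth: int = 1) -> float:
    """Fraction of plasmid positions covered by at least `min_depth` reads."""
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        length = bam.get_reference_length(plasmid_name)
        a, c, g, t = bam.count_coverage(plasmid_name, 0, length)
        covered = sum(1 for i in range(length) if a[i] + c[i] + g[i] + t[i] >= min_depth)
    return covered / length

# e.g. one isolate's reads mapped against the pLELO reference
print(f"{plasmid_breadth_of_coverage('isolate_vs_pLELO.sorted.bam', 'pLELO'):.1%}")
```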
The genes were coloured according to their COG categories (Fig. 2). We found that 3410 (94.14%) returned an annotation result, and of those 2741 (80.38%) were categorized with COG functional categories (S1 Table). In terms of the performance of the eggNOG server, this level of annotation for L. longbeachae is slightly above the level of 76% reported for the Legionellales order, and close to the eggNOG v. 5.0 database average of 80% 25. The main functional groups (those with a single COG category definition) accounted for 2522 (73.96%) of the annotations. As expected, the functions of the genes encoded on the chromosome and the plasmid differ (S5 Figure), with the plasmid carrying more genes of unknown function (category S) and more genes associated with replication, recombination and repair (category L). COG category S ("function unknown") is the largest single category and accounts for 547 (16.04%) of the returned annotations. We used our set of 53 draft genomes to investigate both the core genome and the pangenome. A genome summary of these 53 draft genomes, plus the complete genomes of F1157CHC, NSW150 and FDAARGOS_201, can be found in S2 Table. These differences are small and not significant, suggesting genome completeness doesn't strongly influence our ability to annotate genomes. The only exception is the number of rRNAs, which are much more numerous in the complete genomes (12 copies each) than in the draft genomes (6.89 ± 0.42), reflecting the difficulty of assembling highly repetitive regions from short-read sequencing data. In comparison to the recent study of Bacigalupe et al. 17 (n = 56, predominantly UK sg1 isolates), we found that the range of our coding sequences was similar to the 3558 genes they reported. We cannot say if there is any real difference in gene numbers between the NZ strains reported here and those in the Bacigalupe et al. study 17, because only summary gene numbers per genome were provided, but it seems unlikely. Using Roary 26, we found a pangenome of 6517 genes and a core genome of 2952 genes amongst 56 isolates (our 54 isolates, NSW150 and FDAARGOS). This is ~ 86.3% of the number of genes in the F1157CHC genome, indicating a large core genome and a small accessory genome amongst the isolates in this study. Bacigalupe et al. 17 also reported a core genome (2574 genes) and pangenome (6890 genes), which were calculated over a shorter, but contemporaneous, timeframe. Given that the isolate numbers are almost the same (excluding reference isolates), but the methodologies for calculating the core genome differ, it is interesting to observe a smaller number of genes in the core but a larger number of genes in the pangenome. It is tempting to speculate that there might be a smaller gene repertoire for L. longbeachae in NZ, possibly a result of its relative geographical isolation, or maybe environmental conditions are different, requiring the use of different sets of genes to survive within NZ soil. Using the categories defined within Roary, we found 157 genes (95 to 99% of strains) in the soft core category, 865 (15 to 95%) in the shell category and 2543 (0 to 15%) in the cloud category (S1 File). Currently, there are 61 recognised species and 3 subspecies within the Legionella genus (http://www.bacterio.net/legionella.html). Of these, 58 have at least draft genome sequences available, which aid in understanding the evolution of the genus 27.
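The Roary category counts quoted above (core, soft core, shell and cloud) can be re-derived from Roary's gene_presence_absence.Rtab output, a tab-delimited 0/1 gene-by-genome matrix. The sketch below assumes that standard output file and applies the same frequency thresholds; the file path is a placeholder.

```python
# Re-deriving Roary's pangenome categories from gene_presence_absence.Rtab.
import pandas as pd

mat = pd.read_csv("gene_presence_absence.Rtab", sep="\t", index_col=0)
n_genomes = mat.shape[1]
freq = mat.sum(axis=1) / n_genomes                   # fraction of genomes carrying each gene

core      = (freq >= 0.99).sum()                     # core: 99-100% of strains
soft_core = ((freq >= 0.95) & (freq < 0.99)).sum()   # soft core: 95-99%
shell     = ((freq >= 0.15) & (freq < 0.95)).sum()   # shell: 15-95%
cloud     = (freq < 0.15).sum()                      # cloud: 0-15%

print(f"pangenome: {len(freq)} genes")
print(f"core {core}, soft core {soft_core}, shell {shell}, cloud {cloud}")
```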
A core genome for the genus has been estimated to contain only 1008 genes, highlighting its diversity. With a GC content of ~ 39% and a genome of ~ 3.3 Mb, L. pneumophila is regarded as the most clinically important species 1. A recent Australian study 28 estimated the core genome of this species at 2181 genes, which is 36.7% of the pangenome's genes (5919 genes). In comparison, in our study, the analogous figure is 45.3% for L. longbeachae, suggesting its genome is probably more stable than the L. pneumophila genome. Finally, we used FastGeP 29 to perform an ad hoc whole-genome MLST analysis of the 56 isolates using the 3420 CDSs in the F1157CHC reference genome. We found that 2756 loci were shared, of which 1321 (47.93%) were identical at the allelic level. One hundred and eight of the shared loci were excluded because of hypothetical gene duplications, and 664 were excluded because of incomplete information, such as missing alleles, truncation or nucleotide ambiguities. After removal of these loci, 2648 (of which 1327 were polymorphic) were used to construct the distance/difference matrix. As FastGeP works with allelic distances, the distance between two alleles is independent of the number of underlying sequence differences. Visualization of the FastGeP matrix in iTOL 30 is shown in S2 File. Antibiotic resistance and virulence factor genes. Unsurprisingly, our 54 L. longbeachae sg1 isolates all contained a chromosomal class D β-lactamase gene homologous to the bla OXA enzyme family. This 795 bp bla OXA-like gene, whose phenotypic features are uncharacterized, is also found in L. oakridgensis (100% nucleotide match). Twenty-one isolates also have another molecular class D β-lactamase with a 100% nucleotide match to bla OXA-29, contained on a plasmid similar to L. pneumophila pLELO. The bla OXA-29 gene was first identified in the Fluoribacter gormanii type strain ATCC 33297 T (GenBank accession number NG_049586.1 31). The majority of the known class D β-lactamases are found on mobile genetic elements, and this indicates the transfer of bla OXA-29 on conjugative plasmids amongst the various Legionella species, such as L. pneumophila, L. sainthelensi, and L. hackeliae. This bla OXA-29 β-lactamase is part of a group of structurally related serine enzymes that have high activity against penicillins, reduced activity against cephalosporins and no activity against carbapenems 32. All isolates also contained a previously identified tetracycline destructase gene, tet56, which confers tetracycline resistance when expressed 33. Tet56 belongs to a family of flavoprotein monooxygenases that inactivate tetracycline antibiotics through covalent modification of the antibiotic scaffold 33,34. Previously, the antimicrobial susceptibilities of 16 of the isolates sequenced in our current study had been investigated 35. For these isolates, the tetracycline MIC 90 was found to be high, ranging between 16 and 64 mg/mL when the isolates were grown in BYE broth, suggesting that tet56 was expressed and the protein was functional in these isolates (S3 Table). Virulence factor database analysis showed that our 54 isolates, as well as the NSW150 and FDAARGOS_201 complete genomes, had a near-identical pattern, with between 33 and 36 virulence factor genes (S4 Table). Many of these encoded various components of the type IVB Dot/Icm secretion system (T4SS), which is essential for virulence and has been found to be present in all Legionella species examined to date 36. Legionella longbeachae chromosome and plasmid architecture.
Our complete chromosome for isolate F1157CHC has been published19, and therefore the description of this genome is kept relatively brief and is more comparative in nature with the other available reference L. longbeachae genomes (NSW150 and FDAARGOS_201). We compared all three reference genomes using the MAUVE plugin within Geneious (v. 9.1.8) and the results are shown in Fig. 3. At 4,142,881 bp, F1157CHC is larger by 65,549 bp when compared to NSW150 and smaller by 19,851 bp than FDAARGOS_201. Overall, the genomes of F1157CHC, NSW150 and FDAARGOS_201 are similar in their organisation, with the MAUVE alignment showing four (81-2264 kb) collinear blocks in the genomes, called LCB1, LCB2, LCB3 and LCB4. At an overall genome level, the order and orientation of these blocks indicates a greater similarity between NSW150 and F1157CHC, while FDAARGOS_201 is slightly different (S5 Table). Three of these blocks (LCB2, LCB3 and LCB4) are found in all three genomes, and a further one (LCB1) is found only in NSW150 and FDAARGOS_201. The genomic coordinates and the percentage of each collinear block that contains genomic sequence are described in S5 Table. In addition, there are two and three small regions in two of the genomes that are not found in collinear blocks, totaling 4.2 and 4.4 kb for NSW150 and FDAARGOS_201, respectively. For FDAARGOS_201 and NSW150, two of these unique regions are found flanking the shortest collinear block of 81 kb (LCB1), and for NSW150 the third region is a short sequence at the start of the chromosome (unusually for this chromosome, the dnaA gene is not annotated to start at position 1). The LCB1 block shows the greatest disparity in content, with the genomic length in NSW150 being 31.3 kb but 73.6 kb in FDAARGOS_201; hence there are many gaps in the collinear block alignments. It should be noted that, as the MAUVE aligner within Geneious works on a linear chromosome, the LCBs at the end of the chromosome form part of the same larger collinear block, meaning that on the circular chromosomes there are in effect only three blocks, with the ~1807 kb block LCB3 being flanked by the content-variable 81 kb block LCB1. There are thus only a few boundaries around the main collinear blocks. The boundary between LCB2 and LCB3 in FDAARGOS_201 and F1157CHC occurs within the traF gene, part of the tra operon. The organization is more complex in NSW150, where the 31.5 kb block of LCB1 and a 3.9 kb region containing three hypothetical genes are found between LCB2 and LCB3, with the tra operon being found on LCB1. The tra operon is important for pathogenicity because it forms part of the T4SS for the transfer of plasmids via conjugation37. At the other main boundary, between LCB3 and LCB4, the transfer-messenger RNA (tmRNA) ssrA gene is present at the end of LCB4 in all three chromosomes. The tmRNA genes are part of the trans-translation machinery, which can overcome ribosome stalling during protein biosynthesis. Trans-translation has been found to be essential for viability within the Legionella genus, with the ssrA gene being unable to be deleted in L. pneumophila38. When ssrA was placed under the control of an inducible promoter, decreasing tmRNA levels led to significantly higher sensitivity to ribosome-targeting antibiotics, including erythromycin38. At the end of LCB3 in F1157CHC and NSW150, there is an IS6 family transposase and an SOS response-associated peptidase, but little is known about these genes.
The flanking gene in FDAARGOS_201 comes from a small 1.3 kb orphan block adjacent to LCB1 and encodes a short DUF3892 domain-containing protein, as defined by Pfam39. Whilst its function is unknown, it is found widely across bacteria and archaea, and within the Legionellales order. As described above, the collinear blocks include gaps, and except for LCB1, all other defined blocks in the isolates are found with the genomic length being greater than 87% of the block length. Within the blocks themselves, LCB1 shares a common region of ~23.8 kb and has a larger non-overlapping (i.e. different gene content) region in FDAARGOS_201 compared to NSW150. For the remaining three blocks there are combinations of absence and presence of genetic material within these blocks across the three isolates. For the regions over 10 kb, these can be summarized as regions that are present in only a single isolate (37.1 kb in LCB3 of F1157CHC), or in two isolates (12. The gene content in these blocks is varied, and the boundaries lie close to tRNA genes, site-specific integrase genes, SDR family oxidoreductase genes, ankyrin repeat domain-containing genes, or in intergenic space, but for some of the boundaries transposase genes (IS3, IS4, IS6, and IS926 families) are involved. In bacteria, tRNAs have been shown to be integration sites40, so finding them at collinear block boundaries is unsurprising. Only NSW150 and F1157CHC were found to contain a plasmid (pNSW150 and pF1157CHC19). At 108,267 bp, pF1157CHC is 36,441 bp larger when compared to pNSW150. To assess plasmid architecture more fully, the three additional L. longbeachae plasmids we obtained from further sequencing of two of our isolates (pB1445CHC_73k, pB1445CHC_150k and pB41211CHC_76k) were aligned using MAUVE and visualised in Geneious (Fig. 4). The plasmids share a common backbone consisting of conjugational genes (yellow collinear block), ranging in size from ~25,000 to ~28,000 bp, as well as several other collinear blocks that vary in size and orientation (Fig. 4). These blocks are separated by variable regions around mobile genetic elements, such as insertion sequences. Analysis of the larger plasmid pB1445CHC_150k revealed that it is the same as plasmid pLELO, first reported in L. pneumophila. MAUVE alignment of the L. longbeachae plasmids, pLELO and two L. sainthelensi plasmids (pLA01-117_165k and pLA01-117_122k41) (S6 Figure) again shows that Legionella plasmids have a common backbone including conjugational genes separated by variable regions. Although the number of plasmids in our analysis is limited, the Legionella plasmids identified to date can be broadly divided into two groups: one consisting of the smaller plasmids of ~70 kb that appear to be primarily a L. longbeachae group (pNSW150, pB1445CHC_73k, pB41211CHC_76k), and another group consisting of larger plasmids that occur in various species, including our complete genome (pF1157CHC, pLELO, pLA01-117_165k). This again suggests there has been extensive plasmid recombination followed by both intra- and inter-species transfer, supporting the findings of Bacigalupe et al.17. Interestingly, pB1445CHC_73k has a repetitive region that was identified as a clustered regularly interspaced short palindromic repeat (CRISPR) element. This element belongs to the type I-F system, with the same repeat region between 20 and 33 spacer regions and the associated cas1-cas6f enzymes (Fig. 5).
While there are few reports of naturally occurring CRISPR-Cas arrays on plasmids, previous studies42,43, as well as a recent comparative genomics analysis of available bacterial and archaeal genomes, have demonstrated that type IV CRISPR-Cas systems are primarily encoded by plasmids44. There have also been similar reports of a type I-F CRISPR-Cas array being present on the plasmids of L. pneumophila strains45,46. Further analysis of our other L. longbeachae isolates showed that the type I-F CRISPR-Cas element is also present in 5 other strains (F2519CHC, LA01-195, LA01-196, LA03-576). Legionella longbeachae methylome. The PacBio assay utilized in the current study is unable to detect 5mC modifications. However, methylome analysis of our F1157CHC genome identified two classes of modified base, N4-methylcytosine (m4C) and N6-methyladenine (m6A). Bases in the chromosomal sequence were more likely to be modified (1.49% of As and 6.4% of Cs being methylated) than those in the plasmid (1% of As and 2.4% of Cs) (Fig. 6A). Modifications were evenly distributed within a given molecule, except for a single cluster of m6A in the chromosome, where this methylation 'spike' is focused on a specific gene, BOB39_12100, as depicted in Fig. 6B. The majority (73.6%) of m6A bases occurred in three sequence motifs (ATGNNNNNNRTGG/CCAYNNNNNNCAT, GATC and GGGAG). Two of these (ATGNNNNNNRTGG/CCAYNNNNNNCAT and GATC) are almost always methylated (97-99.5% of occurrences), while the third (GGGAG) is frequently modified (77.2% of occurrences). By contrast, the m4C modifications are not strongly concentrated in motifs. The motif most frequently associated with this modification (CSNNNTB) is only modified in 9.2% of occurrences (about 3 times the background rate for all cytosines). DNA methylation in bacteria is often associated with restriction-modification (RM) systems, which protect the bacterial cell from foreign DNA. These systems combine a restriction endonuclease that digests unmethylated copies of target sequences and a DNA methyltransferase that methylates this sequence motif in the bacterium's own DNA. The strong association between m6A modification and three sequence motifs in the L. longbeachae genome suggests this modification is part of an RM system. Using REBASE, we identified putative methyltransferases and endonucleases in the L. longbeachae genome. This analysis revealed three neighbouring genes that encode a type I RM system associated with the ATGNNNNNNRTGG/CCAYNNNNNNCAT motif. Specifically, gene B0B39_08545 encodes a SAM-dependent DNA methyltransferase with target recognition domains for both ends of this motif, while genes B0B39_08550 and B0B39_08555 encode the S and R subunits of an associated endonuclease. The enzymes responsible for the GATC and GGGAG motifs are less clear. Two proteins (LloF1157ORF6795P and LloF1157ORF8795P) are homologous to methyltransferases that recognize GATC in other species. Neither of these proteins is associated with a restriction endonuclease. Although many bacterial genomes contain the m4C modification, the biological functions encoded by it remain unclear47. There is some evidence that this mark may contribute to the regulation of gene expression. Notably, the deletion of a single m4C methyltransferase in Helicobacter pylori alters the expression of more than 100 genes and leads to reduced virulence.
We used our genome annotation and methylation data to test for any associations between m4C methylation and genome features or functional classes of genes that might suggest this mark contributes to gene regulation in L. longbeachae. We found it is considerably more common within protein-coding genes than intergenic spaces (Fig. 6C). However, there is no association between the presence of this mark in a gene sequence and any of the functional classifications present in our COG data (Fig. 6D). Although the over-representation of m4C bases in genic sequences suggests it might be associated with, or be a passive consequence of, transcription in L. longbeachae, we find no evidence that it contributes to particular biological functions. In summary, we have demonstrated that most genomic variability in L. longbeachae arises from recombination, with large-scale rearrangements in the chromosome. Our 54 sg1 clinical isolates could be grouped into two highly related clades that persisted over time. The most genetically distinct clade consisted of isolates from only the Canterbury region, but this could simply reflect oversampling from this region. Further sequencing of isolates from other regions is required. Most sequenced isolates were found to contain a plasmid that showed high levels of recombination and horizontal gene transfer, with evidence for both intra- and inter-species gene flow. The genome of L. longbeachae was also highly modified, with m6A modifications being the most common and strongly associated with particular sequence motifs. Materials and methods Bacterial isolates, sequencing and genome assembly. A total of 60 isolates previously identified as L. longbeachae (including 57 serotyped as sg1 and 3 serotyped either as sg2 or undefined) were sequenced. Isolates were obtained from either the NZ Legionella Reference Laboratory (ESR, Porirua, New Zealand; n = 39) or the Canterbury Health Laboratories (CHL) culture collection (Christchurch, New Zealand; n = 21). All isolates were derived from sporadic LD cases that occurred between 1993 and 2015 in 8 regions (S7 Figure) around the country and included the first NZ case in which L. longbeachae was successfully cultured from a patient specimen (LA93_171; S2 Table). Isolates were grown on buffered charcoal yeast extract (BCYE) agar at 37 °C for 72 h. DNA was extracted from each fresh culture using GenElute Bacterial Genomic kits (Sigma-Aldrich, MO, USA) according to the manufacturer's instructions. Libraries were prepared using the Nextera XT kit (Illumina, San Diego, CA, USA) and were sequenced using Illumina MiSeq technology (2 × 250 bp paired-end) and version 2 chemistry by NZ Genomics Ltd (University of Otago, Dunedin, NZ). The quality of the raw reads was checked using FastQC (v. 0.11.4; https://www.bioinformatics.babraham.ac.uk/projects/fastqc/). Reads were mapped against PhiX using Bowtie2 (v. 2.0.2)48, any reads that mapped to PhiX were removed from the SAM file, and read pairs were reconstructed using the SamToFastq.jar program from the Picard suite (v. 1.107; https://broadinstitute.github.io/picard/) using the default parameters. Any adaptors were removed with the "fastq-mcf" program (using the default parameters) from the ea-utils suite of tools (v. 1.1.2-621; https://expressionanalysis.github.io/ea-utils/). Finally, the reads were quality trimmed using SolexaQA++ (v. 3.1.4)49 at a probability threshold of 0.01 and sorted on length to remove any sequences < 50 bp prior to assembly.
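To make the read-processing steps just described concrete, a minimal sketch of how these tools are commonly chained is shown below. Sample names, the adapter file and the PhiX index are hypothetical, and the exact options are illustrative rather than those used in the study.

#!/usr/bin/env bash
# Illustrative QC and trimming for one isolate (paired-end MiSeq reads)
fastqc isolate1_R1.fastq.gz isolate1_R2.fastq.gz          # raw read quality report

# Map against PhiX and keep only read pairs where neither mate maps
bowtie2 -x phix_index -1 isolate1_R1.fastq.gz -2 isolate1_R2.fastq.gz -S isolate1_phix.sam
samtools view -b -f 12 -F 256 isolate1_phix.sam > isolate1_nophix.bam
java -jar picard.jar SamToFastq I=isolate1_nophix.bam \
    FASTQ=isolate1_nophix_R1.fastq SECOND_END_FASTQ=isolate1_nophix_R2.fastq

# Adapter removal with fastq-mcf (ea-utils), then quality and length trimming with SolexaQA++
fastq-mcf adapters.fa isolate1_nophix_R1.fastq isolate1_nophix_R2.fastq \
    -o isolate1_clean_R1.fastq -o isolate1_clean_R2.fastq
SolexaQA++ dynamictrim isolate1_clean_R1.fastq isolate1_clean_R2.fastq -p 0.01
SolexaQA++ lengthsort isolate1_clean_R1.fastq.trimmed isolate1_clean_R2.fastq.trimmed -l 50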
Sequence reads from each isolate were assembled using the SPAdes (v. 1.2)50 de novo assembler in "careful" mode, with default settings. Sequenced strains analysed. Of the 60 isolates (S2 Table), 57 were found to be L. longbeachae sg1, two were sg2 and one had been mistyped and was Legionella sainthelensi. Analyses were limited to the sg1 isolates, but because poor sequence data were obtained for three genomes (2 from Auckland and 1 from Waikato), only 54 were included (Table 1). We also included the two other publicly available complete genomes for L. longbeachae sg1, NSW150 (Australia; GenBank: NC_013861) and FDAARGOS_201 (USA; GenBank: NZ_CP020412), in our core genome and cluster of orthologous groups (COG) analyses. The reads of a further 65 previously published L. longbeachae isolates (Bioproject number PRJEB14754, SRA accession numbers ERS1345649 to ERS1345585)17 were downloaded and compared with our 54 sg1 isolates. However, 30 of these read sets were either of poor quality, aligning to less than 80% of our reference genome (F1157CHC; GenBank NZ_CP020894)19, or were not L. longbeachae sg1 isolates, and were excluded. The remaining 35 read sets were included in our global phylogenetic analyses (S6 Table). Ancestral state reconstruction and phylogenetic analysis. Single nucleotide polymorphisms (SNPs) were identified using Snippy v2.6 (https://github.com/tseemann/snippy). Snippy uses the Burrows-Wheeler Aligner51 and SAMtools52 to align reads of the 53 NZ L. longbeachae isolates to our reference genome Legionella longbeachae F1157CHC, and FreeBayes53 to identify variants among the alignments. Gubbins was used to remove areas of recombination on the full alignment54. SNPs were exported into BEAUti v2.5 to create an Extensible Markup Language (XML) file for BEAST v2.555. Global L. longbeachae isolates. As described above, the read sets of 35 previously published L. longbeachae isolates17 were downloaded and compared with our 54 NZ isolates using the SNP-identification method described above. In total, 89 L. longbeachae isolates from NZ and the UK were investigated for our global analysis. RAxML63 was used to form a maximum likelihood tree of the isolates based on their SNP data, which was visualised using EvolView v2. Core genome and COG analyses. The eggNOG-mapper25,64 webserver with default parameters (http://eggnog-mapper.embl.de/) was used to annotate the F1157CHC PGAP-derived amino acid sequences. The Prokka pipeline (v. 1.12)65 was used to annotate our draft isolates using default parameters. The Prokka-generated GFF files were analysed with Roary using default parameters, and the comparison script roary_plots.py was used to visualize the output. FastGeP was used with default parameters to perform a whole genome MLST analysis of the 56 isolates, which meant that the generated allele sequences were searched with BLAST+ at an identity threshold ≥ 80%. F1157CHC was used as the reference genome for this analysis. SplitsTree (v. 4.15.2)66,67 was used to convert the FastGeP Nexus file into a Newick file (as a neighbour-joining tree) for visualization and annotation in iTOL, with the inclusion of metadata for region and sample type. Complete NZ reference genome, gene prediction and annotation. To generate our own complete NZ reference genome, one isolate (F1157CHC) was further sequenced using the PacBio RSII system (Pacific Biosciences, CA, USA) as previously described19.
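As an illustration of how the variant-calling and pan-genome steps described in this section are typically chained on the command line, a minimal sketch is shown below. Directory and file names are hypothetical, and the options are illustrative rather than those used in the study.

# Per-isolate annotation and SNP calling against the F1157CHC reference
prokka --outdir isolate1_annot --prefix isolate1 isolate1_contigs.fasta
snippy --outdir snippy_isolate1 --ref F1157CHC.gbk \
       --R1 isolate1_R1.fastq.gz --R2 isolate1_R2.fastq.gz

# Core SNP alignment across isolates, recombination removal, and pan-genome estimation
snippy-core --ref F1157CHC.gbk snippy_isolate*    # writes core.full.aln
run_gubbins.py core.full.aln                      # identifies and masks recombinant regions
roary -f roary_out -e -n -p 8 *_annot/*.gff       # pan-genome from the Prokka GFF files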
Gene prediction and annotation were performed using the NCBI Prokaryotic Genome Annotation Pipeline (2013). (Table 1: numbers of sequenced isolates from each region for each five-year period from 1993 to 2015.) Genome architecture. In order to assess the genome architecture, F1157CHC was used as the basis for all analyses in which comparisons were made against a reference. The genome was visualized using Circos software (v. 0.69.3)69. Tracks included mapping of the annotation prediction from PGAP, as well as an overlay of the results of a functional annotation with the eggNOG web annotation server, mapping of both the methylation results and the recombinant regions detected with Gubbins, SNP density of the comparative samples as defined by Snippy, and finally a visualization of the repeats within the F1157CHC genome using Reputer70,71. The genome was analysed with the following Reputer parameters (number of best hits: 10,000; minimum length: 30 bp; and maximum Hamming distance: 3), and the output was parsed through a MySQL database with a custom Perl script to generate the tracks that allow the links between all repeated regions to be visualized on the Circos plot. Of the four possible repeat types, only forward and palindromic repeats were detected. Furthermore, depending on the Hamming distance between the two repeats, the links were coloured to show those with a smaller Hamming distance in a darker colour. In order to assess the overall genome architecture in comparison to other L. longbeachae genomes, the MAUVE plugin within Geneious (v. 9.1.8) was used to visualize the F1157CHC genome against NSW150 and FDAARGOS_201. Legionella longbeachae methylome. Methylated bases were detected for isolate F1157CHC using the "RS_Modification_and_Motif_Analysis" protocol implemented in SMRT Analysis v2.3.0, with the SMRTbell DNA library described above as input. This pipeline takes advantage of BLASR (v1)72 to map sequencing reads to the assembled genome and MotifFinder v1 to identify sequence motifs associated with particular modifications. The resulting files were submitted to REBASE73 along with our annotated reference genome to identify protein coding genes that may be responsible for the inferred methylation patterns. The distribution of methylated bases on the reference genome, and with regard to genomic features, was analysed using bedtools (v2.25.0)74 and the R statistical language (v3.4). We tested for differences in methylation rate between genes of different functional classes using ANOVA, as implemented in R. A complete record of the code used to perform statistical analyses and visualisation of the methylome data is provided in S4 File. Data availability The raw reads of the 54 New Zealand L. longbeachae isolates are available in the NCBI BioProject database (https://www.ncbi.nlm.nih.gov/bioproject/) under accession number PRJNA417721 and in the Sequence Read Archive (SRA) database (https://www.ncbi.nlm.nih.gov/sra) under accession numbers SRX3379702-SRX3379755. Our complete annotated reference genome for isolate F1157CHC is available in the NCBI Genome database (https://www.ncbi.nlm.nih.gov/genome) under accession numbers NZ_CP020894 (chromosome) and NZ_CP020895 (plasmid). All supporting data, code and protocols have been provided within the article or through Supplementary Data.
v3-fos-license
2020-01-25T14:05:01.834Z
2020-01-01T00:00:00.000
210881799
{ "extfieldsofstudy": [ "Medicine", "Computer Science" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/978-1-0716-0199-0_7.pdf", "pdf_hash": "d10816d545594340ef97ad27d4b7cf06493f5fca", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42179", "s2fieldsofstudy": [ "Biology", "Computer Science" ], "sha1": "29d24d46969595333020a9d69b88f12fc3559315", "year": 2020 }
pes2o/s2orc
MSMC and MSMC2: The Multiple Sequentially Markovian Coalescent The Multiple Sequentially Markovian Coalescent (MSMC) is a population genetic method and software for inferring demographic history and population structure through time from genome sequences. Here we describe the main program MSMC and its successor MSMC2. We go through all the necessary steps of processing genomic data from BAM files all the way to generating plots of inferred population size and separation histories. Some background on the methodology itself is provided, as well as bash scripts and python source code to run the necessary programs. The reader is also referred to community resources such as a mailing list and github repositories for further advice. MSMC MSMC [1] is an algorithm and program for analyzing genome sequence data to answer two basic questions: How did the effective population size of a population change through time? When and how did two populations separate from each other in the past? As input data, MSMC analyzes multiple phased genome sequences simultaneously (separated into haplotypes, i.e. maternal and paternal haploid chromosomes) to fit a demographic model to the data. MSMC models an approximate version of the coalescent under recombination across the input sequences. Specifically, the coalescent under recombination is approximated by a Markov model along multiple sequences [2,3], which describes how local genealogical trees change due to ancestral recombinations (Fig. 1). These local genealogies as well as the recombination events are of course invisible and therefore act as latent variables that are to be integrated out of the joint probability distribution. Since it is infeasible to do this integration across the entire space of possible trees, MSMC focuses only on one particular aspect of those trees: the first coalescence event. This variable (dark blue in Fig. 1) acts as a hidden state in the Hidden Markov Model (HMM). Using standard HMM algorithms, the hidden state (trees and recombination events) can be integrated out efficiently using dynamic programming. We can thus efficiently compute the likelihood of the data given a demographic model, and iteratively find a demographic model that maximizes this likelihood. The demographic model itself is-in the simplest case of just one population-parameterized by a sequence of piecewise constant coalescence rates, i.e. inverse effective population sizes. The time segments are chosen such that they cover the distribution of times to first coalescence. Therefore, the more sequences are analyzed, the more recent the window of analysis will be (Fig. 2). If the input individuals come from two populations, the demographic model is parameterized by three coalescent rates through time: A coalescence rate between lineages sampled within the first population, a coalescence rate between lineages sampled within the second population, and a coalescence rate between lineages sampled across the two populations (Fig. 3a). As introduced in Schiffels and Durbin [1], to simplify interpretation of the three inferred rates, we can plot a simple summary by taking the ratio of the across-rate and the mean within-rate, which is termed the relative cross coalescence rate (rCCR) (Fig. 3b). This summary variable ranges between 0 and 1, and indicates when and how the two populations diverged. Values close to 1 indicate that the two populations were really one population at that time. 
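Written out explicitly, using the notation of the MSMC output files described later in this chapter (lambda_00 and lambda_11 for the two within-population coalescence rates and lambda_01 for the across-population rate), the relative cross coalescence rate in each time segment is

    rCCR = 2 * lambda_01 / (lambda_00 + lambda_11),

which is the quantity plotted in Fig. 3b.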
At the time when the rCCR drops to zero, the two populations likely separated into two isolated populations. Heuristically, the mid-point of that decline (i.e., the time when the rCCR hits 0.5) is often taken to be an estimate for the split time between the two populations. MSMC has been widely applied to human data (for example [4][5][6][7][8]) and non-human organisms (for example [9][10][11][12][13]). MSMC2 MSMC2 is a newer algorithm, and the tool is still actively being developed. A first version was used in Malaspinas et al. [6] for analyzing Australian genomes. At the time of this writing, a manuscript that presents the new algorithm in more detail is in preparation. MSMC2 was developed to overcome some problems that we saw with MSMC. In particular, MSMC is computationally intensive, and for all practical purposes limited to analyzing eight haplotypes at most. But even within this scope, we see that coalescence rate estimates for more than four haplotypes are sometimes biased (see, for example, Fig. 2, red curve), with some systematic over-and underestimations of the true coalescence rates. These biases are in part caused by approximations in the emission rate of the HMM, which requires knowledge of the local lengths of leaf branches of trees. This variable is estimated by a separate HMM that is heuristic and cannot easily be improved, and which apparently performs poorly for larger trees. This means that even if we improved the computational aspects, we could not scale up this algorithm easily to more haplotypes. MSMC2 takes a step back from these complications and approaches the problem of modelling multiple samples in a much simpler way: Instead of analyzing all input haplotypes simultaneously, it uses a much simpler pairwise HMM (very similar to PSMC) on all pairs of haplotypes. The likelihood of the data is then simply multiplied across all pairs as a composite likelihood. This has two interesting consequences: First, the pairwise model is-in contrast to the MSMC-an exact model under the Sequentially Markovian Coalescent, and does not suffer from biases with increasing number of genomes. Second, the pairwise model describes the entire distribution of pairwise coalescence times, not just the time to first coalescence. MSMC2 can therefore estimate coalescent rates across the entire distribution of pairwise coalescence times, with increasing resolution in more recent times, and importantly without biased estimates (Fig. 4). In contrast, MSMC loses power in ancient times with increasing numbers of input genomes (see Fig. 2). MSMC2 can also analyze population separations via the relative cross coalescence rate, and gives similar results as MSMC, but with computational improvements, as we will point out further below. We caution that at the time of writing, MSMC2 is still in beta and some aspects of the interface and algorithm may still change. Nevertheless, we will cover its use throughout this chapter alongside MSMC. Software Overview MSMC has been implemented in three open source software packages, summarized in the following. A mailing list for discussions around all three packages exists under https://groups.google. com/forum/#!forum/msmc-popgen The main program is written in the D programming language (www.dlang.org). A tutorial can be found at https://github.com/stschiff/msmctools/blob/master/msmc-tutorial/guide.md and general documentation can be found within each package. 
MSMC The main program used in the original publication [1] is accessible at http://www.github.com/stschiff/msmc. Pre-compiled packages for Mac and Linux can be found under the Releases tab. For compilation from source code, a D language compiler is needed (see www.dlang.org for details). MSMC2 MSMC2 (see Subheading 1) can be accessed at http://www.github. com/stschiff/msmc2. MSMC2 is still under development, but has been used in a key publication [6], which can be used to cite this program. A publication describing the novel aspects and comparison to other state-of-the-art methods is in preparation at the time of this writing. MSMC-Tools Utilities for preparing input files for MSMC, as well as some other tasks, can be found in a separate repository at http://www.github. com/stschiff/msmc-tools and mainly contains python scripts that help with generating the input data and with processing the output data. Data Requirements MSMC normally operates on diploid, phased, complete, high coverage genomes. Here we discuss these conditions one by one. Diploid Data Technically, it is not a strict condition that input sequences be diploid. However, most populations/organisms that are not diploid do not follow a coalescent under recombination. For example, bacteria and viruses are asexual without recombination, which breaks several key assumptions that the MSMC model makes. In some diploid model organisms, inbred lines are available and sequenced (for example, in Drosophila). Such inbred lines are effectively haploid, but originate from a diploid outbred population. In this case we think MSMC should work OK, by using each homozygous haploid input genome as a single "haplotype," although we lack explicit experience and overview of potential caveats in this case. Phasing When sequencing diploid genomes, modern sequencing platforms generate unphased data, which randomly permutes the association of heterozygous alleles to the paternal and maternal haplotypes. For MSMC, knowledge of the paternal vs. maternal allele is important when more than two haplotypes are analyzed. Note that for a single diploid genome as input (i.e., two haplotypes), no phasing is necessary. Phasing can be a laborious preprocessing step, which requires external tools, such as shapeit (https://jmarchini.org/shapeit3/) or beagle (https://faculty.washington.edu/browning/beagle/bea gle.html). As a general rule, what helps phasing quality a lot are: l availability of a reference panel of phased populations l presence of related individuals (e.g., parent-child duos or father-mother-child trios) l long sequencing reads l long-insert libraries in combination with paired-end sequencing. Note that MSMC and MSMC2 can in principle handle unphased data within the input data format (see below), but for some analyses we recommend to exclude those sites from the analysis, which can be done within MSMC. Note also that MSMC2 now can optionally run on unphased genomes for population size analysis, but not for population separation analysis. As described below, this is achieved by running the MSMC2-HMM only within each diploid genome, but not across pairs of genomes. This will give lower resolution than with phased data, but may be a good compromise if phasing is not possible and only population sizes need to be estimated. Complete Genomes MSMC and MSMC2 cannot run on Array data, with selected SNPs, but require contiguous sequence segments. 
For many organisms, genomes are shorter than in humans, and from our experience, MSMC still works fine for much smaller genomes, but we recommend in these cases to run simulations with shorter genome length and specified heterozygosity to test performance of the program on shorter genomes. For many non-model organisms, reference genomes are only available via assembly scaffolds, which are sometimes as short as a few hundred thousand basepairs (compared to hundred million basepairs for a human chromosome). In our experience, MSMC works still fine in many such cases, as long as scaffolds are not too short. Although the exact threshold depends on an organisms mean heterozygosity, in my experience scaffolds on the order of 500 kb and longer often work OK. We again recommend simulations of short chromosomes to assess the power in those cases. High Coverage Data MSMC requires good resolution of heterozygous vs. homozygous genotypes across the genome, which is only available with high coverage sequencing data. In our experience, 20-fold coverage and higher is sufficient. MSMC may work on lower coverage data as well, but detailed analyses of the effects of false negative/positives in genotype calling need to be assessed in these cases, ideally again through simulated data, into which sequencing errors are randomly introduced to test their effect on the estimates. Input Data Format MSMC/MSMC2 take several files as input, one for each chromosome, each with a list of segregating sites, including a column to denote how many sites have been called since the last segregating site. Note that here we use the term "chromosomes" to refer to coordinate blocks in a reference genome (which could also be an assembly scaffolds). We use the term "haplotypes," when we refer to the phased input sequences from multiple individuals. Here is an example part of an input file for chromosome 1 for four haplotypes (two diploid individuals): The four (tab-separated) columns are: 1. The chromosome (can be an arbitrary string, but has to be the same for all rows in a file). 2. The position on the chromosome. 3. The number of called homozygous sites since the last segregating site, which includes the given location. This number must always be greater than zero and cannot be larger than the difference between the current position and the previous position. 4. The ordered and phased alleles of the multiple haplotypes. If the phasing is unknown or only partially known, multiple phasings can be given, separated by a comma to indicate the different possibilities (see the second-last line in the example). Unknown alleles can be indicated by "?", but they can also simply be left out and expressed by a reduced number of called sites in the line of the next heterozygous site. The third column is needed to indicate where missing data is. For simulated data, without any missing data, this column should simply contain the distance in bp from the previous segregating site, indicating that all sites between segregating sites are called homozygous reference, without missing data. To the extent that this number is lower than the distance from the previous site do the input data contain missing data. Information about missing vs. homozygous reference calls is crucial for MSMC: If, for example, missing data is not correctly annotated, long distances between segregating sites may falsely be seen as long homozygous blocks, indicating a very recent time to the common ancestor between the lineages, thereby skewing model estimates. 
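As an illustration of this format, a fragment consistent with the four-column description above might look like the following (the values are invented purely for illustration; columns are tab-separated):

    1    58432    63      TCCC
    1    58448    16      GAAA
    1    68306    9858    CTTT
    1    68316    10      TCCC
    1    69552    1236    AACC
    1    70124    572     ACCG,ACGC

The last line shows a site with only partially known phase, written as two alternative phasings separated by a comma.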
The generation of such an input file follows three steps: 1. Generating VCF and mask files from individual BAM files. 2. Phasing. 3. Combining multiple phased individuals into one input file. In the following, we describe these steps in order. Generating VCF and Mask Files from Individual BAM Files Starting with a BAM file, bamCaller.py (included in the MSMC-Tools package) can be used for generating a sample-specific VCF file and a mask file. This script reads samtools mpileup data from stdin, so it has to be used in a pipe in which a reference file in fasta format is also required. An example bash script using samtools 1.0 or higher generates chromosome-specific VCF files (sample1.chr*.vcf.gz) and mask files (sample1.mask.chr*.bed.gz) from a human BAM file; a minimal sketch of such a pipeline is given below. Phasing If your samples are unrelated and you want to run MSMC on more than two haplotypes at a time, you will need to statistically phase the VCFs with a tool like shapeit. There are two different phasing strategies using shapeit, either with a reference panel or without a reference panel. If a good reference panel is available for your samples, shapeit phasing with a reference panel is recommended. Here, as an example, we describe phasing a single human diploid sample against the 1000 Genomes Phase 3 reference panel. In the following, we assume that shapeit2 is installed, the 1000 Genomes (phase 3) reference panel is available locally (it can be downloaded from https://mathgen.stats.ox.ac.uk/impute/1000GP_Phase3.html), and that the unphased VCF file contains all variable positions in the sample, plus all variable positions in the 1000 Genomes reference panel. This can be achieved using the --legend_file option in bamCaller.py. The script first removes multi-allelic sites in your VCF, generating .noMultiAllelicSites.vcf.gz with bcftools. Then it makes a list of sites to be excluded from the main phasing run by running shapeit -check, because shapeit can only phase SNPs that are in both the sample and the reference panel with the same allele type. Apart from the main log file per chromosome, sample1.chr$CHR.alignments.log, the two following files will be generated by shapeit -check: 1. sample1.chr$CHR.alignments.strand: this file describes in detail all sites that either have incompatible allele types in the sample and the reference panel or are found in the sample but not in the reference panel. 2. sample1.chr$CHR.alignments.strand.exclude: this file gives a simple list of physical positions of sites to be excluded from phasing. Note that this script can also be found in the git repository accompanying this book chapter (https://github.com/StatisticalPopulationGenomics/MSMCandMSMC2). Combining Multiple Individuals into One Input File At this point, we assume that you have a phased VCF for each individual per chromosome (potentially containing some unphased sites not in the reference panel), and one mask file for each individual per chromosome. In addition, you will need one mappability mask file per chromosome, which is universal per chromosome and does not depend on the input individuals. Mappability masks ensure that only regions of the genome with sufficiently high mappability are included, i.e. no repeat regions or other features that are hard to map with next-generation sequencing data. Mappability masks can be generated using the SNPable pipeline described at http://lh3lh3.users.sourceforge.net/snpable.shtml. For the human reference genome hs37d5, they can be downloaded from https://oc.gnz.mpg.de/owncloud/index.php/s/RNQAkHcNiXZz2fd.
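Returning to the first step, a minimal sketch of the bamCaller.py pipeline referred to above is shown here. It assumes samtools/bcftools version 1.x, a reference genome ref.fa, and a known mean coverage for the sample; the filter settings and file names are illustrative rather than the authors' exact script.

#!/usr/bin/env bash
REF=ref.fa          # reference genome in fasta format (assumed file name)
BAM=sample1.bam     # input BAM file (assumed file name)
MEANCOV=30          # mean sequencing depth of this sample (assumed value)

for CHR in {1..22}; do
    # pile up reads, call genotypes, and let bamCaller.py write the per-chromosome
    # VCF plus the mask of sufficiently covered sites
    samtools mpileup -q 20 -Q 20 -C 50 -u -r $CHR -f $REF $BAM \
      | bcftools call -c -V indels \
      | ./bamCaller.py $MEANCOV sample1.mask.chr$CHR.bed.gz \
      | gzip -c > sample1.chr$CHR.vcf.gz
done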
For generating the input files for MSMC for one chromosome, the script generate_multihetsep.py from MSMC-tools is required, which merges VCF and mask files together, and also performs simple trio-phasing in case the data contains trios. Here is an example of generating multihetsep files for two (previously phased) diploid individuals on chromosome 1. Another useful option in generate_multihetsep.py is --trio <child>,<father>,<mother>, allowing the three members of a trio. All three fields must be integers specifying the index of the child/father/mother within the VCFs you gave as input, in order. So for example, if you had given three VCF files in the order of father, mother, child, you need to give -trio 2,0,1. This option will automatically apply a constraint for phasing and also strip the child genotypes from the result. Resource Requirements Resource usage for MSMC and MSMC2 depend on the size of the dataset, the number of haplotypes analyzed, the number of time segments and on the number of CPUs used. The following numbers are example use cases and need to be somewhat extrapolated to other use cases. As a general rule of thumb, run time and number of CPUs are inversely proportional, and memory and number of CPUs are linearly proportional. Also, the number of haplotypes and the number of time segments affect both memory and run time quadratically. Use cases for MSMC, assuming 22 human chromosomes and 11 CPUs, default time patterning: Test Data We provide input files for MSMC and MSMC2 for four diploid human individuals, two Yoruba and two French individuals. The test input data consists of 22 text files for 22 autosomes in the MSMC input format described above. The test data can be accessed at https://github.com/StatisticalPopulationGenomics/ MSMCandMSMC2. Running MSMC A typical command line to run MSMC on the test data is which runs the program on 11 CPUs (option -t), keeps the recombination rate fixed at the initial value (option -R), and uses as output-prefix the file prefix out_prefix. The parallelization, here specified by the number of CPUs (-t 11), goes across input files. So when given 22 input chromosomes as in the test data, which is typical for human data, running on 11 CPUs means that the first 11 chromosomes can be run in parallel, and then the second 11. Using more CPUs will help a bit to make things even faster, but only to the extent that the number of chromosomes exceeds or equals the number of CPUs. The -R option is recommended for MSMC except when running on two haplotypes only. Additional options can be viewed by running msmc -h. In order to run MSMC to obtain estimates of cross-population divergences, you need to prepare your input files to contain individuals from multiple populations. For example, in order to run MSMC on one Yoruba and one French individual from the test data, you run (here for chr1 only): There are two changes here with respect to the first run. First, we use the options -I 0,1,4,5 -P 0,0,1,1, which specifies that only the first two haplotypes in each subpopulation should be used (indices 0,1 are the first Yoruba individual, indices 4,5 the first French), and that those selected four haplotypes belong to two subpopulations. Second, we set -s, which instructs MSMC to skip ambiguously phased sites. This is important if you have phased your samples against a reference panel and have private variants unphased. 
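The command lines discussed in this section can be sketched as follows. The multihetsep file names are illustrative, while the options (-t, -R, -I, -P, -s and the output prefix) are those described in the text; the exact option syntax of generate_multihetsep.py should be checked against its --help output.

# Combine two phased diploid individuals into one input file for chromosome 1
generate_multihetsep.py \
    --mask sample1.mask.chr1.bed.gz --mask sample2.mask.chr1.bed.gz \
    --mask mappability.chr1.bed.gz \
    sample1.chr1.vcf.gz sample2.chr1.vcf.gz > input.chr1.multihetsep.txt

# Population size inference on all 22 chromosomes
msmc -t 11 -R -o out_prefix input.chr{1..22}.multihetsep.txt

# Cross-population run on one Yoruba and one French individual (chromosome 1 only)
msmc -t 11 -R -s -I 0,1,4,5 -P 0,0,1,1 -o out_prefix_crosspop input.chr1.multihetsep.txt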
Empirically, we have found that MSMC is quite robust to unphased sites when analyzing population size changes in a single population, but that results on cross-population divergence are affected by unphased sites, and results are less biased if those sites are removed [1]. Upon running either of the two commands above, MSMC produces several output files. First, a file containing log output, called prefix.log. Second, a file containing the parameter estimates at each iteration step, called prefix.loop.txt. And third, a file containing the final results, called prefix.final.txt. This last file looks like this: Each row of this output file lists one time segment, with scaled start and end time indicated by second and third column. The fourth column contains the scaled coalescent rate in each time segment. In case of cross-population analysis (using the -P flag), the output will contain two more columns, titled lambda_01 and lambda_11, giving the coalescence rate estimates between populations and within the second population, respectively. Times and rates are scaled. In order to convert to real values, you need a mutation rate μ per site per generation. All times can then be converted to generations by dividing the scaled time by μ. In order to convert generations into years, a generation time is needed (for humans we typically take 29 years). Population size estimates are obtained by first taking the inverse of the scaled coalescence rate, and then dividing that inverse rate by 2μ. To get the relative cross coalescence rate (rCCR, see Fig. 3), you need to compute 2λ 01 /(λ 00 + λ 11 ), without any additional scaling. It can then be informative to compute the time point at which the relative CCR hits 0.5, to reflect an estimate of the split time between two populations (provided that a clean-split scenario is appropriate). Running MSMC2 Running MSMC2 is very similar to running MSMC if samples come from a single population. In that case, a typical command line may look like this: Note that here we have omitted the option -R, since MSMC2 can robustly infer recombination rates simultaneously with population sizes, so there is no need to keep the recombination rate fixed. The output of the program is the same as in MSMC. To analyze individuals from multiple populations, as in the provided test data the procedure is different from MSMC. In that case, MSMC2 needs to be run three times independently: Once each for estimating coalescence rates within population 1, within population 2, and across populations. This has two advantages: First, since runs can be parallelized, the combined running should be faster on computer clusters. Second, if many pairs of populations are analyzed, estimates of coalescence rates within populations need to be run only once and not co-estimated with each cross-coalescence rate estimates. So taking the test data as an example, we have four diploid individuals from two populations in a single input file, and we can run on only one individual from each population like this: Here, we have again used the option -s to remove unphased sites. A key difference to MSMC is how haplotype pairs in MSMC2 are specified using the -I option. In MSMC2, haplotype configurations passed via -I can be given in two flavors. First, you can enter a single comma-separated list, like this -I 0, 1,4,5. In this case, MSMC2 will run over all pairs of haplotypes within this set of indices. This is useful for running on multiple phased diploid genomes sampled from one population. 
In the second flavor, you can give a list of pairs, like above: -I 0-4,0-5,1-4,1-5. In this case, MSMC2 will run only those specified pairs, which are all pairs between the first Yoruba and first French individual in this case. Note that if you do not use this parameter altogether, MSMC2 will run on all pairs of input haplotypes and assume that they all belong to one population. As a special feature in MSMC2, the option -I can be used also to run MSMC2 to get population size estimates from entirely unphased genomes, using the composite likelihood approach to run on all pairs of unphased diploids, but not across them. For example, if your input file contains four diploid unphased samples, you could use -I 0-1,2-3,4-5,6-7 to instruct MSMC2 to estimate coalescence rates only within each diploid genome. In order to simplify plotting and analysis of the relative cross coalescence rate from MSMC2, we provide a tool in the MSMCtools repository called combineCrossCoal.py. This tool takes as input three result files from MSMC2, obtained by running within each population and across. It will then use interpolation to create a single joint output file with all three rates that can then be plotted exactly as in the MSMC case above. To use the script on the three estimates obtained with the three MSMC2 runs above, simply run and then use the combined file to proceed with plotting. Plotting Results Here is an example of plotting population sizes and relative CCR in python, as well as computing the midpoint of the rCCR curve, using the numpy, pandas, and matplotlib libraries. To try this out, we provide result files for MSMC2 within the book chapter repository (https://github.com/StatisticalPopulationGenomics/ MSMCandMSMC2), and those result files are used in this script, which is also included in the same repository: This script produces the plot shown in Fig. 5 and prints out the midpoint of the cross-coalescence rate, which is 69405.8165002096 for the test data, i.e. around 70,000 years ago for a rough estimate of the split time between French and Yoruba. Bootstrapping It is often important to obtain confidence intervals around coalescence rate estimates (either for population size estimates or for rCCR estimates). This can be done using block-bootstrapping. We provide a script called multihetsep_bootstrap.py in the MSMC-tools repository. You can run python3 multihetsep_bootstrap.py -h to get some inline help. The program generates artificial "bootstrapped" datasets from an input dataset consisting of MSMC input files, by chopping up the input data into blocks (5 Mb long by default) and randomly sampling with replacement to create artificial 3 Gb long genomes out of these blocks. By default, 20 datasets are generated. You can run the tool via which creates 20 subdirectories, here beginning with boot-strap_dir, each containing 30 multihetsep input files created with the block-sampling strategy described above. You should then run MSMC or MSMC2 on each of these datasets separately and plot all results together with the original estimates to visualize confidence intervals. Controlling Time Patterning Often, MSMC creates extremely large estimates in the most recent or the most ancient time intervals. This is a sign of overfitting, and can be mitigated by using fewer free parameters. By default, MSMC uses 40 time segments, with 25 free parameters (some neighboring time segments are forced to have the same coalescence rate). MSMC2 by default uses 32 time segments with 28 free parameters. 
You can use the -p flag to control the time patterning in detail. For example, to change the patterning of MSMC2 from 32 to 20 time segments with 18 free parameters, you could try -p 1*2+16*1 +1*2, which would use 20 time segments, and merge together the first two and last two to have just one free coalescence rate parameter, respectively. We recommend to experiment with these settings, in particular when non-human data is analyzed, where sometimes the default settings in MSMC and MSMC2 are not appropriate because the genomes are substantially shorter and hence fewer parameters should be estimated.
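The plotting code referred to in the Plotting Results subsection above can be sketched as follows. This is a minimal illustration rather than the script from the accompanying repository: it assumes the combined MSMC2 result file is tab-separated with columns named left_time_boundary, lambda_00, lambda_01 and lambda_11 (check your own files), uses a mutation rate of 1.25e-8 per site per generation and a generation time of 29 years, and estimates the rCCR midpoint crudely as the first time segment in which the curve reaches 0.5 (the value quoted in the text was presumably obtained with interpolation).

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

mu = 1.25e-8   # mutation rate per site per generation (assumed value)
gen = 29       # generation time in years (as suggested in the text)

# combined output produced by combineCrossCoal.py (file name is illustrative)
df = pd.read_csv("combined_msmc2.final.txt", sep="\t")

t_years = df["left_time_boundary"] / mu * gen          # scaled time -> years
pop1 = (1.0 / df["lambda_00"]) / (2.0 * mu)            # effective population sizes
pop2 = (1.0 / df["lambda_11"]) / (2.0 * mu)
rccr = 2.0 * df["lambda_01"] / (df["lambda_00"] + df["lambda_11"])

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.step(t_years, pop1, where="post", label="population 1")
ax1.step(t_years, pop2, where="post", label="population 2")
ax1.set(xscale="log", xlabel="years ago", ylabel="effective population size")
ax1.legend()
ax2.step(t_years, rccr, where="post")
ax2.set(xscale="log", xlabel="years ago", ylabel="relative cross coalescence rate", ylim=(0, 1))
fig.tight_layout()
fig.savefig("msmc2_results.png")

# crude rCCR midpoint: first time boundary at which the curve reaches 0.5
crossing = rccr.values >= 0.5
if crossing.any():
    print("approximate rCCR midpoint (years ago):", t_years.iloc[int(np.argmax(crossing))])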
v3-fos-license
2020-02-13T09:13:04.153Z
2020-02-01T00:00:00.000
211087535
{ "extfieldsofstudy": [ "Chemistry", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://res.mdpi.com/d_attachment/polymers/polymers-12-00366/article_deploy/polymers-12-00366-v2.pdf", "pdf_hash": "0632d990fa0fb2d5e1b1f51c43eaf1273fb9b35a", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42180", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "sha1": "e5e92a703bbda11e461d6a7b36fec82fc18065b5", "year": 2020 }
pes2o/s2orc
Physical and Antioxidant Properties of Cassava Starch–Carboxymethyl Cellulose Incorporated with Quercetin and TBHQ as Active Food Packaging Antioxidant integration has been advocated for in polymer films, to exert their antioxidative effects in active packaging. In this study, the new antioxidant food packaging made from cassava starch–carboxymethyl cellulose (CMC), which is biodegradable, edible and inexpensive, was developed. Their properties were determined and applied in food models for application. Antioxidants (quercetin and tertiary butylhydroquinone (TBHQ)) were added at various concentrations into cassava starch–carboxymethyl cellulose (CMC) (7:3 w/w) films containing glycerol (30 g/100 g starch–CMC) as a plasticizer. The effects of quercetin and TBHQ concentrations on the mechanical properties, solubility, antioxidative activity, and applications of the films were investigated. Addition of antioxidant improved tensile strength, but reduced elongation at break of the cassava starch–CMC film. Cassava starch–CMC films containing quercetin showed higher tensile strength, but lower elongation at break, compared to films with TBHQ. Increases in quercetin and TBHQ content decreased water solubility in the films. Both the total phenolic content and antioxidative activity (DPPH scavenging assay) still remained in films during storage time (30 days). In application, cassava starch–CMC film containing quercetin and TBHQ can retard the oxidation of lard (35–70 days) and delay the discoloration of pork. Introduction The development of biodegradable films based on biopolymers has attracted attention, mainly due to their friendliness to the environment and their potential as a substitute for some petroleum polymers in the food packaging industry. Biodegradable films have generally been made of renewable, natural, and abundant biopolymeric materials, such as polysaccharides, proteins, lipids, or a combination Film Preparation The cassava starch-CMC film and antioxidant casting procedure were modified from a published method [21]. The film solution was prepared by dispersing 7 g of cassava starch and 3 g of CMC in distilled water (200 mL) with various quercetin and TBHQ contents (0, 20, 50, 100 mg). Glycerol (30 g/100 g cassava starch-CMC mixture) was used as a plasticizer. The film solution was heated to 80 • C with constant stirring to achieve starch gelatinization. The film-forming solution was then cast on a flat 30 × 30 cm Teflon plate. The films were then dried at room temperature (25 • C) for 24 h. Mechanical Properties The blended films were cut into 25 × 100 mm strips and then conditioned in desiccators over saturated salt solutions with the desired relative humidity 34% (MgCl 2 ) and 54% RH (Mg(NO 3 ) 2 ) at 25 • C for 48 h before testing. The mechanical properties (tensile strength and elongation at break) of the films were measured using a universal testing machine (Hounsfield, UK) according to the American Society for Testing and Materials (ASTM) D 882-12 method [24]. Twenty replicates of each film type, preconditioned at each RH, were tested. Fourier Transform Infrared Spectroscopy (FT-IR) Transmission infrared spectra of the films were measured at room temperature using a Nicolet 6700 FT-IR spectrometer (Thermo Electron Corporation, Waltham, Massachusetts, USA) in the range of 4000-400 cm −1 with 64 scans, 4 cm −1 resolution, using a deuterated triglycine sulfate (DTGS) KBr detector and KBr beam splitter. The films were placed in the sample holder. 
Differential Scanning Calorimetry (DSC) Differential scanning calorimetry (Mettler Toledo Schwerzenbach Instrument, Ohio, USA) was carried out. Samples were previously conditioned at 54% RH and 25 °C for at least 48 h before testing. Three replicates of the film samples (≈10 milligrams) were put in aluminum pans and heated in the temperature range of −20 to 220 °C, at a heating rate of 5 °C/min in a nitrogen atmosphere (50 mL/min). X-ray Diffraction (XRD) The X-ray diffraction patterns of cassava starch-CMC films with and without antioxidants were recorded on an X'Pert MPD X-ray diffractometer (Philips, Amsterdam, The Netherlands), using nickel-filtered Cu Kα radiation at 40 kV and 35 mA in the 2θ range of 5°-50°. Water Solubility of Composite Films The water solubility of the composite films was measured as a percentage of dry residue after the film was soaked in water for 24 h. This method was adapted from Phan et al. [25]. The initial dry weight of each film was obtained after drying film specimens at 65 °C for 24 h, followed by placement in 0% RH silica gel desiccators for two days. Dried films (about 0.3 g) were weighed (initial dry weight) and immersed in beakers containing 50 mL distilled water at 25 °C, which were then sealed and periodically agitated for 24 h. The solutions containing film residues were filtered with Whatman filter paper No. 1 (previously dried at 105 °C for 24 h and weighed before use). The residues were dried at 80 °C for 24 h and weighed to determine the weight of dry matter (final dry weight). Tests were performed in triplicate, and the solubility was calculated using Equation (1): Solubility (%) = [(initial dry weight − final dry weight)/initial dry weight] × 100 (1). Total Phenolic Assay For this determination, three film samples (3 × 3 cm, ≈0.1 g each) were cut randomly from the film sheet and dissolved in 10 mL of methanol for 24 h to prepare a film extract solution. The total phenolic content (TPC) of the film samples was determined according to the Folin-Ciocalteu method as described by the Association of Official Analytical Chemists (AOAC) [26], with slight modifications. Briefly, 0.5 mL of film extract solution was mixed with 8 mL of distilled water and 1 mL of Folin-Ciocalteu reagent. The mixture was incubated for 5 min at room temperature before the addition of 0.5 mL of saturated sodium carbonate. The mixture was stored in a dark chamber at room temperature for 30 min. The absorbance of the mixture was then measured at 760 nm using a spectrophotometer (Spectro SC, Labomed Inc., Los Angeles, California, USA). Methanol was used as a blank. The concentration of total phenolic compounds in the samples is expressed as gallic acid equivalent (GAE), which reflects the phenolic content as the amount of gallic acid in mg per gram dry weight of the sample, calculated using Equation (2) [27], where A760 is the absorbance at 760 nm. Determination of Antioxidant Activity in the Composite Films Film samples (3 × 3 cm, with at least three pieces randomly cut) were dissolved in 10 mL methanol for 24 h and filtered. The sample extract (500 µL) was mixed with 2 mL of 0.06 mM DPPH solution (in methanol) and kept in a dark location for 30 min at room temperature. The absorbance was then measured at 517 nm with a spectrophotometer. Methanol solution and quercetin were used as a reference and positive control, respectively. The DPPH radical scavenging activity was calculated according to Equation (3): DPPH radical scavenging activity (%) = [(A_control − A_sample)/A_control] × 100 (3), where A is the absorbance at 517 nm of the control (DPPH solution without extract) and the sample, respectively.
Effect of Antioxidant Incorporation into Cassava Starch-CMC Films on Lard Storage The effect of the antioxidants (quercetin and TBHQ) in the composite film on lard storage was determined using a method modified from Zhang et al. [28]. The lard samples (15 mL, 36 °C) were packaged in cassava starch-CMC-quercetin films and cassava starch-CMC-TBHQ films, with a film area of 100 cm2 each, and stored at 30 °C and a relative humidity of 40% for 30 days. The control was unpackaged and stored under the same conditions. The peroxide value of the packaged lard was determined. The peroxide values of the film extracts were measured using the modified method of Jung et al. [29]. One gram of the film extract was dissolved in 25 mL of solvent (2 parts chloroform : 3 parts acetic acid). Saturated potassium iodide (1 mL) was then added, and the solution was kept in the dark for 10 min. After that, 30 mL of distilled water and 1 mL of starch solution (1 g/100 mL) were added to the solution, which was titrated with 0.01 N Na2S2O3 until colorless. Peroxide values (PVs) were calculated as follows (Equation (4)): PV (meq peroxide/kg) = [(S − B) × N × 1000]/W, where S and B are the titration volumes of the sample and the blank, N is the normality of the Na2S2O3 solution (mol equiv/L), and W is the sample weight (g). Application of Antioxidant Films on Fresh Pork The procedures in this experiment were adapted from other studies [16,29,30]. Fresh pork samples were purchased from a local butcher shop; the samples were sliced into 5 × 10 × 1.5 cm (width × length × thickness) sections weighing ca. 30-35 g. Each piece of sliced pork was placed in a polystyrene tray (10 × 5 × 1.5 cm) and covered on either side with one of the antioxidant composite films. Trays were sealed hermetically and stored at 4 ± 1 °C. Color changes of the pork were observed periodically (on days 0, 4, 8, and 12) during storage. The color characteristics were evaluated using a hand-held colorimeter (Minolta, Japan) to determine the L* value (lightness or brightness), the a* value (redness or greenness), and the b* value (yellowness or blueness) of the samples. The percentage redness decrease was calculated from the a* value following Equation (5): redness decrease (%) = [(a*0 − a*t)/a*0] × 100, where a*0 is the a* value of the sample at day 0, and a*t is the a* value at storage time t. Statistical Analysis Analysis of variance (ANOVA) and Duncan's multiple range tests were performed on all results using the statistical program SPSS v. 10.0 at a 95% confidence level to determine significant differences between sample groups.
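The titration and color calculations can be sketched the same way; the functions below follow Equations (4) and (5) as reconstructed above, and the example values are again illustrative, not data from the study.

def peroxide_value(sample_titre_ml, blank_titre_ml, normality, sample_g):
    # Equation (4): peroxide value in meq peroxide per kg of sample,
    # from the Na2S2O3 titration volumes of the sample (S) and the blank (B)
    return (sample_titre_ml - blank_titre_ml) * normality * 1000.0 / sample_g

def redness_decrease(a_star_day0, a_star_day_t):
    # Equation (5): percentage decrease of the colorimeter a* (redness) value
    return (a_star_day0 - a_star_day_t) / a_star_day0 * 100.0

print(peroxide_value(2.6, 0.1, 0.01, 1.0))  # hypothetical: 25 meq/kg
print(redness_decrease(12.0, 7.5))          # hypothetical: 37.5% redness decrease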
Influence of Antioxidant Concentrations on Mechanical Properties of the Composite Films Cassava starch-CMC (7:3) was used in this study to form a film with good mechanical properties, as described in a previous study [21]. Antioxidants (quercetin and TBHQ) were added into the film to determine the effect of quercetin and TBHQ concentrations on the mechanical properties of the cassava starch-CMC film. The tensile strength (TS) of cassava starch-CMC films with various quercetin and TBHQ concentrations is shown in Figure 1a. At 34% RH, the TS of cassava starch-CMC films with quercetin and TBHQ was higher than that of the control cassava starch-CMC film. This might be due to a possible interaction between quercetin or TBHQ and cassava starch-CMC, which strengthened the film network. Hydroxyl groups in quercetin and TBHQ possibly acted as hydrogen donors, and hydrogen bonds could be formed between quercetin and the starch-CMC molecules. Li et al. [31] described that larger molecules normally form a stronger network, which increases the energy required to tear the starch film during tensile testing. This result agreed with the TS of fish skin-CMC films incorporated with BHT and α-tocopherol [32]. This is related to the elongation at break of the films, shown in Figure 1b. Cassava starch-CMC films with quercetin and TBHQ gave a lower elongation at break (EAB) than the control film. However, increasing the quercetin or TBHQ content slightly increased the EAB of the film blends. Comparing the mechanical properties of the films with quercetin and TBHQ, cassava starch-CMC blended films containing quercetin showed higher tensile strength than films with TBHQ. Nevertheless, cassava starch-CMC films with TBHQ were more flexible than films with quercetin. The film containing 50 mg quercetin/200 mL film solution showed the highest TS, and the film containing 100 mg TBHQ/200 mL film solution showed the highest EAB. However, these results differed from those obtained in a previous study with rice flour/cassava starch films containing antioxidants (PG, BHA, BHT), in which the type of antioxidant had no effect on the mechanical properties of the film [20]. The effect of relative humidity (34% and 54% RH) on the mechanical properties of the film blends was also investigated. All films kept at 54% RH gave higher EAB but lower TS than films at the 34% RH condition, because water acted as a plasticizer by binding with the hydroxyl (OH) groups of the starch chains, which reduced the intermolecular bonds and increased mobility in the polymer chains. This result agreed with Rachtanapun and Wongchaiya [33], who studied the influence of relative humidity on the mechanical properties of chitosan-methylcellulose films. At 54% RH, increasing quercetin and TBHQ concentrations decreased the tensile strength. Increasing the TBHQ content increased the EAB of cassava starch-CMC films due to the plasticizing effect of increased absorbed water in the film [34][35][36]. On the contrary, increasing the quercetin concentration had no significant effect on the EAB of the film. Fourier Transform Infrared Spectroscopy (FT-IR) The FT-IR spectra of the control film (without antioxidant) and the films incorporated with quercetin and TBHQ are shown in Figure 2. Special mention should be made of the peaks between 3265 and 2926 cm−1, corresponding to the stretching vibration of free hydroxyl groups and to -CH stretching, respectively [37].
Additionally, a strong band at 1322 cm−1, associated with O-H in-plane bending, is noticeable in the films incorporated with quercetin and TBHQ. With the addition of antioxidants into the cassava starch-CMC film, the O-H band of the films shifted to 3263-3260 cm−1. The C-OH bending band of the cassava starch-CMC film that appeared at 1322 cm−1 shifted to 1326-1323 cm−1 with antioxidant addition. The peak at 1100 cm−1 is related to the glycosidic linkage. The other important changes take place between 1592 and 1100 cm−1. The peak at 1592 cm−1 is ascribable to carbon-oxygen (C=O) stretching within the carboxylic group of CMC [38]. A slight change in the absorption band intensity at 995 cm−1 was observed in the composite films when quercetin and TBHQ were incorporated. These results were consistent with the FT-IR spectra of fish gelatin films containing BHT and α-tocopherol [32], and with the FT-IR spectra of chitosan films with α-tocopherol [39]. When quercetin and TBHQ were added to the composite films, new peaks at 1369, 1078, and 1015 cm−1 appeared, associated with C-O-C stretching, C-O-H bending of the carbohydrate chains, and ether bonds [39]. This observation supports the idea that there is a particular arrangement in the films, due to the interactions of the antioxidant polyphenolic compounds with the hydroxyl and carboxyl groups of CMC [40]. These results were in agreement with the FT-IR spectra of chitosan film incorporated with green tea extract [27] and with the study of the physicochemical interaction between chitosan and catechin by Zhang and Kosaraju [41], who found that the peak of the carboxyl group of chitosan decreased when incorporated with catechin. Similar findings were also reported by Curcio et al. [42] for the formation of covalent bonds between gallic acid-chitosan and catechin-chitosan. From the FT-IR results, it is evident that the added quercetin and TBHQ could form hydrogen bonds and covalent bonds, thus engaging the functional groups of the cassava starch-CMC matrix and lowering the number of free hydroxyl groups available to form hydrophilic bonds with water [27]. X-ray Diffraction Patterns The diffractogram obtained for the control film (Figure 3) showed a characteristic pattern reported in the literature [43]; the peaks corresponding to amylopectin (pseudo-crystalline) are located at 2θ = 16°-19°, due to the CMC-starch interactions. There is a sharp peak located at 2θ = 28°, and a peak located in the region of 2θ = 7°-8°, which are a pattern of CMC [43].
However, the pseudo-crystalline peaks and the sharp peak at 2θ = 28° of the control film were suppressed when quercetin and TBHQ were added into the composite film (Figure 3). The crystalline peaks of the composite films decreased because the added quercetin and TBHQ blocked the rearrangement of starch-CMC crystallization in the composite films [44]. A new broad amorphous peak was observed, demonstrating an interaction between these components [45]. At 2θ = 12°, a new sharp peak occurred in the composite films containing quercetin; it represents the crystalline structure of the added antioxidant. Thermal Properties of the Composite Films The melting temperature (Tm) and heat of fusion (∆H) of cassava starch-CMC films, with and without quercetin and TBHQ, are presented in Table 1. Thermograms of the composite films with quercetin and TBHQ showed a single sharp endothermic peak (Figure 4), which indicated the homogeneity of the films. This endothermic peak was related to the melting of crystalline starch and CMC domains [46]. This result agreed with DSC thermograms of corn starch-CMC films [43] and soluble starch-CMC films [47]. The melting temperatures (Tm) of the cassava starch-CMC composite films with quercetin and TBHQ were lower than the Tm of the cassava starch-CMC film, except for the film with 50 mg quercetin. According to Arvanitoyannis et al. [47], polyols interact with starch and CMC polymers, favoring hydrogen bond formation and decreasing the interactions between polymer chains. This behavior leads to a lower melting temperature. The Tm of the composite film with 50 mg quercetin shifted to a higher temperature due to the interaction between the film matrix and quercetin [38]. The area under the endothermic peak represents the heat of fusion of the films [39,40], which also increased with increasing quercetin concentration in the cassava starch composite films. This is due to the interaction between quercetin and the film matrix, which requires more energy to break the bonds [39]. This result is related to the mechanical property (highest TS) of the film with 50 mg of quercetin. On the other hand, the ∆H of cassava starch-CMC films with TBHQ was lower than the ∆H of the control film. This is because the incorporation of TBHQ into the film matrix decreases the intermolecular forces between the starch-CMC chains and partly decreases the crystallinity of the cassava starch-CMC, resulting in a decrease in the degree of crystallinity of the composite films [48] as the TBHQ content increased (as shown in the XRD results). These results are consistent with the XRD results, which indicated that the addition of quercetin and TBHQ blocked the rearrangement of starch-CMC crystallization in the composite films. According to Arvanitoyannis et al. [47], polyols interact with starch and CMC polymers, favoring hydrogen bond formation and decreasing the interactions between polymer chains; this behavior leads to lower interaction energies between polymer chains.
These results were consistent with the DSC thermograms of azuki bean starch films with cacao nibs extract [49] and chitosan-MC films with vanillin [50]. Water Solubility of Composite Films The water solubility of the cassava starch-CMC composite films depended on the antioxidant concentration (Figure 5). The water solubility of the control films was about 78%, and the film solubility declined with increasing quercetin and TBHQ concentrations. This result was consistent with the water solubility of cross-linked fish gelatin-chitosan films [51]. Moreover, the solubility of cassava starch-CMC films incorporated with quercetin and TBHQ is also related to the observations from the FT-IR spectra and the mechanical properties of the films, as discussed previously. This indicates that intermolecular interactions [4,47] likely occurred between the antioxidant and the starch-CMC in the composite films. The CMC molecules have both positively and negatively charged segments. The two charged segments can join through inter- and intramolecular interactions [52]. The hydroxyl (O-H) and carboxyl (C=O) groups of CMC can form strong hydrogen bonds with the hydroxyl groups on the phenolic antioxidant [53], improving the interactions between molecules and the cohesiveness of the biopolymer matrix, and decreasing the water solubility [46]. Total Phenolic Content Assay The results showed that the total phenolic content of the cassava starch-CMC films significantly increased (p ≤ 0.05) with increasing quercetin and TBHQ concentration (Figure 6). At the same antioxidant concentration, the total phenolic content of cassava starch-CMC films with quercetin was higher than that of the films with TBHQ, because quercetin has a higher molecular weight (gallic acid equivalent) than TBHQ. Storage time had no effect on the total phenolic content of the cassava starch-CMC films. Determination of Antioxidant Activity in the Composite Films The results showed that the DPPH scavenging activity of the cassava starch-CMC films was not different (p ≥ 0.05) with increased quercetin and TBHQ concentration (data not shown). Storage time had no effect on the DPPH scavenging activity of the cassava starch-CMC films. This result agreed with the total phenolic content of the films as described in Section 3.6.
Effect of Antioxidants Incorporated into Cassava Starch-CMC Films on Lard Storage To determine the effect of cassava starch-CMC films containing quercetin and TBHQ on the rancidification process, lipid peroxides and lipid aldehydes were monitored during lard storage. The peroxide value (PV) represents the primary products of lipid oxidation and is used to determine the oxidative state of lipid-containing foods [29]. As shown in Figure 7, the changes in the PV of the lard were relatively small, which could be explained by the small amounts of polyunsaturated fatty acids contained in lard. The PV of the unpackaged lard (control) increased from 5 to 20 meq/kg (which represented rancidity of the lard) during the 18-day storage period. Even though the PV of the lard packaged with cassava starch-CMC films with and without quercetin and TBHQ also increased, the rate of increase was considerably lower than that of the control. The lard packaged in the film without antioxidants showed a lower PV than the control because of the lower oxygen permeability through the film.
This result agreed with the PV of almond oil in hydroxypropyl methylcellulose (HPMC) film [54]. Moreover, during the first three days, the PV increase of lard packaged in cassava starch-CMC films containing quercetin and TBHQ was insignificant compared with the control and with lard packaged in the film without antioxidant, indicating that sustained release of quercetin and TBHQ from the cassava starch-CMC films inhibited early lipid oxidation [29]. The increase of quercetin and TBHQ content in the films extended the shelf-life of the lard from 18 days to 70 and 50 days, respectively. These results showed that cassava starch-CMC films containing quercetin and TBHQ can be used as active packaging for the postponement of lard oxidation. Effect of Antioxidants Incorporated into Cassava Starch-CMC Films on Discoloration of Pork In this study, pork samples were covered with cassava starch-CMC composite films containing various concentrations of quercetin and TBHQ, and the change in color over 12 days was measured. In order to compare the decrease in redness, the % redness decrease was calculated, as shown in Figure 8. The pork samples covered with a film containing quercetin or TBHQ had higher (p < 0.05) redness than the uncovered pork (control) after eight days of storage. After eight days of storage, the redness of the uncovered pork decreased rapidly, by more than 50%, whereas the pork covered with the quercetin and TBHQ films showed a smaller redness reduction than the control. The redness of pork covered with the TBHQ films decreased only slightly (less than 20% redness decrease) after 12 days. This indicates that the addition of quercetin and TBHQ to the cassava starch-CMC film gave good results with regard to preventing overall discoloration (Figure 8). These results were consistent with the retardation of pork oxidation using an antioxidative plastic film coated with horseradish extract [29], the use of a tea catechin-impregnated PVA-starch film on red meat [55], and the reduction of color loss in pork loin coated with an alginate-based edible coating containing rosemary and oregano essential oils [46].
Therefore, it was not surprising that the studied films inhibited the oxidation of myoglobin in pork. This result confirmed the antioxidative activity of the cassava starch-CMC films with quercetin and TBHQ, which can retard color discoloration as well as delay the oxidation of lard, as described in Section 3.8. Conclusions The mechanical properties of the cassava starch-CMC film were generally affected by the incorporation of quercetin and TBHQ, as well as by relative humidity. Incorporating antioxidants into the cassava starch-CMC film increased the tensile strength but reduced the elongation at break of the films, while increasing the quercetin and TBHQ contents decreased the tensile strength but increased the elongation at break of the films. The FT-IR spectra indicated intermolecular interactions between the cassava starch-CMC film and quercetin or TBHQ, as shown by the shifting of the -OH band, the carboxylic group band, and the aromatic ring bands. The XRD patterns also confirmed the interactions of the cassava starch-CMC matrix with the antioxidants. DSC thermograms established the homogeneity of the films containing quercetin and TBHQ. Increasing the quercetin and TBHQ contents decreased the water solubility of the films. In application, cassava starch-CMC films containing quercetin and TBHQ can retard the oxidation of lard (35-70 days) and delay the redness discoloration of pork. Thus, the cassava starch-CMC films containing quercetin and TBHQ showed better physical properties than the plain cassava starch-CMC film, and they have the potential to be used as active and biodegradable films for low- and intermediate-moisture products.
v3-fos-license
2023-06-07T06:17:49.544Z
2023-06-05T00:00:00.000
259090760
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://doi.org/10.1073/pnas.2302580120", "pdf_hash": "b08b46ff9450a826bbef3077cbf1841a18fcde72", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42182", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "sha1": "bf95c4fafd74a211f6605f7e05a13b81c9ceb862", "year": 2023 }
pes2o/s2orc
Activator-induced conformational changes regulate division-associated peptidoglycan amidases Significance Peptidoglycan amidases break the peptidoglycan layer during cell division and maintain integrity of the cell envelope. Here, we present structures of an isolated peptidoglycan hydrolase in an autoinhibited (“off”) state and a second amidase bound to the activating LytM domain of EnvC revealing the active (“on”) state. A comparison of these structures provides important molecular insights into the activation of peptidoglycan hydrolases by their cognate activators. The peptidoglycan layer is a complex molecular mesh that surrounds the bacterial cytoplasmic membrane, providing structural rigidity and protection from osmotic shock (1). In gram-negative bacteria, the peptidoglycan layer also serves as a point of attachment for the outer membrane and defines their characteristic shapes (2). During cell division, the peptidoglycan layer is broken at the division septum to allow insertion of new peptidoglycan and to separate daughter cells. In Escherichia coli (E. coli), splitting the peptidoglycan layer at the division site involves activity of three closely related periplasmic peptidoglycan amidases (3). AmiA is the smallest division-associated amidase, consisting of a simple zinc-dependent enzymatic domain of ~28 kDa. AmiB and AmiC are larger amidases (45 and 43 kDa, respectively), composed of an AmiA-like enzymatic domain and a small N-terminal domain (the "amiN" domain) that is suspected to be important in anchoring these proteins to the peptidoglycan layer (4,5) and localization to the division site (6,7). Both AmiA and AmiC are directed to the periplasm by the twin arginine repeat translocation (TAT) system while AmiB is exported by the Sec pathway (6,8). A fourth amidase, AmiD, is a membrane-anchored lipoprotein that is not involved in cell division and belongs to a structurally distinct family of zinc-dependent amidases (9). All the three division-associated amidases have overlapping enzymatic functions in hydrolyzing the peptidoglycan amide bond between the sugar and the first amino acid of the peptide cross-link (3). Single-gene knockouts of amiA, amiB, and amiC each have modest cell separation defects; however, strains lacking multiple amidases have severe chaining phenotypes (3). Strains lacking amidases also have increased sensitivity to antibiotics and detergents, suggesting envelope defects that allow penetration of molecules that would not usually cross the outer membrane barrier (3,8,10). Because of the importance of the peptidoglycan layer for bacterial viability and cell envelope integrity, the activation of peptidoglycan amidases is carefully controlled to guard against lysis or exposure to noxious compounds in the environment. In their resting states, zinc-dependent peptidoglycan amidases such as AmiB and AmiC adopt autoinhibited conformations in which their active sites are blocked by an alpha helix containing a conserved glutamate residue which binds to the active-site zinc (5,11). Activation of amidases is then stimulated by proteins that bind to the amidases to promote enzymatic activity (11)(12)(13). In E. coli, several amidase "activator" proteins (also known as murein hydrolase activators) have been identified including ActS (14,15), EnvC (13,16), and NlpD (13). The activators share a common motif, the LytM domain (7,12), which forms the proposed site of amidase binding and activation (7,17). 
The LytM domains of EnvC, ActS, and NlpD are also sometimes referred to as degenerate "dLytM" domains in recognition that they lack the enzymatic activity present in the original protein from which they are named (7). Activators are themselves typically autoinhibited and are activated at specific times and places to regulate amidase activity. One of the best understood amidase activation systems is the FtsEX-EnvC complex. FtsEX is a Type VII ABC transporter (18)(19)(20) that belongs to the same protein superfamily as MacB (19), LolCDE (21)(22)(23), BceAB (24), and HrtBA (25). During cell division, and after recruitment to the septal Z-ring (26), ATP binding and hydrolysis by FtsEX drives conformational changes in EnvC that facilitate binding and activation of AmiA and AmiB in the periplasm (17,27). A structure of E. coli EnvC bound to the periplasmic domains of FtsX shows that EnvC is itself autoinhibited by the presence of a helix (the restraining arm) that blocks access to the amidase-binding groove in the EnvC LytM domain (17). Conformational change in FtsEX-EnvC is predicted to displace the restraining arm, providing access for the amidase to bind the LytM domain (17); however, the molecular details of how activator binding induces amidase activation remain poorly understood. Here, we present a crystal structure of the AmiA peptidoglycan amidase in its as-isolated "resting" state and an activated form of the AmiB enzymatic domain bound to the EnvC LytM domain. Our structures show precisely how activator binding displaces the autoinhibitory helix of the amidase to allow substrate access and reorganizes the active site to promote peptidoglycan hydrolase activity. A Structure of AmiA Defines Its Active Site and Regulatory Domain. We determined a crystal structure of E. coli AmiA using X-ray crystallography. Crystals of AmiA diffract to a resolution of 2.4 Å and contain two molecules per asymmetric unit. Full diffraction data and refinement statistics are given in SI Appendix, Table S1. The secondary structure of AmiA is diagrammed in Fig. 1A, and a representative monomer from the structure is shown in Fig. 1B. As expected from its amino acid similarity, the overall fold of AmiA is very similar to previous structures of Bartonella henselae AmiB (11) and E. coli AmiC (5), including the active site (SI Appendix, Fig. S1). Each AmiA monomer consists of a single globular domain with a six-membered beta sheet and six alpha helices (Fig. 1 A and B). The AmiA active site is composed of a single zinc atom that is held in place by two histidine residues (His65 and His133), an aspartate (Asp135), and two glutamates (Glu80 and Glu167) (Fig. 1C). Similar to AmiB (11) and AmiC (5), the AmiA active-site zinc is not accessible to peptidoglycan substrates due to the presence of an alpha helix that occludes the active site; we term this feature the "blocking helix" (Fig. 1 A and B, red). As demonstrated in subsequent sections, the blocking helix has a role in autoinhibiting the activity of AmiA and forms part of a larger regulatory domain (residues 151-194), which includes a second alpha helix that constitutes the binding site for EnvC. We define the latter feature as the "interaction helix" (Fig. 1 A and C, blue). The interaction helix consists of residues 180-192 and stands conspicuously proud from the rest of the molecule.
The interaction helix is also notable for possessing five solvent-facing hydrophobic residues (Leu184, Leu185, Val188, Leu189, Leu192) and is the only feature for which we identify meaningful conformational differences between the two AmiA molecules observed in the crystal structure (Fig. 1D). In one chain, the interaction helix is well defined, while in the other, the corresponding electron density is smeared out, consistent with thermal motion. We further assessed the dynamics of AmiA by plotting the B-factors from each monomer against sequence and performing molecular dynamics simulations (SI Appendix, Fig. S2). Both the experimental data and simulations consistently show that the regulatory domain is much more dynamic than the rest of the protein. The overall structure of AmiA is consistent with an autoinhibited form of the enzyme in which the active-site zinc is occluded by the blocking helix, while a potential protein interaction site remains exposed to solvent ready for activation. Mutational Analysis of the AmiA Autoinhibitory Domain. To test whether the regulatory domain maintains AmiA in an autoinhibited state, we made mutations that are predicted to relieve autoinhibition and monitored bacterial viability and detergent sensitivity when these variants were expressed in the periplasm. Expression of wild-type AmiA does not significantly disrupt viability or detergent sensitivity of E. coli. However, expression of AmiA variants that lack the regulatory domain causes a reduction in bacterial viability and increases bacterial sensitivity to detergent (SI Appendix, Fig. S3A). This was the case for three distinct AmiA regulatory domain deletion constructs, each engineered with different regulatory domain deletions (SI Appendix, Fig. S3B). In addition to the regulatory domain deletions, we also tested single-amino acid substitutions in Glu167 which is located on the blocking helix and, in the crystal structure, is directly ligated to the active-site zinc. The equivalent residue has previously been shown to be a key residue in maintaining autoinhibition for both AmiB (11) and AmiC (6). Mutations of Glu167 to glutamine or lysine are modestly effective in relieving AmiA autoinhibition as judged by the detergent sensitivity of strains expressing these variants in the periplasm (SI Appendix, Fig. S4A). Molecular dynamics simulations of AmiA and AmiA Glu167 mutants provide useful context to these experiments, showing that the blocking helix fluctuates between bound and free positions in the mutants, but remains locked firmly in place for the wild type (SI Appendix, Fig. S4 B and C). These observations are consistent with a role for the blocking helix in AmiA autoinhibition, with Glu167 forming a "latch" that anchors the blocking helix to the active-site zinc. Mutations in the Interaction Helix Break the Interaction between AmiA and Its Activator. We next turned our attention to the function of the interaction helix. Based on the structure of AmiA, we hypothesized that the solvent-facing hydrophobic residues presented along the face of the interaction helix mediate binding to the cognate activator (EnvC). Using a bacterial 2-hybrid experiment, we assessed the interaction between AmiA and the EnvC LytM domain after introducing point mutations into the interaction helix. Wild-type AmiA binds strongly to the EnvC LytM domain, but lysine substitutions of any of the solvent-facing hydrophobics completely disrupt the interaction ( Fig. 2A). 
When left for a longer period, some variants did show detectable signs of interaction, although these were significantly weaker than for the wild type, consistent with partial disruption of the interaction (Fig. 2A). To control for the possibility that these mutations might destabilize the amidase, or differentially affect expression levels, we also ran an SDS-PAGE gel to detect the expression of each variant under identical bacterial growth conditions; all mutants were detected at the correct molecular weight with similar intensity across the gel (SI Appendix, Fig. S5). To further analyze the effect of interaction helix mutations in vitro, we coexpressed a subset of AmiA variants alongside the His-tagged EnvC LytM domain and assessed the stability of the complex using copurification (SI Appendix, Fig. S6 A and B). Consistent with the bacterial 2-hybrid data, both AmiA L184K and L185K copurify in lower yield than the wild type, and the AmiA L188K variant does not interact at all, even though all AmiA variants are highly expressed in comparison to the LytM domain (SI Appendix, Fig. S6C). We therefore conclude that AmiA interacts with EnvC via its surface-exposed interaction helix. Mutations in the Interaction Helix Block the Function of AmiA In Vivo. To further dissect the function of AmiA, and the role of the interaction helix, we established a multiamidase knockout strain of E. coli BW25113 that could be complemented by AmiA or AmiA variants (SI Appendix, Fig. S7). As expected from a previous triple-knockout study (3), both the cell division defect and the detergent susceptibility phenotypes of the triple-amidase mutants can be corrected by expression of AmiA from a plasmid. Using this system, we tested various AmiA mutants for their ability to rescue these defects in the ΔamiABC background, using the empty vector as a control. We used phase-contrast microscopy to inspect cells for the chaining phenotype (Fig. 2B), and viability on detergent agar as an indicator of cell envelope integrity (Fig. 2C; cultures were spotted as 10-fold serial dilutions, starting from a culture adjusted to OD 1, on LB agar for general viability and on SDS-containing and low-salt LBON50 agar to report on outer membrane integrity and sensitivity to osmotic challenge). Four of the five AmiA variants (L185K, V188K, L189K, and L192K) and the empty vector control were highly chained and detergent sensitive, while the wild-type AmiA rescued both defects and appeared otherwise indistinguishable from the parental strain. The fifth mutant, L184K, was only modestly chained with detergent sensitivity close to wild type. These data confirm the importance of the solvent-facing residues in the interaction helix for the in vivo functionality of AmiA and are consistent with roles for these residues in interactions with EnvC. Structure of the AmiB Hydrolase Domain Bound to the Amidase-Activating EnvC LytM Domain. To better understand the molecular basis for activation of the FtsEX-EnvC-dependent amidases, we sought to determine a crystal structure of an amidase bound to its cognate LytM domain.
Our strategy was to identify well-expressed and stable amidase activator pairs and screen for crystallization using robotics. Using copurification experiments, we first demonstrated that AmiA could be successfully purified with isolated LytM domain of EnvC (SI Appendix, Fig. S8). We also showed that AmiA does not copurify with either the fulllength EnvC protein or an EnvC construct lacking the coiled coil, consistent with EnvC being autoinhibited by the presence of the EnvC restraining arm in these constructs. These experiments complement previous work showing copurification of the EnvC periplasmic domain with AmiB, and bacterial 2-hybrid assays confirming this pattern of interactions for AmiA and AmiB in E. coli (17). The experiment also confirms that the same autoinhibition mechanism that regulates EnvC's activation of AmiB applies to AmiA. Screening activator/amidase pairs from multiple organisms, we identified several well-expressed amidase hydrolytic domains that copurified with their cognate EnvC LytM domains. This included both AmiA and AmiB constructs, the latter of which were cloned without their N-terminal "AmiN" domain. After extensive crystallization trials, we were ultimately successful in solving a crystal structure of the AmiB hydrolytic domain bound to the EnvC LytM domain using proteins from Citrobacter rodentium. The 3.4 Å structure of the AmiB hydrolytic domain bound to the EnvC LytM domain is shown in Fig. 3. Inspecting the architecture of the complex (Fig. 3A), three observations are immediately apparent. First, the EnvC LytM domain is bound directly to the amidase interaction helix, with the latter's hydrophobic residues all pointing directly into the LytM groove (Fig. 3B). Second, the amidase regulatory domain has a very different conformation in the EnvC-bound structure such that the interaction helix is contiguous with helix 5 and the blocking helix is displaced from the active site (Fig. 3A). Finally, the active-site zinc is ligated by three residues rather than the five (Fig. 3C) due to the absence of the blocking helix glutamate (Glu167 in AmiA) and dissociation of one of the aspartates (Asp271 in AmiB, equivalent to Asp135 in AmiA). An alignment of AmiA and AmiB regulatory domain sequences is provided in Fig. 3D to assist the reader in matching equivalent residues. The structure confirms the predicted importance of the surface-facing hydrophobic residues along the interaction helix and confirms binding-induced conformational change as a mechanism for amidase activation. The Amidase's Interaction Helix Binds in the Same Groove as the EnvC Restraining Arm. In a crystal structure of full-length EnvC bound to the two periplasmic domains of FtsX, it was noted that the LytM domain is occupied by a long helix (the "restraining arm") that blocks access to the amidase-binding groove (17). This led to the proposal that the restraining arm would need to be displaced by a conformational change to allow amidase to bind and be activated. The conformational change is expected to be driven by ATP binding and hydrolysis by FtsEX and propagated through the coiled coil of EnvC. The structure of the EnvC-LytM AmiB complex shows that the amidase interaction helix binds within the same groove as the restraining arm, lending further support for this mechanism (Fig. 4 A and B). The interface of the AmiB-EnvC complex is governed by interactions between the exterior hydrophobic residues of the amidase and residues lining the interior of the EnvC LytM groove. 
Consistent with the structure, several contact residues inside the LytM groove have previously been identified as important for amidase activation in E. coli EnvC (7). Additionally, the activator-amidase structure further identifies a distinctive loop (C. rodentium EnvC residues G319-G330) that contacts residues located between the interaction helix and helix 5. The extended loop wraps around the residues located between the interaction helix and helix 5, causing them to form a single, continuous, helical element (Fig. 4B). Taken together, the two amidase structures capture a significant conformational change in the regulatory domain as the N-terminal end of interaction helix is prised away from the enzymatic domain by the binding of EnvC. The knock-on effect of this levering motion is to pull the sequence-neighboring blocking helix away from the zinc exposing the active site for peptidoglycan binding. Reorganization of the Activated Zinc Site in the EnvC-Bound Amidase. In addition to the dislocation of the blocking helix, which is tied to dissociation of Glu167/Glu303 from the activesite zinc, we also observed displacement of Asp135/Asp271 (Fig. 3C). Consequently, the amidase zinc site is surrounded by five residues in the resting state (Fig. 1C) but only three in the activated (EnvC LytM bound) state (Fig. 3C). In the EnvC LytM AmiB costructure, the angles between neighboring ligating residues and the zinc are all close to 109°, suggesting a tetrahedral coordination state in which the fourth position remains open for substrate binding and catalysis. Inspecting the density surrounding the zinc, a low-occupancy ligand is present at the fourth position of the coordination sphere. The ligand is consistent with a sugar molecule in chair conformation (SI Appendix, Fig. S9). No sugars were used in crystallizing the complex and thus this molecule seems to have been copurified during protein production. The sugar could be a product of peptidoglycan hydrolysis; however, due to modest resolution (3.4 Å) and partial occupancy, we have not yet formally identified this molecule. We anticipate that future high-resolution studies may be able to resolve this ligand, and perhaps characterize further natural substrates, reaction products, or even inhibitors, bound to the amidase. Discussion Peptidoglycan amidases are key hydrolytic enzymes that are needed to break the peptidoglycan layer during cell division to allow for separation of daughter cells. Here, we described two crystal structures that capture both the active and inactive states of the amidase, showing precisely how division-associated peptidoglycan hydrolases are activated by their interaction with a cognate partner, EnvC. We first described the crystal structure of an isolated E. coli amidase, AmiA, at 2.4 Å resolution. The structure reveals an inactive form of the enzyme where the active-site zinc is blocked by an autoinhibitory helix, with a solvent-facing helix that forms the binding site for its cognate activator (Fig. 1). We then showed the importance of the interaction helix using site-directed mutagenesis. Mutations in the interaction helix disrupt interaction with EnvC and prevent activation of the amidase in vivo (Fig. 2). 
A structure of AmiB bound to the EnvC LytM domain further establishes the interaction helix as the EnvC-binding site and reveals the activation mechanism; the amidase interaction helix docks inside of an EnvC surface groove, forcing conformational changes in the neighboring autoinhibitory helix that expose the active site (Fig. 3). Displacement of the blocking helix not only makes the active site accessible to substrates, but also reconfigures the ligands surrounding the active-site zinc. Finally, we show that the groove in the LytM domain of EnvC that forms the amidase-binding site is the same groove that is blocked by the EnvC restraining arm (Fig. 4). The structures show that autoinhibition is a feature of both the amidase and the activator, and that the interaction between the activator and amidase involves substantial conformational changes. An updated mechanism for amidase activation in the FtsEX-EnvC-AmiA system is presented in Fig. 5. During periods of inactivity, both the amidase and the FtsEX-EnvC complex are autoinhibited (Fig. 5, Top Left): AmiA is autoinhibited by its blocking helix which prevents the binding of peptidoglycan and the FtsEX-EnvC complex is autoinhibited by the restraining arm which prevents recruitment of the amidase to the LytM domain. Upon ATP binding to FtsEX-EnvC, a long-range conformational change is transmitted through EnvC, freeing the LytM domain from the restraining arm and exposing the amidase-binding site (Fig. 5, Top Right). Binding of the amidase to the EnvC LytM domain relies upon the amidase interaction helix, which binds in the same location from which the restraining arm was displaced (Fig. 5, Bottom). Upon binding, the amidase undergoes an induced conformational change in which the blocking helix is displaced from the active site. Rearrangement of the ligands surrounding the active-site zinc leads to peptidoglycan amidase activity. Finally, ATP hydrolysis allows the system to reset; the amidase is released and EnvC restraining arm returns to the LytM groove (Fig. 5, Top Left). A near-identical mechanism likely operates for FtsEX-EnvC-AmiB, although in that case the amidase may additionally be prelocated at the division site by interactions between the N-terminal "AmiN" domain and the peptidoglycan layer (4-7). For AmiA, which lacks an AmiN domain, localization to the division site most likely relies on interactions between FtsEX-EnvC or other components of the division machinery. A key feature of this proposed mechanism is that the amidase is only briefly activated since the eventual hydrolysis of ATP by FtsEX returns the complex to the inactive autoinhibited state. Once the amidase is released, it is rapidly autoinhibited by the blocking helix returning to the peptidoglycan binding groove. LytM domains are widespread among amidase activators (12), with predicted LytM domains in both NlpD (30) (the activator for AmiC) and ActS (12,14) (formerly known as YgeR-an activator of AmiC, AmiA, and AmiB) of E. coli. The interaction described here for EnvC and its cognate amidases (AmiA and AmiB) may well serve as a useful model for understanding these interactions. The apparent redundancy between division-associated amidases and the large number of murein hydrolase activators raises the question of why such complexity is required. 
This is especially true for the FtsEX-EnvC-AmiA/AmiB system since the gram-positive equivalent, FtsEX-PcsB, operates without separate amidases and instead uses an EnvC-like protein (PcsB) that has its own peptidoglycan hydrolase activity (31,32). Maintaining amidases in several different parts of the gram-negative cell envelope may be advantageous for separating the peptidoglycan layer while coordinating invagination of the outer membrane, as suggested for NlpD-AmiC (30). Overlapping specificity of amidases and activators may be useful under different environmental stress conditions, as has been suggested for ActS (33). In summary, we have determined the structures of two peptidoglycan amidases (AmiA and AmiB) in autoinhibited and activated states and related these to their wider regulation through interactions with a cognate activator, the LytM domain of FtsEX-EnvC. The structures reveal near-atomic details of the conformational changes in the amidase that are required for activation of peptidoglycan hydrolysis, including displacement of the autoinhibitory helix and rearrangement of the sidechains that surround the active-site zinc. Our data significantly advance our understanding of a key event in bacterial cell division (breakage of the peptidoglycan layer) and provide fascinating molecular insights into the conformational changes that regulate amidase activity. Methods A full set of methods is given in SI Appendix. In brief, structures of AmiA and of the complex between the AmiB enzymatic domain and the EnvC LytM domain were determined by X-ray crystallography using software from the CCP4 suite (34), with molecular replacement probes generated by AlphaFold (28,29). Coordinates and structure factors have been deposited with the Protein Data Bank (accession codes 8C2O and 8C0J). Bacterial viability and detergent susceptibility were assessed by spotting bacterial cultures in 10-fold serial dilutions on LB agar or LB agar supplemented with 0.1% (w/v) SDS. All strains carry a plasmid providing ampicillin resistance as a selection marker, and agar was supplemented with 50 μg/mL ampicillin and 1 mM IPTG. MICs were determined in microbroth culture using LB containing 50 μg/mL ampicillin and 1 mM IPTG. Bacterial 2-hybrid experiments used the BACTH system (35). The wild-type E. coli BW25113 and single-amidase knockout strains were obtained from the Keio collection (36). The double- and triple-amidase knockout strains were produced in the same background using the Gene Bridges gene deletion kit (37). Phase-contrast microscopy was performed after overnight growth in LB containing 1 mM IPTG and 50 μg/mL ampicillin. Molecular dynamics simulations used GROMACS (38) with the CHARMM force field (39). Structural figures were produced with PyMOL (40). Data, Materials, and Software Availability. Crystal structure coordinates and structure factor data have been deposited in the Protein Data Bank (8C2O (41) is the structure of E. coli AmiA, and 8C0J (42) is the structure of the AmiB enzymatic domain bound to the EnvC LytM domain).
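Because the coordinates are publicly deposited, the kind of per-residue flexibility profile described for AmiA (SI Appendix, Fig. S2) can be approximated from the atomic B-factors alone. The Python sketch below (our illustration, not part of the paper's methods) retrieves entry 8C2O with Biopython and plots a mean B-factor per residue; the chain identifier "A" is an assumption and should be checked against the downloaded file.

from Bio.PDB import PDBList, MMCIFParser
import matplotlib.pyplot as plt

def residue_bfactors(structure, chain_id):
    # mean atomic B-factor per standard residue in one chain
    profile = []
    for residue in structure[0][chain_id]:
        if residue.id[0] != " ":   # skip waters and heteroatoms
            continue
        atoms = list(residue.get_atoms())
        profile.append((residue.id[1], sum(a.get_bfactor() for a in atoms) / len(atoms)))
    return profile

pdbl = PDBList()
path = pdbl.retrieve_pdb_file("8C2O", pdir=".", file_format="mmCif")  # E. coli AmiA entry named in the text
amia = MMCIFParser(QUIET=True).get_structure("AmiA", path)

profile = residue_bfactors(amia, "A")  # chain ID is an assumption
plt.plot([num for num, _ in profile], [b for _, b in profile])
plt.xlabel("Residue number")
plt.ylabel("Mean B-factor")
plt.show()

Elevated values over roughly residues 151-194 would be consistent with the dynamic regulatory domain described in the text.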
v3-fos-license
2016-05-12T22:15:10.714Z
2012-08-31T00:00:00.000
16680644
{ "extfieldsofstudy": [ "Computer Science", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0044428&type=printable", "pdf_hash": "38016621e255cf2272e0311ee5032927222b71ab", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42184", "s2fieldsofstudy": [ "Computer Science", "Psychology" ], "sha1": "38016621e255cf2272e0311ee5032927222b71ab", "year": 2012 }
pes2o/s2orc
Consistency of Network Modules in Resting-State fMRI Connectome Data At rest, spontaneous brain activity measured by fMRI is summarized by a number of distinct resting state networks (RSNs) following similar temporal time courses. Such networks have been consistently identified across subjects using spatial ICA (independent component analysis). Moreover, graph theory-based network analyses have also been applied to resting-state fMRI data, identifying similar RSNs, although typically at a coarser spatial resolution. In this work, we examined resting-state fMRI networks from 194 subjects at a voxel-level resolution, and examined the consistency of RSNs across subjects using a metric called scaled inclusivity (SI), which summarizes consistency of modular partitions across networks. Our SI analyses indicated that some RSNs are robust across subjects, comparable to the corresponding RSNs identified by ICA. We also found that some commonly reported RSNs are less consistent across subjects. This is the first direct comparison of RSNs between ICAs and graph-based network analyses at a comparable resolution. Introduction In a typical fMRI data set acquired during resting-state, BOLD (blood-oxygen level-dependent) signals often exhibit strong correlations between distant brain areas despite a lack of external stimuli or a cognitive engagement [1][2][3]. Such elevated correlation, known as functional connectivity, has been identified in the motor cortex [1], the dorsal and ventral pathways [3], and the default mode network (DMN) [4], to name a few. One way to find such networks following similar time courses is ICA (independent component analysis). Without an explicit model, ICA is able to separate time course data into a collection of independent signals, or components, with each component representing a network following a similar temporal pattern. For example, Damoiseaux et al. [5] examined resting-state fMRI data using spatial tensor PICA (probabilistic ICA) and discovered 10 components that consistently occurred in multiple subjects. Similarly, De Luca et al. [6] identified 5 distinct resting-state networks (RSNs) in BOLD fMRI data as 5 ICA components. More recently, Doucet et al. [7] examined the hierarchical structure of 23 components found by ICA and identified 5 major clusters among those. Throughout the text, such a network following a similar temporal pattern discovered by ICA is referred as a ''component.' ' Another approach to finding temporally correlated areas in resting-fMRI data is a graph theory-based approach. In such an approach, a functional connectivity network can be constructed based on a strong temporal correlation between brain areas [8]. In particular, various brain areas, represented as nodes, are considered connected to each other if the correlation between them is strong. These strong correlations among nodes are represented by edges connecting the nodes. In the resulting graph representing the brain network, some subsets of nodes may be highly interconnected among themselves, effectively forming communities of nodes. Such communities of nodes, also known as modules, have been identified in a number of brain network studies of resting-state fMRI [9][10][11][12][13], and although the number of nodes may substantially differ in these studies, the number of modules seems fairly comparable. Such modules represent areas of high temporal coherence in the brain, and some of the modules coincide with the RSNs discovered by ICA. 
For example, a module corresponding to the default mode network has been reported by multiple studies [9,11,13] whereas a module covering the motor network was also found in some studies [9,10,12,13]. However, comparing the network modules directly to RSNs from ICA is challenging due to the difference in their spatial resolutions. While RSNs from ICA have a voxel-level resolution, most whole-brain networks are typically much coarser and consist of only a few hundred nodes. It is worthy to note here that, recently, a study combined the spatial ICA and graph theoretical analysis to demonstrate topological properties of each RSN [14]. Even though both ICA and graph theory-based network approach can find similar organization structure in the brain, a network approach offers two advantages. First, a network approach can be used to assess similarity or differences in overall network structure quantitatively. Recent advances in network science provide methods to examine how network modules change over time [15,16]. Such techniques have been applied to fMRI data to examine dynamic reconfiguration of brain network organization [17]. Secondly, a network approach can examine how different modules are connected to and interact with each other. Although network modules tend to form cliques of their own, such modules are also connected to other modules, allowing exchange of information and forming the network as a whole. This is in contrast to ICA, in which each component is independent and isolated from the other components. Therefore, when functional brain networks are constructed at the voxel-level, a resolution similar to ICA, a network based approach offers distinct advantages over ICA in understanding the overall organization of the brain network. A major challenge in examining network module organization is to summarize the consistency of modules across subjects. This is particularly a concern since each subject's network structure varies slightly from other subjects even though the overall organization appears similar. One possible solution is to generate a group network summarizing the consistent network connectivity observed in a large number of subjects. Examining the modular organization of the resulting group network may enable evaluation of consistent network modules. The notion of an ''average'' network sounds very appealing in such a scenario. In fact, several functional brain network studies have generated a group network by simply averaging the correlation coefficients between the same set of nodes across subjects [10,12,13]. Another study has examined whether or not the correlation coefficient between each voxel pair significantly differs from zero [11]. Although averaging correlation matrices across subjects can represent the connectivity between two nodes as an element in the averaged matrix, such an approach may not accurately summarize the consistent network structure. In other words, such an approach may adequately capture the connection strength between nodes A and B, but this method does not consider how node A is connected to other nodes in the network. In this work, we attempt to examine modular organization of the resting-state brain network and compare the results to that of the RSNs identified by ICA. To do so, we constructed functional brain networks with fMRI voxels as network nodes [18], and thus the resulting network resolution is comparable to that of previous ICA studies. 
We then examined network modules in these voxel-based networks for consistency across subjects, and whether consistent modules are comparable to the RSNs found by ICA studies. To do so, we employed scaled inclusivity (SI), a metric quantifying consistency of modules across multiple networks of a similar type [16]. Our hypothesis is that, if RSNs are stable across subjects, our approach should be able to identify such RSNs as network modules associated with high SI. Since SI can be calculated at the nodal level, the consistency of the resulting modules can be assessed at the voxel-level. Moreover, this allows us to compare the consistency of modules to the variability of the corresponding RSNs observed in an ICA study [5]. Results The data used in this work were part of the 1,000 Functional Connectomes Project (http://fcon_1000.projects.nitrc.org/), a collection of resting-state fMRI data sets from a number of laboratories around the world. Of all the data sets available, 4 data sets from 4 different sites (Baltimore, Leipzig, Oulu, and St. Louis), consisting of n = 194 subjects in total, were selected because (i) these data sets consisted of young to middle-aged subjects (20-42 years old) and (ii) these data sets were acquired while subjects' eyes were open and fixated on a cross. The original resting-state fMRI data were processed using the same preprocessing pipeline available in our laboratory (see Materials and Methods). Networks were formed by calculating a correlation coefficient for every voxel pair and then by thresholding the resulting correlation matrix to identify strong correlations. The threshold was adjusted for each subject in a way that the density of connections was comparable across subjects (see Materials and Methods). Each voxel was treated as a node in the resulting network. Each subject's network consisted of an average of 20,743 nodes. Modules in each subject's network were identified by the Qcut algorithm [19]. The algorithm identified sets of nodes that were highly interconnected among themselves and designated them as distinct modules. Each node in the network can only be part of one module at a time. After modules were identified in all the subjects, the consistency of modules across subjects was assessed using SI. In brief, SI summarizes the overlap of nodes in modules across different subjects while penalizing any disjunction between modules (see Materials and Methods). SI is calculated at each node, forming an SI image summarizing across-subject consistency of the modular structure. More specifically, each SI value measures how consistently a particular node falls into a particular module. A high SI value indicates that the voxel is located in the same module across subjects, while a low SI value signifies that the voxel is likely part of different modules in different subjects. Theoretically, SI ranges from 0 to n-1 (n-1 = 193 in this study) [16]. However, in practice the SI values are considerably lower than the possible maximum value of n-1 due to disjunction between modules across subjects. Figure 1 shows the SI image generated from all the subjects' modular organization, thresholded at SI > 15. This threshold was maintained throughout the manuscript to facilitate comparison between modular organizations. The areas of high SI correspond to areas that were consistently part of the same modules across subjects.
These areas include the occipital lobe, precuneus, posterior cingulate cortex, pre- and post-central gyri, medial frontal gyri and the components of the basal ganglia. During the calculation of the global SI map shown in Figure 1, we were able to determine which subject's module was the most representative at a particular node (see Materials and Methods) [16]. This representative module resulted in the largest SI value at that particular voxel location among all the subjects' modules. To further examine high SI areas, the most representative modules that correspond to the brain regions in Figure 1 were identified. [Figure 1 caption: Consistency of whole-brain functional modular organization across subjects. Global scaled inclusivity (SI) shows that several brain regions are consistently partitioned into the same modules across individuals. These areas include portions of the following cortices: visual, motor/sensory, precuneus/posterior cingulate, basal ganglia, and frontal.] These representative modules were then used to summarize consistency among subjects and SI was calculated with respect to these modules. The resulting images are module-specific SI images and summarize group consistency at a voxel-level. Module-specific SI images are analogous to coefficient of variation (CV) images, which are used in ICA analyses to summarize consistency of RSNs at the voxel-level [5]. The visual module covers the entire span of visual cortex and includes both primary and secondary cortices (Figure 2). This module is comparable to ICA components like components A and E in Damoiseaux et al. [5], RSN1 in De Luca et al. [6], and module M2b in Doucet et al. [7]. The corresponding module has also been reported in previous functional brain network analyses, including Module II of He et al. [11], Module 4 of Rubinov and Sporns [12], and the posterior module of Meunier et al. [10]. Thus, this module is highly consistent among individuals and easily identifiable by both ICA and network methodologies. Moreover, the secondary cortices of the occipital lobe exhibited high SI values (Figure 2), which is comparable to the reduced variability observed in visual components found by a previous ICA study [5]. The sensory/motor module (Figure 2) is analogous to the motor network identified by the seed-based correlation method [1]. The most consistent regions within this module include the pre- and post-central gyri. On the other hand, the supplementary somatosensory area (S2), surrounding auditory cortex and portions of the posterior insula show reduced consistency across subjects. This module roughly corresponds to component F in Damoiseaux et al. [5], RSN3 in De Luca et al. [6], and module M2a in Doucet et al. [7]. Similar to the results reported by Damoiseaux et al. [5], the consistency of this module was lower than that observed for both default mode network (DMN) and visual modules (Figure 2). Module I of He et al. [11] and Module 1 of Rubinov and Sporns [12] demonstrate similarities with our sensory/motor module. Interestingly, these previously reported modules also include portions of the insula and auditory cortices. These findings are not only consistent with ours but also with previous reports of the ICA results. The basal ganglia module (Figure 2) consisted of the caudate, globus pallidus, putamen, and thalamus. It also extended into the medial temporal lobe, temporal pole, parahippocampal gyrus, hippocampus, amygdala and cerebellum.
Interestingly, these brain regions have not been consistently classified into one component by ICA. While De Luca et al.'s RSN3 suggests some involvement of the hippocampus and thalamus [6] within the motor component, some ICA studies did not find a component similar to this module [5,7]. However, another ICA study by Damoiseaux et al. revealed a component consisting of the thalamus, putamen and insula (component K) [20] which led to other ICA studies on connectivity. In particular, the basal ganglia component has been shown to include portions of the striatum, such as the caudate and the globus pallidus [21][22][23]. Similarly, basal ganglia modules have been previously reported in studies that have used network methodologies. For example, Module V found by He et al. [11] and Module 3 by Rubinov and Sporns [12] contain all the regions of the basal ganglia. Variations of this have also been described in the central module of Meunier et al. [10] and in the RSN3 of De Luca et al. [6]. Though these findings contain similar regions as our module, they extend further into the insular and motor cortices. Functional connectivity of the cerebellum with the rest of the basal ganglia proved unique in our results compared to previous network module findings. Although global SI ( Figure 1) values did not indicate high modular consistency of the cerebellum across subjects, the module-specific SI map shows that it is consistently part of the basal ganglia module across subjects ( Figure 2). The default mode network (DMN) [4,24] was also identified as a consistent module across subjects ( Figure 2). This module included the precuneus (PCun), posterior cingulate cortex (PCC), inferior parietal cortex, superior medial frontal cortex, and anterior cingulate cortex (ACC). The PCC exhibited elevated SI values and was found to be the most consistent brain region of the DMN. In comparison, the SI values of the medial frontal gyri were attenuated, indicating this region to be less consistently found in the DMN module. The intra-modular consistency of this module appeared comparable to the reduced variability of the DMN component found by an ICA [5]. While this module covers the brain areas typically considered as part of the DMN, weaker SI in the frontal portion also suggests that the anterior and posterior portion of the DMN may not be as strongly coupled as the rest of the DMN. This may be because the connectivity pattern is slightly different between the anterior and the posterior portions of the DMN. Research supporting this hypothesis includes that of Andrews-Hanna et al. [25] using temporal correlation analysis. They determined that the DMN was composed of multiple components, including a medial core and a medial temporal lobe subsystem. Using ICA, Damoiseaux et al. [20] described two RSN components that together included the superior and middle frontal gyrus, posterior cingulate, middle temporal gyrus and superior parietal cortices. Finally, the work of Greicius et al. notes some differences in the seed-based connectivity of the DMN when the seed was placed in either the PCC or the ventral ACC [2]. Among the modules shown in Figure 2, there were more than one choice for the most representative subject in the sensory/ motor module and the default mode module. This can be seen in Figure 3 showing the image of the most representative subject by voxel locations. Within the motor / sensory strip and the precuneus, there were two subjects with the highest SI values. 
Even though either of these subjects could serve as the representative subject for these modules, the overall consistency of the entire module was still captured, as the module specific SI images appear strikingly similar even if different subjects were chosen as the representative subject ( Figure 3). The number of modules in Figure 1 seems surprisingly few, especially when compared to previous reports of ICA [5,7]. Our results, however, do not indicate the absence of modules similar to previously found ICA components. Instead, some were only found to be less consistently organized across subjects ( Figure 4). These modules do not necessarily include similar sets of nodes across subjects, and consequently do not exhibit high global SI values ( Figure 1). Two of such modules are the ventral (superior parietal cortex as well as superior and medial frontal gyri) and dorsal (superior parietal cortex, superior and dorsal lateral frontal, and precentral gyri) attention networks identified by previous fMRI analyses [3]. A previous ICA finding has combined these two systems into the same component [6] while others have separated them into separate components for the left and right hemispheres [5,7]. Here we present two distinct modules corresponding to the separate ventral and dorsal attention systems which have also been found in previous network analyses [11,13]. It is interesting to note that low SI values in our ventral and dorsal attention modules ( Figure 4) are in contrast to the stability of corresponding components found using ICA [5]. In addition to the ventral and dorsal attention modules, we present a module containing the cerebellum (Figure 4). Though the cerebellum was found to be consistently connected to the basal ganglia ( Figure 2), many nodes within the cerebellum formed a unique module by themselves. However, reduced modulespecific SI values indicate that this module demonstrates limited consistency across subjects. Thus, while the cerebellum may belong to the same module as the basal ganglia in some subjects, in another group of individuals the cerebellum belong to an isolated module as shown in Figure 4. We used SI to assess the consistency of modules across subjects rather than calculating the average network, which has been used by some researchers to generate a ''summary'' network for a study population [10][11][12][13]. An average network, which is produced by averaging correlation matrices across subjects, does not properly represent the characteristics of the individual networks [26]. Rather, it produces a network whose key modular structure is altered from that of the individual networks. Figure 5 shows an example of such an alteration. In particular, we generated an average network by averaging the correlation matrices from all the subjects (n = 194). This average correlation matrix was then thresholded (see Materials and Methods) and modular organization was then detected on the resulting adjacency matrix. The modular organization of this average network is shown in Figure 5a, with each color denoting a network module. The data used in our analysis represent a subset of the subjects used by Zuo et al. [27] and show that modular organization is very similar to theirs. Most striking, however, is the modules associated with the DMN. Using an average network, we found that two distinct anterior ( Figure 5b) and posterior (Figure 5c) modules exist. 
This is in stark contrast to the DMN module-specific SI image, which does not separate into anterior and posterior parts (Figure 5d). To add further confidence in this finding, DMN modules of the individuals of each data set were examined. We found that, in individual subjects, the DMN was generally not partitioned into separate anterior and posterior modules. [Figure 2 caption: Module-specific SI of the four most consistent modules across subjects. Row 1: Four functional modules were found to be highly consistent across subjects. These modules include the visual (yellow), sensory/motor (orange) and basal ganglia (red) cortices as well as the default mode network (precuneus/posterior cingulate, inferior parietal lobes, and medial frontal gyrus; maroon). Overlap among these modules was present but minimal (white). Rows 2-5: Module-specific SI images for each of the four most consistent modules, namely the visual (row 2), sensory/motor (row 3), basal ganglia (row 4), and default mode (row 5) modules. Note that the visual, sensory/motor and basal ganglia modules all show higher consistency across subjects than the default mode module. Among the default mode areas, the precuneus and posterior cingulate cortex show the greatest consistency across subjects.] A comparison between the other three SI modules shown in Figure 2 and the two shown in Figure 4 with those from the average network is presented in Figure 6. Here we show the modules for the visual and motor/sensory cortices as well as the basal ganglia from the average network. These three modules comprise similar areas to those represented in the module-specific SI images of the corresponding modules in Figure 2. In addition to the previously mentioned differences (Figure 5b, c), we show that the average modules corresponding to the ventral and dorsal attention brain regions are quite different from those found using module-specific SI in Figure 4. For instance, averaging correlation matrices across individual subjects resulted in the separation of the left from the right dorsal lateral prefrontal cortex. Neither of these modules included the superior portions of the parietal lobules. Instead, these brain areas were identified as a separate module. Interestingly, this module included bilateral secondary sensory cortices. Averaging alters not only modular organization, but also other network characteristics [26]. Figure 7 shows the distributions of node degree, or the number of edges per node, for all n = 194 subjects (blue) as well as that of the average network (red). As can be seen in Figure 7, the average network has far more low-degree nodes than any of the subjects in the data set. However, the average network lacks medium-degree nodes and thus its degree distribution drops faster than that of the individual networks. Various network metrics are also altered in the average network. [Figure 3 caption: Of the four functional modules that were found to be highly consistent across subjects, two (the motor/sensory cortices and the default mode) had multiple representative subjects that could have been chosen to calculate module-specific SI. Here we show that in each case the resulting module-specific SI map is similar in the brain areas that are included as part of the overall module. For instance, images of the most representative subject by voxel location (top panel) show that two individuals are the most representative for the motor and sensory cortices, respectively. However, when each of these individuals was used to calculate module-specific SI, it was found that the resulting module included both cortices.]
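The kind of comparison described here (degree distributions and summary metrics for an individual network versus an average network) can be illustrated with a small, self-contained sketch. The graphs below are synthetic stand-ins chosen only to show the computation, not the study's brain networks.

```python
# Toy comparison of the metrics discussed here (degree distribution, clustering
# coefficient, characteristic path length). Synthetic graphs, not the study data.
import networkx as nx
import numpy as np

def summarize(g):
    degrees = np.array([d for _, d in g.degree()])
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    return {
        "mean_degree": degrees.mean(),
        "clustering": nx.average_clustering(g),
        "path_length": nx.average_shortest_path_length(giant),
    }

individual = nx.watts_strogatz_graph(n=500, k=10, p=0.1, seed=1)    # small-world-like
average_like = nx.erdos_renyi_graph(n=500, p=10 / 499, seed=1)       # same density, random

for name, g in [("individual", individual), ("average-like", average_like)]:
    print(name, summarize(g))
```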
For example, the clustering coefficient and the path length, describing tight local interconnections and efficient global communication respectively [28], are significantly different (p < 0.0001, one-sample t-test) from those of the individual networks (see Table 1). Taking all these observations together, we can conclude that the average network does not accurately represent the characteristics of the individual networks in the data. Discussion In this work, consistency of modules in resting-state functional connectivity networks was examined at the voxel-level, a resolution comparable to that of group ICA. Module consistency across subjects (n = 194) was assessed and the results were compared to ICA components and network modules previously reported by other studies. Modular consistency was assessed using SI, which quantifies inter-subject variability in modular organization. The use of SI also allowed us to examine inter-subject consistency of a particular module at the voxel-level. This showed which brain regions within a module were consistent, in an analogous fashion to group ICA results [5]. Our global SI data show that only a handful of brain areas were consistently organized in modules. These modules alone, however, were not found to constitute the entire network. Instead, we show that other network modules are less consistent across subjects; multiple examples are presented to convey this point. Interestingly, despite the large number of nodes in our brain network data, the number of major modules did not change dramatically from the previously reported brain network modularity [9][10][11][12][13][29]. Increasing the network resolution (the number of nodes used to model the brain as a network) did not result in more modules. Power et al. [13] also discovered a similar number of modules despite differences in the network resolution. This is particularly interesting because Power et al. [13] cautioned against using network nodes that were not derived based on brain functional anatomy. Based on our finding and Power et al.'s finding, we conclude that modular structure is robust and can be ascertained despite differences in the parcellation scheme of the brain. However, a voxel-based network is advantageous since the shape of each module can be determined at finer granularity. A voxel-based network also enables examination of intra-modular characteristics within a particular brain area. Some RSNs, although reported in multiple studies, were not found to be consistent in our analysis. This may falsely suggest that there are only a handful of modules in the resting-state functional brain network. Other modules, however, exist and are found when modules from each subject's network are examined carefully. A few examples of such network modules are shown in Figure 4, with somewhat attenuated SI values compared to the RSN modules reported in Figure 2. Thus, the global SI image needs to be interpreted carefully. It cannot be used to identify a 'significant' module that exceeds a certain threshold, an approach commonly used in a typical fMRI analysis. It only enables assessment of modular consistency across subjects and does not eliminate the need to qualitatively evaluate network structure [30]. In fact, the extension of the basal ganglia module into the cerebellum (Figure 2) could not have been observed if this module were not carefully examined. The work of Kiviniemi et al.
[23] serves as a prime example of ICA of data similar in demographic characteristics and scanning protocol to the data in our analysis. Their data, which were collected at Oulu University -one of the sites for the data used in our analysis -identified several components that are similar to modules described in our study. Kiviniemi et al. used peri-Sylvian, occipito-parietal, frontal and temporal signal sources to describe 42 RSN components [23]. These results do bear similarity with our presented findings. For example, they described components consistent with the functional association of major cortical areas, including the visual, sensory and motor cortices. In addition to this, they present a component similar to the dorsal attention module presented in Fig. 4. However, modular analysis of consistent functional neighborhoods in the brain does in fact differ from the results of ICA. The most prominent dissimilarity is the number of components in relation to the number of modules. For instance, the visual module identified in our study corresponds to seven separate components found in Kiviniemi et al. [23]. Also, the ventral attention module described in our results comprised of the DLPFC (dorsolateral prefrontal cortex) and the superior parietal lobules. Using ICA, however, the DLFPC was found to be an isolated component. Finally, Kiviniemi et al. showed that, depending on the number of components, the anterior and posterior portions of the DMN are separated into distinct components [23]. When examining the consistency of modular organization across a group of subjects, one may be tempted to generate an average network and examine its modular organization. This approach seems intuitive and reasonable especially for those neuroimaging researchers who are accustomed to voxel-based analyses of neuroimaging data. The notion of average images may sound reasonable in fMRI analyses examining activation patterns through the averaging of multiple individual activation maps, hence one may believe that averaging connection strengths across subjects may also result in a network that summarizes the overall characteristics of the group. Although such an averaging process may be able to summarize the correlation between two particular nodes, it alters the characteristics of the network as a whole tremendously. Such altered characteristics include the modular organization ( Figure 5), degree distribution (Figure 7), and network metrics (Table 1). Moussa et al. also demonstrated that average metrics do not imply regional consistency [31]. Since the average network does not necessarily represent the characteristics of the networks it aims to represent, an alternative approach should be considered in summarizing a collection of networks. For the modular organization in particular, selecting a representative subject, based on the Jaccard index, is a simple solution [9,32]. The SI-based approach, as used in this paper, is a more sophisticated way to examine consistency of modular organizations across subjects. Several network science methods have been developed to compare the modular organization across multiple networks [15,16], thus application of such methods in brain network data is more appropriate than simply averaging correlation matrices. Our use of SI demonstrated consistency of the network modular structure quantitatively. However, there are some limitations associated with our approach. First, in the algorithm we used to identify modules [19], each node can only be part of one module. 
However, it is plausible that some parts of the brain, in particular multi-modal areas, may be associated with multiple modules at once. In recent years, a number of algorithms have been proposed to analyze overlapping modules [33][34][35], in which some nodes are assigned to multiple modules. Such an algorithm has been applied to an analysis of a 90-node structural brain network and overlap between modules has been outlined [29]. [Figure 6 caption: Selected modules from the average network. Shown here are the modules from the average network that correspond to the module-specific SI images shown in Figures 2 and 4. The modules from the average network that correspond to the motor/sensory cortices, the basal ganglia and the cerebellum were found to be similar with respect to their corresponding module-specific SI images. However, two distinctions were found in addition to those demonstrated in Figure 5. First, the average visual module includes only the area of the primary visual cortex. This is in contrast to the module-specific SI image for the visual cortex shown in Figure 2, which extends into secondary visual cortices. Second, the average network segregates the anterior from the posterior portions of the ventral and dorsal attention systems. In this case, the anterior portion consists of two modules, one for each of the bilateral dorsal lateral prefrontal cortices. Interestingly, the posterior element of both ventral and dorsal attention systems (superior parietal lobules) is not separated into bilateral portions. It does, however, include the secondary sensory cortices (S2).] However, interpretation of such overlapping modules is unclear. Moreover, since overlapping module algorithms tend to be computationally intensive, applying such methods to brain networks at the voxel-level may pose a significant challenge. However, the evaluation of modular consistency across a group of individuals can identify multiple modular structures that contain a single brain region. This was observed with the cerebellum in our work. Another limitation of our approach is that the algorithm to identify modules is imprecise. Identifying the true modular structure of a network is an NP-hard problem [36]. Most algorithms that find modular organization, including Qcut [19], can yield only an approximation to the true solution and have some variability associated with each approximated solution. To overcome this problem, we ran Qcut 10 times for each subject's network, and selected the most representative modular partition as the best solution (see Materials and Methods). Even then, the variability in the modular organization cannot be completely eliminated. However, we believe that, if the modular organization of the brain network is truly robust across subjects, our global SI image can identify nodes that belong to the same module despite some variability in modular partitions. Finally, some issues remain as inherent confounds. One example includes the effect of head movement correction on our analyses and their functional interpretation. For instance, the work of Van Dijk et al. [37] demonstrates the difficulty of controlling for head movement even after extensive correction. This confound has also been described in the work of Power et al. [38] and Satterthwaite et al. [39]. In summary, we found that the functional brain network at resting-state consisted of several modules that are highly consistent across subjects.
These modules were analogous to the RSNs found in previous ICA and network analyses, even at the voxel-level resolution. Consistency of these modules across multiple study sites, with different MRI scanners and imaging protocols, indicates robust yet consistent organization of the functional connectivity network at rest. The methodology used in this work can be further extended to examine alterations in the modular structure of the brain network under various cognitive states or neurological conditions. Data Data used in this work are publicly available as part of the 1,000 Functional Connectomes Project (http://fcon_1000.projects.nitrc.org/), a collection of resting-state fMRI data sets from a number of laboratories around the world. From all the data sets available, 4 data sets from 4 different sites were chosen, all consisting of young to middle-aged subjects (ages 20-42 years): Leipzig data (n = 37, male/female = 16/21), Baltimore data (n = 23, m/f = 8/15), Oulu data (n = 103, m/f = 37/66), and St. Louis data (n = 31, m/f = 14/17). BOLD fMRI data from a total of n = 194 subjects (m/f = 75/119) were included in our analysis, and all the images were acquired during resting-state with eyes open and fixated on a cross. Network Formation The resting-state fMRI time series data from each subject were realigned to the accompanying T1-weighted structural image and spatially normalized to the MNI (Montréal Neurological Institute) template using the FSL software package (FMRIB; Oxford, UK), and any non-brain voxels were removed from the fMRI data. The normalized fMRI data were masked so that only the gray matter voxels corresponding to the areas specified by the AAL (Automated Anatomical Labeling) atlas [40] were included in the subsequent analyses. A band-pass filter (0.009-0.08 Hz) was applied to the masked time series data to filter out physiological noise and low-frequency drift [18,41,42]. From the filtered data, confounding signals were regressed out, including 6 rigid-body transformation parameters generated during the realignment process and 3 global mean time courses (whole-brain, white matter, and ventricles) [18,41,42]. Then a cross-correlation matrix was calculated, correlating each voxel's time course to all other voxels in the data set. The resulting correlation matrix was thresholded with a positive threshold, yielding a binary adjacency matrix describing a network with each voxel as a node. In the adjacency matrix, 0 or 1 indicated the absence or presence of an edge between two nodes, respectively. The threshold was determined in such a way that the number of nodes N and the average degree K followed the relationship N = K^2.5. This thresholding method was used in order to match the edge density across subjects [18]. The resulting network had an edge density comparable to other types of self-organized networks of similar sizes [43]. N and K varied among subjects; the averages of N and K were 20,743 (range = 17,255-21,813) and 55.5 (range = 53.2-65.5), respectively. Module Identification In a network, the modular organization of nodes can be identified by finding densely connected groups of nodes that are only sparsely connected to other groups of nodes [44]. Thus a network can be partitioned into such groups of nodes, or modules, based on connectivity patterns. There are a number of community detection algorithms, calculating a metric known as modularity Q, a quality function describing the optimal modular partition [44].
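The network-formation and module-identification steps described above can be sketched as follows. This is a simplified stand-in under stated assumptions: toy random time courses replace preprocessed fMRI data, and networkx's greedy modularity routine stands in for the Qcut algorithm; only the density-matching rule N = K^2.5 and a subsequent modular partition are illustrated.

```python
# Sketch of the network-formation and module-identification steps described
# above. Toy data; greedy modularity maximization stands in for Qcut.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)
ts = rng.standard_normal((240, 500))        # 240 time points x 500 "voxels"

corr = np.corrcoef(ts.T)                     # N x N correlation matrix
np.fill_diagonal(corr, 0.0)

n_nodes = corr.shape[0]
target_k = n_nodes ** (1.0 / 2.5)            # average degree implied by N = K^2.5
target_edges = int(round(target_k * n_nodes / 2))

# Keep only the strongest positive correlations until the edge budget is met,
# which matches edge density across subjects regardless of correlation scale.
upper = corr[np.triu_indices(n_nodes, k=1)]
threshold = np.sort(upper)[::-1][target_edges - 1]
adjacency = (corr >= threshold).astype(int)
np.fill_diagonal(adjacency, 0)

graph = nx.from_numpy_array(adjacency)
modules = greedy_modularity_communities(graph)
print(f"mean degree = {adjacency.sum(axis=1).mean():.1f}, modules = {len(modules)}")
```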
Finding the optimal community structure, or maximizing Q, is an NP-hard problem [36]. Thus most algorithms only find an approximate modular partition of a network, and such algorithms often produce a different solution for each run. In this work, we used an algorithm called Qcut [19] to find the modular organization in each subject's brain network. Since Qcut is an algorithm producing a different solution in each run, it was run 10 times for each subject's network, and the solution producing the highest Q was selected as the representative modular partition for that subject. The number of modules varied across subjects, with 14.5 modules in each subject's network on average (range = 6-29). Global Scaled Inclusivity Scaled inclusivity (SI) was developed as a metric to evaluate consistency of the modular organization across multiple realizations of similar networks. It is calculated by measuring the overlap of modules across multiple networks while penalizing for disjunction of modules. For example, suppose a node V is part of module A in subject i and module B in subject j. Then the SI for node V, denoted as SI_V, is calculated as SI_V = |S_A ∩ S_B|^2 / (|S_A| × |S_B|) (1), where S_A and S_B denote the sets of nodes in modules A and B, respectively, and |·| denotes the cardinality of a set [16]. Figure 8 shows a schematic of how SI can be calculated across different subjects. Although the overall modular organization is similar across subjects, modules slightly vary from subject to subject (Figure 8a). To assess the similarity between two modules from two different subjects, SI can be calculated based on (1) (see Figure 8b). If the two modules A and B consist of the identical set of nodes, then SI_V = 1. As the overlap between S_A and S_B diminishes, the numerator of (1) decreases, leading to SI_V < 1. Or, if either S_A or S_B is larger than the other, then the denominator of (1) increases, resulting in SI_V < 1. SI can be calculated between all modules in a particular subject, or the referent subject, against modules from all the other subjects [16]. If there is any overlap between the referent subject's module and a module from another subject, then SI is calculated between the modules and the overlapping nodes are identified (see Figure 8c). This process results in maps of overlapping nodes between the referent subject's modules and the other subjects' modules, with the corresponding SI values (Figure 8c). A weighted sum of these maps is calculated, using SI as the weight, and the result is a subject-specific SI map. The subject-specific SI map shows the consistency of the referent subject's modules when compared to the modular organization of all the other subjects (Figure 8c). In the subject-specific SI map, each node's SI value reflects how consistently that particular node falls into the same module across subjects. Although a subject-specific SI map can summarize the consistency of the modular organization across subjects, it is highly influenced by the choice of the referent subject [16], as can be seen in Figure 8d. In order to avoid a potential bias caused by selection of a particular referent subject, subject-specific SI maps from all the subjects are summarized as a weighted average, with the Jaccard index for each subject as the weight. The Jaccard index summarizes the similarity in modular partitions between two subjects as a single number, ranging from 0 (dissimilar) to 1 (identical) [19]. The Jaccard indices are calculated between each subject against all the other subjects, and the resulting indices are averaged.
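The two set-overlap quantities used in this section can be written compactly. The sketch below implements Equation (1) as reconstructed above, together with a simple set-level Jaccard index, on toy modules; note that the Jaccard index actually used for weighting compares whole modular partitions between subjects, so the function here only illustrates the underlying set quantity.

```python
# Minimal sketch of the overlap measures discussed here: scaled inclusivity
# between two modules (Equation (1)) and a set-level Jaccard index. Toy sets.
def scaled_inclusivity(module_a, module_b):
    """SI = |A ∩ B|^2 / (|A| * |B|); 1 for identical modules, < 1 otherwise."""
    a, b = set(module_a), set(module_b)
    if not a or not b:
        return 0.0
    overlap = len(a & b)
    return overlap ** 2 / (len(a) * len(b))

def jaccard(set_a, set_b):
    """|A ∩ B| / |A ∪ B|, ranging from 0 (disjoint) to 1 (identical)."""
    a, b = set(set_a), set(set_b)
    return len(a & b) / len(a | b) if (a or b) else 1.0

mod_subject_i = {1, 2, 3, 4, 5, 6}
mod_subject_j = {4, 5, 6, 7, 8}

print(scaled_inclusivity(mod_subject_i, mod_subject_j))  # 9 / 30 = 0.3
print(jaccard(mod_subject_i, mod_subject_j))             # 3 / 8  = 0.375
```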
The average Jaccard index for each subject describes how similar that subject's modular partition is to all the other subjects'. The average Jaccard indices are appropriately scaled during the weighted averaging process. The resulting weighted average map is the global SI map, demonstrating the consistency of modules at each node (see Figure 8d). The group SI image is scaled between 0 and n-1; if SI = n-1 at a particular node, that means that node is in the same module with exactly the same set of nodes in all the subjects. Needless to say, such an occurrence is very rare in the brain network. Details on the calculation of the global SI is found in Steen et al. [16]. In order to calculate SI across subjects, it is imperative that all the subjects' networks have the same set of nodes. Since some subjects' networks had fewer nodes than that of the others, artificial isolated nodes were also included to match the number of nodes. These artificial nodes were treated as a single dummy module during the calculation of SI, and later eliminated from the group SI image. Module-Specific Scaled Inclusivity As described above, the global SI image is calculated based on multiple subject-specific SI images (see Figure 8). Consequently, at a particular node location, it is possible to determine the subject yielding the highest SI value, referred as the representative subject ( Figure 9a). The highest SI value at that particular node location indicates that the module from the representative subject is considered most consistent across subjects. It is possible to visualize which subject is most representative at different voxel locations, as seen in Figure 3. It should be noted that representative subjects represented in Figure 3 exhibit some spatially consistent pattern, indicating that the most representative subject at one voxel location is likely the most representative subject in the neighboring voxels as well. Once the representative subject is identified, its modular organization is examined and the module containing the node of interest is identified (Figure 9a). That module is considered as the representative module yielding the highest SI at that particular node location. Once the representative module is identified in the representative subject's network, then it is possible to evaluate SI between that particular module and modules from all the other subjects. Modules with any overlap with the representative module are recorded, along with the corresponding SI value (Figure 9b). All nodes in the overlapping modules, not just overlapping nodes, are recorded during this process; this is in contrast to the global SI calculation (Figure 8d) in which only the overlapping nodes are recorded. Finally a weighted sum of the modules is calculated, with SI values as weights, resulting in the module-specific SI map (Figure 9b). Such a module-specific SI map shows the consistency of the representative module across subjects. This is because a module-specific SI map summarizes any modules centered around the representative module by summing them together. Although nodes belonging to the representative module may have high SI values, nodes outside the representative module can also have high SI values if those nodes are consistently part of the same module across subjects [16]. A module-specific SI image has the same range as the global SI image, from 0 to n-1. As in the global SI image, SI = n-1 means all the subjects had exactly the same module comprising exactly the same set of nodes. 
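As a rough sketch of the module-specific SI aggregation just described (an interpretation of the text, not the authors' code), the function below takes a chosen representative module and adds every overlapping module from the other subjects into a nodal map, weighting all of that module's nodes by its SI with the representative module. The partitions and node labels are toy assumptions.

```python
# Rough sketch of module-specific SI aggregation: every module overlapping the
# representative module contributes ALL of its nodes, weighted by its SI with
# the representative module. Partitions are toy dicts mapping node -> label.
from collections import defaultdict

def modules(partition):
    groups = defaultdict(set)
    for node, label in partition.items():
        groups[label].add(node)
    return list(groups.values())

def si(a, b):
    inter = len(a & b)
    return inter ** 2 / (len(a) * len(b)) if inter else 0.0

def module_specific_si(representative_module, other_partitions, nodes):
    si_map = {node: 0.0 for node in nodes}
    for partition in other_partitions:
        for mod in modules(partition):
            weight = si(representative_module, mod)
            if weight == 0.0:
                continue
            for node in mod:                 # all nodes of the overlapping module
                si_map[node] += weight
    return si_map

# Toy example: a representative module from one subject, two other subjects.
rep_module = {0, 1, 2}
others = [
    {0: "x", 1: "x", 2: "y", 3: "y", 4: "y"},
    {0: "m", 1: "m", 2: "m", 3: "n", 4: "n"},
]
print(module_specific_si(rep_module, others, range(5)))
```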
Average Network In brain network analyses involving networks from multiple subjects, some researchers generate an average network in order to summarize the common network characteristics present among the study subjects [10,11,13,27]. [Figure 8 caption: A schematic of global SI calculation. Although the modular organization appears similar across subjects, modules slightly vary from subject to subject (a). Different colors denote nodes belonging to different modules. Among the subjects, one subject is chosen as the referent subject, and any overlap between that subject's modules and any other modules from the other subjects is determined (b). This process results in maps of overlapping nodes between modules, along with SI values summarizing the fidelity of the overlaps. A weighted sum of the overlap maps, with the SI values as the weights, is calculated, yielding a subject-specific SI map (c). A weighted average of the subject-specific SI maps, with the Jaccard indices as weights, is then calculated, resulting in the global SI map summarizing the consistency of the modular organization across subjects at the nodal level (d).] [Figure 9 caption: A schematic of module-specific SI calculation. For a particular node of interest, the most representative subject with the highest SI is determined from subject-specific SI maps (a). Then the modular organization of the representative subject's network is examined, and the module containing the node of interest is identified as the representative module. Next, modules with any overlap with the representative module are identified, and the corresponding SI values are calculated (b). A weighted sum of the overlapping modules is calculated with the SI values as weights, summing modules centered around the representative module. The resulting module-specific SI shows the consistency of the representative module across subjects.] However, it is not clear if such an average network truly captures the characteristics of the individual networks it aims to represent. In particular, it is not clear whether the modular organization of the network is preserved in an average network. Thus, in order to examine whether an average network has similar characteristics as the individual networks, we generated an average network for the data we used in this study. This was done by averaging the correlation matrices from all the subjects, element by element. Since the number of voxels differed across subjects as described above, for each element in the correlation matrix, some subjects may have a valid correlation coefficient for the corresponding node-pair whereas other subjects may not have a valid correlation coefficient because either node in the node-pair is missing. Thus, in the calculation of the average correlation matrix, the denominator was adjusted for the number of all valid correlation coefficients at each element of the matrix. The resulting correlation matrix was thresholded in the same way as described above, producing an adjacency matrix based on the average correlation. The modular organization of this average network was examined by the Qcut algorithm as described above.
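A minimal sketch of the average-network construction described above is given below, assuming per-subject correlation matrices with missing nodes stored as NaN; np.nanmean reproduces the "divide by the number of valid coefficients" rule. The final threshold value is an arbitrary placeholder rather than the density-matched threshold used for the single-subject networks.

```python
# Sketch of the "average network" construction: element-wise average of
# per-subject correlation matrices, dividing each element by the number of
# subjects with a valid (non-NaN) value there.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_subjects = 50, 5

stack = np.empty((n_subjects, n_nodes, n_nodes))
for s in range(n_subjects):
    ts = rng.standard_normal((100, n_nodes))
    corr = np.corrcoef(ts.T)
    missing = rng.choice(n_nodes, size=3, replace=False)   # pretend 3 nodes absent
    corr[missing, :] = np.nan
    corr[:, missing] = np.nan
    stack[s] = corr

# nanmean divides each element by the number of valid contributions.
average_corr = np.nanmean(stack, axis=0)
np.fill_diagonal(average_corr, 0.0)

# The averaged matrix would then be thresholded as for the single subjects;
# 0.2 is only a placeholder cut-off for this toy example.
adjacency = (average_corr >= 0.2).astype(np.uint8)
print(adjacency.sum() // 2, "edges in the average network")
```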
v3-fos-license
2023-03-08T06:18:24.681Z
2023-03-06T00:00:00.000
257376929
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": null, "oa_url": null, "pdf_hash": "8f0f8de9e07184d9e68ebc03fb9063647ca05485", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42185", "s2fieldsofstudy": [ "Medicine", "Biology", "Environmental Science" ], "sha1": "3b54da53fbdb1c33e04daae9b01224386c0a1d61", "year": 2023 }
pes2o/s2orc
Experimental Infection of North American Deer Mice with Clade I and II Monkeypox Virus Isolates The global spread of monkeypox virus has raised concerns over the establishment of novel enzootic reservoirs in expanded geographic regions. We demonstrate that although deer mice are permissive to experimental infection with clade I and II monkeypox viruses, the infection is short-lived and has limited capability for active transmission. Monkeypox virus (MPXV; genus Orthopoxvirus, Poxviridae), which causes mpox disease, is a zoonotic pathogen that is endemic in Central Africa (clade I) and Western Africa (clade II) (1). In mid-May 2022, the World Health Organization first reported an increasing number of mpox cases in nonendemic countries, most of which had no established travel links to endemic regions (2). By October 2022, the outbreak encompassed >100 countries with reported confirmed mpox cases (3). The global spread of MPXV outside of regions in which this virus was known to be endemic raises concerns over reverse zoonotic events resulting in the establishment of novel wildlife reservoirs. Small mammals, including rodents, have previously been implicated as enzootic reservoirs of MPXV. In North America, studies have shown that prairie dogs are susceptible to MPXV infection and may serve as a potential reservoir, but data on other wild rodents are limited (4). Peromyscus species rodents have an extensive and geographically diverse host range spanning most regions across North America and are well-established reservoirs for several zoonotic pathogens (5). We evaluated the competency of deer mice (Peromyscus maniculatus rufinus) as a potential zoonotic reservoir for MPXV by using representative isolates from both clades. We infected groups of 12 adult (>6 weeks of age) deer mice with 1 of 3 MPXV isolates through intranasal instillation. The isolates included a clade II human isolate from the 2022 outbreak (MPXV/SP2833) (challenge dose 10^6 PFU); a second clade II virus isolated directly from a North American prairie dog (USA-2003) (challenge dose 10^6 PFU); and a historical clade I isolate (MPXV/V79-1-005) (challenge dose 10^4 PFU). For each virus preparation, we administered the maximum challenge dose based on titration on Vero cells. On days 4 and 10 postinfection, we euthanized 3 male and 3 female mice and collected selected solid organs for analysis of viral titers using molecular assays targeting the envelope protein gene (B6R) (6) and infectious viral quantification assays. In addition, we collected oral and rectal swab specimens and tested them similarly to assess the potential for shedding. We conducted animal studies in accordance with the Canadian Council of Animal Care guidelines and following an animal use document approved by an institutional Animal Care and Use Committee, in a Biosafety Level 4 laboratory of the Public Health Agency of Canada. We conducted fully validated molecular assays in accordance with Public Health Agency of Canada special pathogens diagnostic procedures. Throughout the course of the study, we observed no obvious signs of disease in any of the infected deer mice. We did not record daily weights because of the requirement for anesthetizing animals before any hands-on manipulation. Analysis of tissue samples from mice infected with the 2022 Canada isolate (MPXV/SP2833) revealed limited and sporadic spread of MPXV beyond the sites of inoculation (nasal turbinates and lungs) (Table).
By comparison, USA-2003 appeared to disseminate beyond the respiratory tract, resulting in uniform detection of MPXV DNA in liver and spleen specimens collected at 4 days postinfection (dpi). The clade I virus (MPXV/V79-1-005) yielded results more similar to those for USA-2003; nasal turbinate, lung, liver and spleen samples were positive at 4 dpi. By day 10 dpi, organ specimens from most mice across the 3 infection groups were trending toward clearance (Table). Infectious titers conducted on lung and nasal turbinate specimens collected at both timepoints from the 3 challenge groups corroborated these findings and demonstrated decreasing viral titers between the 2 timepoints (Figure). Of note, the clade I virus did not achieve high titers in either organ, even when analyzed at 4 dpi. Although this finding may suggest the MPXV/V79-1-005 isolate does not replicate as efficiently in deer mice, the apparent low viral titers observed may be attributable to the lower inoculum dose. A similar challenge dose of this strain resulted in lethal infection in CAST/EiJ mice (7). Further, subsequent cell culture propagations of MPXV/V79-1-005 resulted in similar titers as the clade II isolates used previously, suggesting that all 3 replicate to a similar extent on Vero cells. Nevertheless, follow-up studies with other clade I viruses are warranted. We collected oral and rectal swab specimens to assess shedding and the potential for transmission of MPXV from infected deer mice. Overall, shedding, as suggested by the presence of MPXV DNA in swab extracts, was readily detectable in deer mice inoculated with either clade II virus at day 4, but we noted decreasing levels of positivity by day 10. Shedding of MPXV/V79-1-005 (clade I) was far less than that of either of the clade II viruses we evaluated (Table). Our study suggests that these rodents may support a short-term but abortive infection with at least clade II MPXV isolates, although with limited capacity to spread. Given the short duration of infection, these animals probably do not represent a viable enzootic reservoir for MPXV. Further studies should be conducted on other rodents in North America and Europe to assess their competency as vectors or reservoirs of MPXV. Particular interest should be given to Rattus species rodents that may frequently come into contact with medical waste containing viable MPXV.
v3-fos-license
2022-07-31T21:37:32.310Z
2022-07-30T00:00:00.000
251211971
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2072-666X/13/8/1224/pdf?version=1659321379", "pdf_hash": "6c0480fd6495b8388eb0c19b8d912509001f2620", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42186", "s2fieldsofstudy": [ "Chemistry" ], "sha1": "4b12357adc507eb32a44c1157320e2f23b90d29d", "year": 2022 }
pes2o/s2orc
A Portable ‘Plug-and-Play’ Fibre Optic Sensor for In-Situ Measurements of pH Values for Microfluidic Applications Microfluidics is used in many applications ranging from chemistry, medicine, biology and biomedical research, and the ability to measure pH values in-situ is an important parameter for creating and monitoring environments within a microfluidic chip for many such applications. We present a portable, optical fibre-based sensor for monitoring the pH based on the fluorescent intensity change of an acrylamidofluorescein dye, immobilized on the tip of a multimode optical fibre, and its performance is evaluated in-situ in a microfluidic channel. The sensor showed a sigmoid response over the pH range of 6.0–8.5, with a maximum sensitivity of 0.2/pH in the mid-range at pH 7.5. Following its evaluation, the sensor developed was used in a single microfluidic PDMS channel and its response was monitored for various flow rates within the channel. The results thus obtained showed that the sensor is sufficiently robust and well-suited to be used for measuring the pH value of the flowing liquid in the microchannel, allowing it to be used for a number of practical applications in ‘lab-on-a-chip’ applications where microfluidics are used. A key feature of the sensor is its simplicity and the ease of integrating the sensor with the microfluidic channel being probed. Introduction The accurate and rapid measurement of the pH value of a solution is important in determining its chemical condition, and hence such measurements are widely needed by and used in industry [1]. Measuring the pH is essential not only for finding the key characteristics of a substance but also in the management of many chemical reactions. Further, pH measurement is used in nearly all industries that deal with water contamination and purity, not only the chemical industry but also public organizations, including the agriculture and manufacturing industries. Some of these techniques can be employed in an optical fibre-based configuration, as that offers advantages (in contrast to the electrical counterparts) in terms of immunity to electromagnetic interference and resistance to harsh and corrosive chemical environments, also allowing a remote sensing capability [36][37][38]. In several of the techniques mentioned above, optical fibres have been used as a passive element (simply to transport light to and from the sensing head [39]), whereas in many others they play an active role in the sensing process itself [40][41][42]. One of the main issues faced when developing pH sensors for microfluidics is the small volume of the liquid used, coupled with a constant flow within the device, which makes any direct pH measurement challenging. This field of research and development in pH monitoring is an active one and has been for many years. When introducing a new sensor device, it is important to assess the strengths and weaknesses of those that have been reported before and to build on that-the recent paper by some of the authors have done that [1] (and this is not reproduced here). In summary, a brief overview comparison of the key features of electronic and optical fibre sensors, applied for measuring pH, can be used to highlight both their main characteristics and advantages/disadvantages, which then is helpful in the design of the sensor discussed in this work. For example, a range of commercial pH sensors has been designed to work in the world's most extreme liquid analysis applications [43], for tough industrial applications. 
This sensor has been optimized to create measurement solutions for extreme applications, such as precious metal refining (gold, copper, nickel and zinc), or for titanium dioxide production, ammonium nitrate, solvent extraction and industrial wastewater applications. By comparison to the optical fibre sensor discussed here, the commercial device [43] requires a solid-state reference, combined with a glass electrode-new types of toughened glasses have been used when conventional glass or lab-grade electrodes are not suitable. The optical fibre approach avoids the need for such electrodes in the measurement process. Thus, in spite of considerable work to date worldwide, and indeed over several decades, there is still considerable scope for improvement of the device in terms of their suitability for long-term use in real-life applications. Many of the previously reported optical sensors demonstrated in laboratories either are not readily portable for use in the field [26,27,37,44], require to be used in darkness [26,27,39,44] (to avoid interference from the ambient light) or in a static liquid [27,37,39]: all these render them less than optimal for use in most industrial situations. In this investigation, which aims to validate the approach put forward and which is strongly application-focused, a study of these problems that restrict in-the-field use and are associated with some of previously developed and reported optical pH sensors are addressed. Acrylamidofluorescein (AAF) dye-based optical fibre sensors have been designed for a variety of real-world applications, including the important measuring of pH of a flowing liquid in a microfluidic channel. Liao et al. [45] provided an interesting review paper, and this provided some useful background through an overview of the state-of-the-art. Further, Moradi et al. [46] referenced the use of a polymethylmethacrylate (PMMA) mixing device actively to produce solutions at different pH values using HPTS. Design-wise, this has some similarities to the work reported herein, in the sense that it requires a larger detection chamber and they used two detectors and fluorescence-based detection of pH. Zamboni et al. [47] used a different but interesting technique of pH measurement. The sensor is inherently built into the silicon device itself but created a pH sensor, which differs from the approach herein, in that it is not optical. This does show that different methods can be used, and this gives the user maximum flexibility in the choice of method. In addition, Pinto et al. [48] employed a colourimetric technique using an indicator dye-this is also an interesting but different technique using photodetectors. In their paper, Budinski et al. [49], in their paper, detailed the manufacturing of a glass sensor that uses optical absorbance at the reagent-specific wavelength to determine pH, while Elmas et al. [50] discussed a photometric sensing method in which the chip is made in glass for a clear pathway. Thus, there is considerable work in the literature on which to build as well as to point in new directions that can be exploited herein. This design discussed in this work was created to overcome some of the issues seen with prior research and thus to take advantage of the excellent work done by others (discussed above). 
In the design proposed herein, not only are the sensor readings conveniently collected under ambient light conditions, but a perylene red dye is also added to the probe as an 'internal reference' to allow a correction to be made that minimizes the influence of important, potentially interfering external factors on the instrument reading, such as light source intensity fluctuations and temperature changes. The microfluidic channels (from 'lab-on-a-chip' applications) considered here were used in this demonstration because of their increased usage in several important, practical areas where pH measurement is needed, such as chemical analysis [51,52] and biological analysis [53][54][55][56]. Since such microfluidic channels can readily be realized at low cost and show important intrinsic advantages (in particular, needing only a small volume sample of reagent), they are key to creating effective, real-time point-of-care devices, including the important 'lab-on-a-chip' devices that are being widely used today [51][52][53][54][55][56][57][58][59]. The major novelty and key insight of this work is the demonstration of a portable, plug-and-play microfluidic pH sensor that works well under ambient light conditions, providing an additional measurement tool for the user.

Principle of Operation

The sensor developed uses the protonation-deprotonation of the fluorescent AAF dye immobilized at the tip of the optical fibre, represented as HA, in an aqueous solution; this is the reason that the pH-induced intensity change is observed. The fluorescence intensity of the deprotonated (basic) form is greater than that of the protonated (acidic) form, and this reaction in its equilibrium form, depicted in Figure 1, can be represented by:

HA ⇌ A− + H+ (1)

where HA and A− are the protonated and deprotonated forms of the dye, respectively. The relationship between the concentrations of the protonated and deprotonated forms and the value of pH is governed by the Henderson-Hasselbalch relation, given in Equation (2):

pH = pKa + log10([A−]/[HA]) (2)

where [A−] and [HA] are the concentrations of the deprotonated and protonated forms of the fluorescent dye, respectively, and pKa is an acid-base constant. Since the concentrations of the protonated and deprotonated forms are directly proportional to the intensity of the fluorescence observed, Equation (2) can be written in terms of the observed intensities, as shown in Equation (3):

pH = pKa + log10((F - Fmin)/(Fmax - F)) (3)

where Fmin, Fmax and F are the fluorescence intensity of the fully protonated system, the fluorescence intensity of the fully deprotonated system and the measured fluorescence of the system, respectively. In the case where a reference signal is used, the fluorescence intensity ratio (R) can be determined by dividing the signal intensity by the reference intensity (Fref), as given in Equation (4):

R = F/Fref (4)

In the present work, the fluorescence intensity from a perylene red dye was used as a reference because it has a convenient, overlapping excitation band with the AAF dye; thus, a single light source is sufficient for the excitation of both chemicals. The presence of Fref can then be used to modify Equation (3) to Equation (5), as shown below:

pH = pKa + log10((R - Rmin)/(Rmax - R)) (5)

where Rmin and Rmax are, respectively, the minimum and maximum ratio obtained. Thus, R can be written as:

R = Rmin + (Rmax - Rmin)/(1 + 10^(pKa - pH)) (6)

It can be seen that Equation (6) shows the 'S-shaped' relationship between the fluorescence ratio, R, and the value of the pH, which is centred around the pKa value.
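To make the ratiometric relations above concrete, the short Python sketch below (illustrative only; the authors' own acquisition code was written in MATLAB and is provided in their supplementary material) evaluates the forward model of Equation (6) and inverts Equation (5) to recover pH from a measured ratio R. The pKa, Rmin and Rmax values used here are placeholders standing in for the calibration results.

```python
import numpy as np

def ratio_from_ph(ph, pka, r_min, r_max):
    """Forward model, Equation (6): sigmoid ratio R as a function of pH."""
    return r_min + (r_max - r_min) / (1.0 + 10.0 ** (pka - ph))

def ph_from_ratio(r, pka, r_min, r_max):
    """Inverse relation, Equation (5): recover pH from the measured ratio R."""
    return pka + np.log10((r - r_min) / (r_max - r))

# Placeholder calibration constants (assumed values for illustration)
pka, r_min, r_max = 7.33, 0.40, 1.10

for ph in (6.0, 7.0, 7.5, 8.5):
    r = ratio_from_ph(ph, pka, r_min, r_max)
    print(f"pH {ph:4.1f} -> R {r:5.3f} -> recovered pH {ph_from_ratio(r, pka, r_min, r_max):4.2f}")
```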
Equation (6) is used to calibrate the response of the sensor in a static liquid, as well as in the microfluidic channel. As will be shown later, the dye (AAF + perylene red) becomes coated on the side of the fibre during the functionalization process; however, it should be noted that only the dye present on the tip of the optical fibre participates in the creation of a pH-sensitive signal, not the dye present on the side of the fibre, as the sensing mechanism is not, in this case, based on the modification of the evanescent wave 'tail' (a technique previously reported in the operation of several evanescent wave-based sensors [60][61][62]). Thus, the optical fibre acts only as a passive element, i.e., carrying the light from the source to the dye and then from the dye to the spectrometer.

Chemicals and Reagents

All chemicals were of analytical grade, purchased from Sigma-Aldrich, UK (except perylene red, which was purchased from Kremer Pigmente, Germany). All solvents used were of HPLC or analytical grade from Fisher Scientific UK. All aqueous solutions were prepared using deionized water.

Synthesis of the Fluorescent Dye and Optical Fibre Probe Preparation

AAF was prepared from fluoresceinamine according to the procedures reported in the literature [63], and perylene red was added to the pre-polymerization mixture to provide an 'internal reference'. After synthesis of the fluorescent dye, it was immobilized on the surface of the selected optical fibre according to the method successfully employed by some of the authors in the development of other such probes (previously reported in the literature, e.g., [39]). In summary, a 150 mm long polymer-clad multimode silica fibre, with a 1000 µm core diameter (FT1000UMT; Thorlabs, UK), was used as the substrate for the coating. The 1000 µm core diameter multimode optical fibre gives a much greater coating area than single-mode communications-type fibre (with a typical core diameter of around 5 µm) and, in addition, provides the sensor with good mechanical strength. The polymer cladding was removed from the optical fibre, and it was (manually) polished using 5, 3 and 1 µm grit polishing sheets (LFG series, Thorlabs, UK), in that sequence, to minimize any unwanted losses during light coupling. After polishing, one facet was glued into an SMA connector (11050A; Thorlabs, UK), while the other end, on which the dye-based coating was to be formed, was functionalized by immersing it in 10% KOH in isopropanol for 30 min, with subsequent rinsing in copious amounts of distilled water and drying with compressed nitrogen. Following that, it was treated in a 30:70 (v/v) mixture of H2O2 (30%) and H2SO4 (95%, laboratory Reagent Grade) (Piranha solution) for 30 min, rinsed in distilled water for 15 min and dried in an oven at 100 °C for 30 min. This procedure leaves the surface with exposed hydroxyl groups, which facilitate the bonding of a silane agent. The fibre surface was then modified by silanizing for 2 h in a 10% solution of 3-(trimethoxysilyl)propyl methacrylate in dry ethanol. The fibre was washed repeatedly with ethanol in an ultrasonic bath and subsequently dried in an oven at 70 °C for 2 h. This procedure functionalizes the fibre surface with polymerizable acrylate groups. The pre-polymerization mixture was prepared by dissolving AAF (4.0 mg, 0.01 mmol), perylene red reference polymer (2.5 mg), ethylene glycol dimethacrylate crosslinker (150.9 µL, 0.8 mmol), acrylamide co-monomer (10.0 mg, 0.14 mmol) and 2,2′-azobisisobutyronitrile initiator (1.1 mg) in 222 µL of dry MeCN.
The solution was purged thoroughly with argon for 10 min. A small volume of the solution was placed into a capillary tube using a syringe, and the distal end of the fibre was inserted. The assembly was sealed quickly with PTFE tape and polymerized in an oven at 70 °C for 16 h. This procedure forms a polymer layer on both the cylindrical surface and the distal end surface of the fibre. The probe prepared by this procedure is shown in Figure 2a, where it can be seen that the distal end of the probe shows a distinctive colouration due to the presence of the fluorophore. The sensor tip was washed repeatedly with MeOH-AcOH (8:2, v/v) in an ultrasonic bath, followed by the same procedure with MeOH alone, to remove all unreacted materials and the excess polymer formed that was not directly bound to the fibre. The probe was then stored at room temperature in a dark box until needed for use in the experiments described below.

Fabrication of the Microfluidic Channel

The device consists of a circular sensing well, 1 cm in diameter and 100 µm in depth, with a single inlet port and two outlet ports to provide an alternative flow path in the event of a blockage due to trapped bubbles within the channel. The inlet and outlet channels are 200 µm wide and 100 µm in depth. The channel microstructure used in this work was manufactured 'in-house' using a rapid prototyping process previously reported by Johnston et al. [64] where, in summary, SU-8 2050 (A-Gas Electronic Materials, Warwickshire, UK) moulds were fabricated on silicon wafers. The silicon moulds provide a replication template for any future castings. PDMS structures were then cast from Sylgard 184 elastomer (Onecall Farnell, Leeds, UK) mixed in the standard 10:1 component ratio. All PDMS devices cast from the same SU-8 mould replicate the structures on the mould. The PDMS was then cured at 65 °C for 2 h and afterwards attached to a 5 mm poly(methyl methacrylate) (PMMA) sheet, which was used to close the fluid channels and provide fluid connectivity. The PMMA (Weatherall, Aylesbury, UK) was drilled to implement accurately located 'through vias' for inserting the locating tubing and the fibre optic sensor developed. All through-vias in the PMMA were drilled, using a standard benchtop drill press, as countersinks to ensure that any adhesives used to secure the tubing would not flow into the channel. The tubing used was 1.57 mm (1/16 in) OD and 0.76 mm (0.03 in) ID PEEK tubing (Cole-Parmer, Eaton Socon, UK). The vias were drilled in two stages: a 1 mm hole was first made through the PMMA bulk, followed by a 1.8 mm hole drilled halfway through the PMMA bulk. The PDMS and PMMA components were then bonded after modification of the PMMA substrate with a silane. The in-house protocol used was as follows. Clean, dry PMMA was exposed to UV-Ozone using a PSD-UVT system (Novoscan Technologies Inc., Ames, IA, USA) for 5 min. The PMMA was then silanized using aminopropyltriethoxysilane (APTES, 80 µL in a gas-tight 100 mL container) vapour for 1.5 h at 60 °C, at atmospheric pressure.
The cooled PMMA substrate was immediately rinsed with isopropanol and dried with filtered nitrogen gas. Clean, dry PDMS was then exposed, bonding side up, to UV-Ozone for 3 min with the PSD-UVT system. The two treated components were carefully aligned and then brought together immediately. The composite device was then baked at 60 • C for 12 h to create a strong, irreversible covalent bond between the two materials. Integration of Optical Fibre with Microfluidic Channel The microfluidic device was designed and manufactured to permit the fitting of the optical fibre sensor into the sensing channel without the need for any additional sealing element. In the absence of any permanent fixture between the sensor and the microfluidic device, this allows the sensor to be extracted for reuse if needed. The optical fibre used was carefully inserted into the 1.2 mm diameter hole drilled into the PMMA cover to ensure a snug fit. To avoid scratching the coating on the tip, a lab jack was used to position the fibre in place at a depth of 5 mm-the same as the thickness of the PMMA sheet and sealed with a polymer (Elastosil RT 601A, Wacker Chemie, Munish, Germany). Due to the size of the hole drilled, the fibre was able to sit tightly in the hole drilled, even without the use of the polymer. Whilst unnecessary, the decision to use RT601 A silicone rubber was taken to provide further security-to further secure the fibre so that it is not movable during experiments and to create a semi-permanent, water-tight seal around the fibre to prevent leakage. Using a flexible polymer, instead of permanent glue, enables a clean extraction of the sensor, if necessary. Elastosil sets to a shore hardness of 45, which is akin to the mechanical properties of rubber bands, hence, that it could be easily pulled apart, without damaging either the sensor or the microfluidic device, allowing both parts to be re-used, as necessary. Figure 2b,c, respectively, show the optical fibre sealed in the microfluidic channel and the top and front schematic views of the setup. One out of three through-vias was used as an inlet, connected to the syringe pump through the tubing while the other two were used as outlets. The optical fibre was inserted in the central chamber in such a way that its coated tip will be in contact with the flowing liquid. Such a setup can easily be modified for injecting two reagents (from two through-vias), and thus the measurement of the pH of the resultant solution in the central chamber. Characterization Setup The schematic of the setup used to investigate the performance of the sensor showing the key instruments needed for the characterizations carried out is shown in Figure 3a,b. As can be seen from the figure, the LED source (model number NS375L-ERLM; λ = 395 nm; power = 3 mW) was coupled to one end of the 2 × 1 bundle (φ = 230 µm; Ocean Optics) with the help of a collimating and focusing lens (not visible in the figure as it was enclosed in the black LED source box), with a further end connected to a portable spectrometer (Maya-type 2000PRO; Ocean Insight, Wales, UK). The third (and the last) end, containing the other ends of the source and the 'spectrometer fibres' was connected to the optical fibre probe. It should be noted that the spectrometer is not strictly necessary (it was used here as one was available in the laboratory) but for a lower-cost option, a photodiode with two band-pass filters could have been used. This would further reduce the setup size and, of course, the cost. 
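Before the characterization results are presented, the ambient-light-rejection and ratiometric readout scheme described in the next subsection (LED on for a 'bright' spectrum, LED off for a 'dark' spectrum, digital subtraction, then the ratio of the AAF signal peak near 534 nm to the perylene red reference peak near 600 nm) can be sketched as follows. This Python sketch is illustrative only: the peak windows, averaging scheme and synthetic spectra are assumptions, and the authors' actual code was written in MATLAB.

```python
import numpy as np

def peak_area(wavelengths, spectrum, centre_nm, half_width_nm=15.0):
    """Integrate the spectrum over a window around a peak centre (assumed window width)."""
    mask = np.abs(wavelengths - centre_nm) <= half_width_nm
    return np.trapz(spectrum[mask], wavelengths[mask])

def fluorescence_ratio(wavelengths, bright, dark, signal_nm=534.0, reference_nm=600.0):
    """Ambient-light-corrected ratio of the AAF signal peak to the perylene red reference peak."""
    corrected = bright - dark   # LED-on minus LED-off removes the ambient background
    return (peak_area(wavelengths, corrected, signal_nm) /
            peak_area(wavelengths, corrected, reference_nm))

# Synthetic spectra standing in for real spectrometer frames
wl = np.linspace(475, 770, 1024)
ambient = 50 + 0.02 * wl
signal = 300 * np.exp(-((wl - 534) / 20) ** 2) + 150 * np.exp(-((wl - 600) / 25) ** 2)
bright_frames = [ambient + signal + np.random.normal(0, 2, wl.size) for _ in range(10)]
dark_frames = [ambient + np.random.normal(0, 2, wl.size) for _ in range(10)]

# Mean of 10 LED-on / LED-off pairs, mirroring the averaging described in the text
R = fluorescence_ratio(wl, np.mean(bright_frames, axis=0), np.mean(dark_frames, axis=0))
print(f"fluorescence ratio R = {R:.3f}")
```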
To evaluate its performance, the fibre probe was dipped into different solutions of known pH (pre-determined using a commercial pH sensor) to measure its response in a static liquid, whereas for the pH response in the microfluidic device, a single syringe infusion pump (model KDS 100; Cole-Parmer, Eaton Socon, UK) was used to regulate the flow rate of the solution (of known pH) into the channel. The sensor was tested at the minimum and maximum flow rates achievable with the syringe infusion pump, i.e., 6 mL/h and 509 mL/h, obtained with a 60 mL and a 30 mL syringe, respectively. Figure 3b also shows the overall size of the setup, indicating that it is well suited for use as a portable system outside the laboratory; the figure shows that the overall setup can easily be arranged on a small (~80 cm × 45 cm) desk.

The spectrometer used was accessed using MATLAB code written by the authors, with an integration time of 400 ms being chosen (after some trial-and-error testing, to give a satisfactory signal level under ambient light conditions without saturating the spectrometer), and the fluorescence spectrum was monitored over the wavelength range from 475 to 770 nm (with a resolution of 1.4 nm). The LED was toggled by sending a pulse to an electromechanical relay via the spectrometer, and a 'dark spectrum' was collected with the LED off. This was subtracted digitally from the 'bright spectrum' to remove the effects of any interference due to the ambient light, thus enabling the sensor to be used effectively in the prevailing ambient light conditions. The mean of 10 recorded values each was used to create the data set employed in the determination of the pH value of the solution. (The source code developed is provided in the supplementary material.)

Characterization of the Optical Fibre Probe in a Static Liquid Sample

The typical fluorescence response of the sensor, exhibiting two peaks on excitation with light from a 375 nm LED source, is shown in Figure 4a for pH 6.5 and pH 8.5. The first peak, centred at a wavelength of ~534 nm, arises from the AAF dye, and this signal is the one that is responsive to external pH changes; hence, it is termed the 'signal peak'. The second peak, centred at ~600 nm, is due to the perylene red. Since it is less responsive to any change in the external pH, it can be used to create an optical 'reference signal', allowing other, non-pH-related fluctuations to be corrected. The large Stokes shift of the fluorescence peaks reduces interference from the excitation light and allows accurate measurements to be made without the need for any optical filters.

The ratio (R) of the intensities of the signal and reference peaks, which is the pH-dependent quantity (as given in Equation (4)), monitored as the pH was changed from 3.0 to 11, is plotted in Figure 4b. It can be seen from the figure that the sensor showed a negligible change in response at low pH values, i.e., up to pH 6.0; the response then increases approximately linearly with increasing pH before saturating again, giving an overall 'S'-shaped response.

Based on this result, the effective working pH range of this sensor can be defined as pH 6.0 to 8.5, a range useful for many applications, including monitoring conditions for the survival of aquatic life, which thrives in this pH range and beyond which disturbance is seen to the physiological systems of organisms [65]. This sigmoid response is similar to that of the low-pH sensor reported in previous work by some of the authors, where the working pH range was lower, 0.5 to 6.0 [39]. This new sensor, while extending the range of pH measurements, is thus complementary in its response to that previously developed device. The combination of these devices allows a wide range of pH measurements, from 0.5 to 8.5, using either two individual probes or by integrating their essential components into a combined probe. On fitting Equation (6) to the experimental data obtained, good agreement is seen (R² = 0.990), as shown in Figure 4b. The value obtained for pKa from the fitting process, 7.33 ± 0.1, represents the pH at which 50% of the dye population in the solution is protonated. The maximum sensitivity of the sensor, calculated using Equation (7), is found to be 0.2/pH unit, at pH = 7.5.
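A minimal example of the calibration step just described, fitting Equation (6) to (pH, R) pairs and reporting pKa and the maximum sensitivity, is sketched below in Python. The data points are invented for illustration, and, since Equation (7) itself is not reproduced in the extracted text, the sensitivity expression used here is simply the analytical derivative of Equation (6), which peaks at pH = pKa with the value (Rmax - Rmin)·ln(10)/4; this is an assumption consistent with, but not taken from, the original paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(ph, pka, r_min, r_max):
    # Equation (6): ratiometric response R(pH)
    return r_min + (r_max - r_min) / (1.0 + 10.0 ** (pka - ph))

# Invented (pH, R) calibration points, purely for illustration
ph_data = np.array([3.0, 5.0, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.5, 11.0])
r_data = np.array([0.41, 0.42, 0.45, 0.52, 0.66, 0.80, 0.93, 1.01, 1.07, 1.09])

popt, _ = curve_fit(sigmoid, ph_data, r_data, p0=(7.3, 0.4, 1.1))
pka, r_min, r_max = popt

# Maximum sensitivity dR/dpH of the fitted sigmoid, reached at pH = pKa
max_sensitivity = (r_max - r_min) * np.log(10) / 4.0

r_pred = sigmoid(ph_data, *popt)
r2 = 1 - np.sum((r_data - r_pred) ** 2) / np.sum((r_data - r_data.mean()) ** 2)
print(f"pKa = {pka:.2f}, Rmin = {r_min:.2f}, Rmax = {r_max:.2f}, "
      f"max dR/dpH = {max_sensitivity:.2f}/pH, R^2 = {r2:.3f}")
```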
The Rmin and Rmax values were taken at pH values of 3 and 11, and pKa was taken as 7.33 in the static liquid. The repeatability of the sensor scheme thus developed was studied by measuring its cyclical response between two extreme pH values, these being 3 and 11. Figure 5a shows the consistency of the response of the sensor. Figure 5b shows the rise time (t90-t10) and fall time (t10-t90) of the sensor; here t10 and t90 represent, respectively, the time taken to reach 10% of the lower and 90% of the higher value of the measured pH, and the first 100 min of the sensor response shows the rise and fall times. From this graph, the rise and fall times were determined to be 5.93 ± 0.94 min and 1.25 ± 0.17 min, respectively. This maximum response time is better than that of some previously reported sensors (e.g., Wallace et al., who report a response time of ≈8.33 min [66]). However, from a general sensor perspective, the rise time measured in this way is high; this can also be seen in comparison with earlier reported work by some of the authors on a low-value pH sensor, where the (rising) response time was ~25 s [39].

It seems likely that the rise time is affected by the greater thickness of the sensing layer (compared with that used in previous work), its affinity for water and the porosity of the coating. The thickness of the sensing layer is estimated to be around 3 mm. The thicker the layer, the stronger the signal but the longer the response time; different thicknesses have been used depending on the application, and in this work a stronger signal was more important for evaluating the performance of the system. The porosity of the layer was not the focus of this work. The polymer is hydrophilic and interacts well with water. The thickness of the coating used on a probe inevitably creates a 'trade-off', usually involving sensitivity, speed of response, stability and durability. Thus, while a thicker, sensitive coating on the fibre is desirable to provide stability and to prevent damage, the high response time seen in the static liquid may be too long for some measurement situations. Achieving that trade-off requires further optimization of the probe, and this will be investigated in future work (with a view to reducing the response time by minimizing the sensor coating thickness while maintaining satisfactory performance); in the longer term, a lower rise time is sought, to more closely match the sensor fall time.

An advantage in terms of cross-selectivity can be seen for pH sensors designed around the protonation-deprotonation mechanism in aqueous solution. Unlike other chemical sensors, where cross-selectivity is a critical issue (one that often affects the successful application of the system), sensors based on the protonation-deprotonation mechanism are not affected by the presence of other species, since the only parameter that causes a shift in the acid-base equilibrium is the pH change. It may be argued that ionic strength can affect pKa values, thus resulting in errors in pH determination; however, previous studies by the authors have shown that this type of polymer sensor has no sensitivity to ionic strength, even at high concentrations of NaCl [39].
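Returning to the response times quoted earlier in this section, the 10%/90% crossing-point definition of the rise and fall times can be turned into a small helper of the following form. The sketch below is illustrative only: the thresholds follow the t10/t90 definition given above, while the synthetic exponential trace and sampling interval are assumptions.

```python
import numpy as np

def response_time(t, y, rising=True, lo=0.10, hi=0.90):
    """Return the 10%-90% (rise) or 90%-10% (fall) transition time of a step response."""
    y_norm = (y - y.min()) / (y.max() - y.min())
    if rising:
        t_lo = t[np.argmax(y_norm >= lo)]   # first crossing of the 10% level
        t_hi = t[np.argmax(y_norm >= hi)]   # first crossing of the 90% level
        return t_hi - t_lo
    t_hi = t[np.argmax(y_norm <= hi)]       # first time the falling trace drops below 90%
    t_lo = t[np.argmax(y_norm <= lo)]       # first time it drops below 10%
    return t_lo - t_hi

# Synthetic rising step between the two extreme pH values, sampled every 5 s
t = np.arange(0, 1200, 5.0)
y = 1.0 - np.exp(-t / 150.0)                # placeholder exponential approach to the new level
print(f"rise time (t90 - t10) = {response_time(t, y):.0f} s")
```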
Characterization of the Optical Fibre Probe in a Microfluidic Channel with Fluid Flow After characterizing the sensor in a static liquid, its response was measured in a microfluidic channel with various flow rates being used. All data were taken after the device was primed and was free of bubbles. Priming was done filling the channels at a slower flow rate of circa 500 µL/min. This allows the liquid to absorb any trapped air along the internal walls, preventing the formation of trapped bubbles. The change in the fluorescence ratio, R (as described above) and monitored as a function of the change in pH, is shown in Figure 6a. As can be seen from the figure, the sensor showed the expected "S-shaped" response and the experimentally determined performance matches well with that described by Equation (6), with R 2 = 0.994. The value of pK a obtained was 7.68 ± 0.08, which is also close to that of the observed value in the static liquid (pK a = 7.33 ± 0.1). The maximum sensitivity of the sensor in the microfluidics using Equation (7) was found to be 0.16/pH unit, at a value of pH = 7.5. This result provides positive confirmation that a sensor of this design can be used effectively for pH measurements in a microfluidic channel, with a flowing, millilitre volume of liquid. A syringe containing a pre-determined pH solution was used to allow the dye to be pumped into the inlet during experiments. Solutions of varying pH were syringe-pumped into the channel separately, with the channel being washed and dried after every solution. This was performed to ensure that the probe is only reading the pH of interest. At the lowest volumetric flowrate used of 6 mL/h, the flow velocity resulting from the dimensions of the rectangular inlet channel was calculated to be approximately 0.08 m/s (assuming the density of water = 998 kg/m 3 ). With the distance from the inlet to the probe measuring around 250 mm, the slowest time required for the solution to reach the probe was estimated to be circa 3 s. For the highest volumetric flowrate used in Figure 6b of 509 mL/h, the solution will reach the probe in approximately 0.03 s. The repeatability and time response of the sensor in the microfluidic channel arrangement was measured over the range from pH 3 to pH 11, on three consecutive days, with different flow rates being used as shown in Figure 6b. On the same day of measurement, as well as across several subsequent days, the response of the sensor was highly repeatable. However, the response time of the sensor was seen to be dependent on the flow rate used. In general, the response rise time seems to reduce with increased flow rate, likely due to the constant refreshing of hydrogen ions around the sensor head, increasing the availability of hydrogen ions reaching the sensor, as opposed to sensing in a static condition or at lower flow rates. The exchange of ions near the active area of the probe itself affects the protonation and deprotonation rate of the fluorescent dye. The active sensing area in contact with the liquid is approximately 3 mm 2 in a 1 cm diameter well of approximately 30 µL in volume. Due to the location and position (on the top of the flow channel and in the middle) of the sensor tip, bubble formation around the sensor is unlikely once the channel has been properly primed. 
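The transit-time estimates quoted above follow from simple continuity (flow velocity = volumetric flow rate / channel cross-section). The short sketch below reproduces the approximately 0.08 m/s velocity and ~3 s transit time for the 6 mL/h case, and the ~0.03 s figure at 509 mL/h, using the stated 200 µm × 100 µm inlet cross-section and the quoted 250 mm inlet-to-probe distance; no other assumptions are introduced.

```python
# Transit time of the solution from inlet to probe, from continuity: v = Q / A
CHANNEL_WIDTH_M = 200e-6           # 200 um inlet channel width
CHANNEL_DEPTH_M = 100e-6           # 100 um inlet channel depth
INLET_TO_PROBE_M = 0.250           # 250 mm path from inlet to probe

def transit_time(flow_ml_per_h):
    q = flow_ml_per_h * 1e-6 / 3600.0             # volumetric flow rate in m^3/s
    area = CHANNEL_WIDTH_M * CHANNEL_DEPTH_M      # rectangular cross-section in m^2
    velocity = q / area                           # mean flow velocity in m/s
    return velocity, INLET_TO_PROBE_M / velocity  # (velocity, time to reach the probe)

for flow in (6.0, 509.0):
    v, t = transit_time(flow)
    print(f"{flow:5.0f} mL/h -> {v:6.3f} m/s, solution reaches probe in {t:6.3f} s")
```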
Changing the cylindrical hole in the microfluidic channel (used for integrating the fibre) to an inverted funnel-shaped design might help resolve this, as the latter design would allow more of the solution to come into contact with the sensor head. The exact reason for the presence of small oscillations in the signal in Figure 6b is still unclear and is the subject of ongoing work; however, they do not have a major influence on the measurements made.

Performance Comparison with Previously Reported Optical Chemical pH Sensors

A comparison of the performance of the sensor developed in this work with several representative, previously reported laboratory-based and commercially available optically based chemical pH sensors is shown in Table 1. With the exception of the commercial sensors, most of the pH sensors reported in Table 1 lend themselves to integration into the design of most microfluidic platforms, owing to their size and currently reported manufacturing methods. However, it is important to note that most of the available commercial optical chemical sensors are not readily compatible with microfluidic channels, and thus the development of such sensors is still an area of active research. It can be seen from the table that the response time of the current sensor is somewhat higher; however, the real strength of this sensor lies in its portability, use of ratiometric detection, ease of integration with microfluidics (thus reducing fabrication complexity), ease of multiplexing and ability to work in ambient light. As stated earlier, ongoing work seeks to reduce the response time by optimizing the thickness of the film and by changing the design of the microfluidic channel, as well as to increase the working pH range by changing or multiplexing the dye used, for example to a coumarin dye (working pH range: 0.5-6.0) [36]. The estimated average precision of the pH measurement (see Figure 6a) is ~±0.2 pH units (from the data reported).

[Table 1 entry, fragment: pH-sensitive PANI-PVA composite film as a stimuli-responsive layer; pH-responsive changes in absorption properties due to changes in molecular conformation [76].]

Conclusions

In the research we conducted, the pH-dependent fluorescence intensity of acrylamidofluorescein dye was exploited to develop a portable, optical fibre-based pH sensor, and its response was studied in a static liquid as well as in the dynamic flow conditions of a microfluidic channel, for which it was particularly suited. The results show that the sensor developed can be used both in a static measurement situation and in a microfluidic channel with an active flow rate. For the sensor scheme with the dye used, the current optimum working pH range (of 6.0-8.5) lies within the maximum range of 3-11 pH units; however, the pH range can easily be altered by changing the dye used, for example to a coumarin dye (working pH range: 0.5-6.0), or the probe can be used in parallel (multiplexed on a single optical fibre) with other sensors working on the same principle to cover a wider pH range if needed.
Importantly, the portable optical fibre sensor developed can easily be integrated with (and then separated from) the microfluidic channel without destroying either, allowing easy cleaning and reuse. In this way, the cost of ownership is reduced, opening the door for its use in a range of 'real-life' applications and demonstrating that an accurate in-situ evaluation of pH is possible in a standard microfluidic device, one that is applicable to a variety of future applications. Conflicts of Interest: The authors declare no conflict of interest.
v3-fos-license
2017-04-20T12:55:32.364Z
2008-07-06T00:00:00.000
7823876
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBYSA", "oa_status": "GOLD", "oa_url": "https://journals.library.ualberta.ca/jpps/index.php/JPPS/article/download/2968/2358", "pdf_hash": "a19f3eac192ec8fa1feef49c085dbf5832163f79", "pdf_src": "Grobid", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42187", "s2fieldsofstudy": [ "Biology" ], "sha1": "a19f3eac192ec8fa1feef49c085dbf5832163f79", "year": 2008 }
pes2o/s2orc
Discovery of Chlorogenic Acid-based Peptidomimetics as a Novel Class of Antifungals: A Success Story in Rational Drug Design

Life-threatening fungal infections have increased dramatically in recent decades in immunocompromised patients. An estimated 40% of all deaths from hospital-acquired infections are due to infections caused by opportunistic fungi. The current treatment options either cause serious toxicity or are becoming inactive against drug-resistant fungal strains. Thus, the discovery and development of new antifungal agents that are economically feasible, have excellent therapeutic value, and address the problems of toxicity and species resistance is very important. We have recently designed and synthesized a series of chlorogenic acid-based peptidomimetics using a structure-based methodology, starting with cyclic peptides of the candin class of antifungals. These novel and totally synthetic compounds exhibit promising antifungal activity against pathogenic fungi with very low toxicity against brine shrimp. The possible novelty in their mechanism of action and the economically feasible synthetic approach are the attractive features of this class of compounds that distinguish them from the antifungal agents already in use.

INTRODUCTION

Life-threatening fungal infections have increased dramatically in recent decades in immunocompromised patients such as those undergoing cancer chemotherapy, organ transplant recipients, and patients with AIDS (1-4). Candida spp. (including albicans and non-albicans) have been the major opportunistic pathogens (2,5,6). Aspergillus fumigatus (the causative pathogen in invasive pulmonary aspergillosis) is the leading cause of mortality in bone-marrow transplant recipients (7), while HIV-infected patients are particularly susceptible to mucosal candidiasis, cryptococcal meningitis, disseminated histoplasmosis, coccidioidomycosis, and Pneumocystis carinii pneumonia (8-10).

Treatment of systemic and invasive fungal infections is a major challenge in immunocompromised patients. Amphotericin B is still the gold standard for the treatment of most severe invasive fungal infections. However, it exhibits acute and chronic side effects, which may be reduced by newer formulations within liposomes (11), lipid complexes (12), and colloidal dispersions (13,14). Azole antifungals, including fluconazole, itraconazole, and the recently introduced posaconazole, are totally synthetic compounds with broad fungistatic activity against most yeasts and filamentous fungi. Despite being free of serious toxicity, they may produce endocrine-related side effects such as depletion of testosterone and glucocorticoids, resulting in gynecomastia and adrenal insufficiency (15,16). Another major limitation in the application of azoles, especially fluconazole, is the emergence of resistant fungal strains, including Candida spp. (albicans and non-albicans) (17,18) and Cryptococcus neoformans (19).

Since the late 1970s, inhibitors of 1,3-β-glucan synthase, the enzyme involved in the formation of 1,3-β-glucan (one of the main components of the fungal cell wall), have gained worldwide popularity as potential drugs for the treatment of systemic and invasive mycoses. These compounds, which are mainly natural products or their semisynthetic analogues, have been classified as lipopolysaccharides, such as papulacandins; lipoproteins, such as echinocandins (20); and acidic terpenoids, such as enfumafungin (21).
______________________________________ Corresponding Author: Mohsen Daneshtalab, School of Pharmacy, Memorial University of Newfoundland, St. John's, Canada, Email: mohsen@mun.caPapulacandins are no longer being utilized as antifungal agents since their antifungal activity is limited to Candida species and, most importantly, their in vitro activity does not translate to in vivo activity (22).Echinocandins on the other hand exhibit strong fungicidal activity in both in vitro and in vivo animal models (23,24).Echinocandins have been chemically modified to produce semisynthetic analogues with improved pharmacological properties.Among the structurally modified compounds in cyclic hexapeptide series, two semisynthetic derivatives, 366 (anidulafungin) Considering the mode of action, pharmacological, and toxicological profiles of the above classes of compounds (amphotericin B, disruption of fungal cell wall function; azoles, inhibition of fungal cell-membrane formation via inhibition of CYP 450 -dependent lanosterol 14-αdemethylase; and candin class of compounds, inhibition of fungal cell wall formation via inhibition of 1,3-β-glucan synthase), the candin class of antifungals exhibit the most promising target selectivity, as 1,3-β-glucan is only found in fungi not in mammalian cells.This imminently results in less physiologically toxic effects, as compared to the other two classes.Moreover, the semi-synthetically modified candins have rarely shown fungal resistance selection and are freely water soluble, the properties which are attractive for any clinically utilized antifungal drug.Despite the advantages in hand, the candin class of compounds has its own limitations as well.Firstly, due to their semi-synthetic nature, they are costly (29).Secondly, none of these compounds exhibit activity against Cryptococcus neoformans which is the causative agent for cryptococcal meningitis in AIDS patients, and is the major cause of opportunistic fungal mortality in these patients (23,24).This difference is due to selectivity of candin class of compounds against 1,3-β-glucan synthase which does not exist in C. neoformans, as the glucan structure of this fungus consists of 1,6-βglucan rather than 1,3-β-glucan, and is formed by the catalytic action of 1,6-β-glucan synthase (22,32).Finally, these compounds are only available as injectable forms due to their poor oral bioavailability.Considering the advantages and limitations of candin class of compounds, our group attempted the design and synthesis of peptidomimetic analogues of echinocandin B using structure-based methodology and HyperChem TM program. RATIONALE The enzyme 1,3-β-glucan synthase has at least two functional components: a catalytic component, which acts on UDP-glucose substrate, and a regulatory component, which binds to GTP (33,34).Considering the possible interaction of echinocandins (hexapeptides with symmetric structures consisting dipeptidic backbones of hydroxyproline-threonine at their south-eastern and north-western parts of the molecule) with the catalytic component of the enzyme and, as a result, inhibition of enzymatic activity, we attempted to design linear as well as cyclic peptidomimetic molecules that would possibly mimic the dipeptidic backbone of echinocandins.Using HyperChem TM program, we designed and synthesized representative compounds I and II (linear) and III and IV (cyclic) peptidomimetics and evaluated them for antifungal activity (Figures 2 and 3a,b) (35). 
The cell-based antifungal activity and 1,3β-glucan inhibitory evaluations of these new peptidomimetics revealed that none of these compounds were active.These results were in agreement with the general structural requirement for the enzyme inhibitory/antifungal activity of echinocandin class of compounds as described by Zambias et. al.,and Taft et. al. (36,37).Namely, the echinocandin lipophilic side chain at the northern part of the structure, and the homotyrosine moiety at the southern part, alongside the dipeptidic hydroxyproline-threonine, are the essential groups for the activity of echinocandin series.Also the orientation of homotyrosine ring with respect to the lipophilic side chain may be the determining factor for the antifungal activity of this class of compounds (Figure 4). Micafungin Chlorogenic acid is a natural product existing widely in many vegetables and plants.Structurally, it is a caffeoyl ester of quinic acid (Figure 5).Derivatives of chlorogenic acid have been reported to have interesting bioactivity such as inhibitory activity on HIV integrase (38) and protease (39).Chlorogenic acid can be considered a bioisostere of homotyrosine-hydroxyproline / theronine component located at the southern and south-eastern parts of echinocandin B. Based on the above documentations and considering bioisosterism, a molecule such as chlorogenic acid would be an ideal bioiososteric replacement for the homo-tyrosine-hydroxyproline / threonine (southern and south-eastern parts of echinocandin B) when coupled with a lipophilic side chain, which may result in formation of a series of novel and potential bioactive antifungal molecules. Considering the above facts, we hypothesized that the coupling of chlorogenic acid with appropriate lipophilic groups should result in compounds that are able to mimic the structural feature of homotyrosine-hydroxyproline/theroninelipophilic moieties, required for 1,3-β-glucan synthase inhibitory/antifungal activity of echinocandin class of compounds.In order to prove this hypothesis, 3-dimensional models of the energetically/stereochemically minimized echinocandin B, chlorogenic acid, and chlorogenic acid coupled with octyloxyaniline were established using HyperChem TM software.The 3 -points overlay of chlorogenic acid/echinocandin (Figure 6) and chlorogenic acid -coupled -octyloxyanilide / echinocandin (Figure 7) were determined. Indeed, the overlay matching of chlorogenic acid and its octyloxyanilide with echinocandin derivative was confirmed, as shown in Figures 6 and 7, which led us to design and synthesize different chlorogenic acid-based peptidomimetics with potential antifungal properties that are structurally novel. CHEMISTRY AND STRUCTURE-ACTIVITY RELATIONSHIPS The synthesis of these chlorogenic acid derivatives was reported previously (40) and is depicted in Scheme 1. 
Namely, the diacetonide derivative of chlorogenic acid was condensed with 4-(octyloxy)aniline to obtain the corresponding amide intermediate which upon deprotection yielded compound 1.Both compound 1 and its diacetonide analogue showed reasonable antifungal activity.To investigate the influence of physicochemical properties on antifungal activity, selected amino acids were introduced into the structures.For the synthesis of these compounds, protected amino acid derivatives were condensed with 4 -(octyloxy) aniline to obtain the corresponding protected amides with a lipophilic side-chain (2).Deprotection of these compounds under NHEt 2 followed by their reaction with the acetonide of chlorogenic acid afforded the corresponding protected amides (3).Acid hydrolyses of these protected acetonides under controlled condition resulted in the formation of the corresponding amine-protected derivatives, which upon further deprotection using 90% TFA yielded the desired peptidomimetics of chlorogenic acid (4). To further investigate the SAR among this class of compounds, the corresponding dihydro derivatives (5) and monohydroxyphenyls (6) were also synthesized and tested for antifungal activity and toxicity (M.Daneshtalab, unpublished data). The synthesized compounds were evaluated for in vitro antifungal activity against Candida albicans ATCC90028, Cryptococcus neoformans ATCC32045 and Aspergillus fumigatus ATCC13073, and toxicity using "brine-shrimp lethality assay, " and the results were reported previously (40).Overall, chlorogenic acid derivatives (3 and 4) exhibited better antifungal activity or less toxicity than those of chlorogenic acid analogues (5 and 6).This suggests that the structural modification on the caffeoyl group, such as saturating the double bond or reducing the number of hydroxyl groups results in reduction of antifungal activity.Significant antifungal activity was observed in most of the chlorogenic acid derivatives (3 and 4).The MIC on Cryptococcus neoformans of nearly all these chlorogenic acid derivatives were as low as 1-4 µg/ml, except the compound possessing a free carboxylic acid group in its structure which had MIC of 16 μg/ml.It has been reported that incorporation of an amino group, such as aminoproline residue, into the ring of echinocadin analogues leads to improvement of antifungal potency (41).Similar effects were observed in the chlorogenic acid derivatives that are reported here.Namely, compound 4d with a free amino group in its structure showed good activity against all the fungi tested, including A. Fumigatus (MIC of 16 μg/ml).The MIC of the chlorogenic acid derivatives against C. Albicans varied from 2 to >64 µg/ml.All the acetonide compounds showed weaker inhibitory activity against C. Albicans than the corresponding compounds with free hydroxyl groups, suggesting that the two hydroxyl groups in the quinic acid part are essential for the activity. The general toxicity of these compounds was assessed using brine shrimp lethality assay according to the reported method (42) and is reported in detail in our previous paper (40).Most of the synthesized compounds exhibited moderate to very low toxicity against brine shrimp. Based on the selective activity of these compounds against Cryptococcus neoformans, weak activity against Candida albicans, and very weak or no activity against Aspergillus fumigatus, we hypothesize that these compounds may have selective inhibitory activity against 1,6-β-glucan synthase (which is mainly found in C. 
neoformans and partially in Candida species). Another possible mechanism is an increase in the permeability of the fungal cell wall via mimicking the action of bactericidal/permeability-increasing protein, a mechanism that has been reported for the antifungal activity of some peptides that are structurally related to chlorogenic acid (43).

CONCLUSION

A systematic structural modification of the cyclic peptides of the candin class of antifungals resulted in the identification of a novel class of small-molecule chlorogenic acid-based peptidomimetics with an impressive antifungal/toxicity profile and short synthetic routes. Based on the in vitro activity/toxicity profile, compound 4a has been selected as the lead compound for further structural modifications. We expect that sequential structural modification of compound 4d, through changing the amino acid components that are coupled with the quinic acid part of chlorogenic acid, the lipophilic side chain (octyloxyaniline), and the quinic acid moiety, may lead to the discovery of a preclinical lead compound with an optimum activity/toxicity profile. The results obtained in our preliminary investigation of this novel class of compounds strongly confirm their potential as new leads for the discovery and development of novel mechanism-based antifungal agents.
v3-fos-license
2019-09-17T03:09:10.749Z
2019-09-05T00:00:00.000
202865308
{ "extfieldsofstudy": [ "Medicine", "Biology" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://www.biorxiv.org/content/biorxiv/early/2019/09/05/759290.full.pdf", "pdf_hash": "b2c0e0a6997e78b67821e1eb9ea93f9650e0eabf", "pdf_src": "BioRxiv", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42188", "s2fieldsofstudy": [ "Biology", "Psychology" ], "sha1": "9d167aea86645150929f01b58fe7d024e9b1aed0", "year": 2019 }
pes2o/s2orc
Effects of Transcranial Direct Current Stimulation on GABA and Glutamate in Children: A Pilot Study

Transcranial direct current stimulation (tDCS) is a form of non-invasive brain stimulation that safely modulates brain excitability and has therapeutic potential for many conditions. Several studies have shown that anodal tDCS of the primary motor cortex (M1) facilitates motor learning and plasticity, but there is little information about the underlying mechanisms. Using magnetic resonance spectroscopy (MRS), it has been shown that tDCS can affect local levels of γ-aminobutyric acid (GABA) and Glx (a measure of glutamate and glutamine combined) in adults, both of which are known to be associated with skill acquisition and plasticity; however, this has yet to be studied in children and adolescents. This study examined GABA and Glx in response to conventional anodal tDCS (a-tDCS) and high-definition tDCS (HD-tDCS) targeting M1 in a pediatric population. Twenty-four typically developing, right-handed children aged 12-18 years participated in five consecutive days of tDCS intervention (sham, a-tDCS or HD-tDCS) targeting the right M1 while training in a fine motor task (Purdue Pegboard Task) with their left hand. Glutamate and GABA were measured before and after the protocol (at day 5 and 6 weeks) using conventional MRS and GABA-edited MRS in the sensorimotor cortices. Glutamate measured in the left sensorimotor cortex was higher in the HD-tDCS group compared to a-tDCS and sham at 6 weeks (p = 0.001). No changes in GABA were observed in either sensorimotor cortex at any time. These results suggest that neither a-tDCS nor HD-tDCS locally affects GABA or glutamate in the developing brain, which may therefore respond differently than has been reported in adults.

Introduction

Transcranial direct current stimulation (tDCS) is a form of non-invasive brain stimulation in which a weak electrical current is passed between two electrodes placed on the scalp. Using various tDCS montages, cortical excitability can be shifted towards excitation (anodal tDCS) or inhibition (cathodal tDCS). Placing the anode electrode over M1, for instance, typically increases cortical excitability in M1 (1)(2)(3). Previous research suggests that changes in excitability outlast the stimulation session by up to 90 minutes (2,4). The prolonged changes in cortical excitability and promising changes in behavioral outcomes, combined with its simple application and low cost, make tDCS attractive as a possible therapeutic tool for a range of clinical conditions (5). For example, tDCS has been suggested to improve symptoms and/or assist in rehabilitation for many neurological disorders with minimal side effects (6), including migraine (7), stroke (8), Parkinson's disease (9), pain disorders (10) and neurodegenerative disorders (11), as well as psychiatric disorders including depression (12). High-definition tDCS (HD-tDCS) is a newer, more focal form of tDCS that uses arrays of smaller electrodes to improve stimulation localization (13). Most typically used is the 4 x 1 configuration, where a central electrode, which determines montage polarity, is placed over the target cortical region, and four outer electrodes (arranged as a ring) act as the reference electrodes. The radii of the surrounding reference electrodes define the region undergoing modulation (14). This configuration has been shown to modulate excitability in a smaller, more specific region compared to conventional tDCS (14,15).
In addition to a more focussed current, its effects on patterns of cortical excitability in the M1 outlast those induced by conventional tDCS, as quantified by motor evoked potentials in response to stimulation (16). Studies support its tolerability in both healthy subjects and patients at intensities up to 2 mA for up to 20 minutes (15)(16)(17). Few studies have investigated tDCS in children, despite its potential (18)(19)(20)(21). tDCS administered in a multiday paradigm to the M1 of healthy children while performing a motor task demonstrated greater increases in motor skill compared to sham and improvements are retained 6 weeks later (22,23). These findings suggest the potential utility of tDCS as a therapeutic tool in children with motor impairments but the biological mechanisms behind these effects remain unknown (24). Adult studies using magnetic resonance spectroscopy (MRS) to measure regional brain metabolites typically show a decrease in GABA (4,25,26) and an increase in Glx (glutamate and glutamine in combination) (4,26,27) in the sensorimotor cortex following M1 anodal stimulation. Both GABA, a major inhibitory neurotransmitter, and glutamate, a major excitatory neurotransmitter, are mediators in long-term potentiation (28,29) and have been associated with behavioral changes following anodal tDCS, quantified as changes in task performance (4,25,30). However, it is unknown if these finding translate to a pediatric population and how long these changes in metabolites persist. Conventional MRS at 3T measures glutamate, though it is often reported as Glx, representing the combination of glutamate and glutamine as their spectra are highly overlapped, making it difficult to reliably resolve these two signals. GABA, on the other hand, is at low concentration and its signal is overlapped by more abundant metabolites and therefore requires editing for accurate measurement (31). GABA-edited MEGA-PRESS, selectively manipulates the GABA signal at 3 ppm by applying an editing pulse to the coupled GABA signal at 1.9 ppm in half of the averages (ON), which are interleaved with averages in which the editing pulse is applied elsewhere not coupled to GABA (OFF). The difference spectrum is acquired by subtracting the ON from the OFF, which removes all peaks not affected by the 1.9 ppm editing pulse (specifically the 3 ppm creatine peak), revealing the GABA signal at 3 ppm. In this study, GABA-edited and conventional MRS were used to investigate changes in GABA and Glx in response to anodal tDCS (a-tDCS) and anodal HD-tDCS in a pediatric population. By observing metabolite changes in the targeted right sensorimotor cortex and the contralateral left sensorimotor cortex, we aimed to gain insight into the metabolite changes induced by tDCS both after stimulation has concluded and at 6 weeks follow up, with the overall goal of better understanding the mechanism by which tDCS modulates motor learning in the developing brain. Based on the adult literature, we expected GABA to decrease following tDCS and at 6-weeks follow up we expect metabolites to return towards baseline with similar results observed for both anodal and high definition tDCS groups. Materials and Methods This study was a component of the Accelerated Motor Learning in Pediatrics (AMPED) study, a randomized, double-blind, single-center, sham-controlled intervention trial registered at clinicaltrials.gov (NCT03193580) with ethics approval from the University of Calgary Research Ethics Board (REB16-2474). 
Upon enrolment, participants and guardians provided written, informed consent or assent and were screened to ensure they met safety criteria for non-invasive brain stimulation and MRI scanning. Participants were blinded to the experimental group to which they were assigned, and only the investigator administering stimulation was aware of the group until all data were collected. Group assignment was only revealed for data analysis after the study was completed. Additional details regarding the parent study design, recruitment and primary motor learning outcomes can be found in Cole and Giuffre et al. (23).

Experimental Design

Twenty-four typically developing right-handed participants aged 12 to 18 were recruited through the Healthy Infants and Children Clinical Research (HICCUP) Database. The Edinburgh Handedness Inventory was used to confirm right-hand dominance with a laterality index -28. For the HD-tDCS group, a 10:20 EEG cap was used to center the anodal electrode on the right M1, after identifying the location with single-pulse TMS as above. The four cathodes were placed ~5 cm away in a 4 x 1 configuration (Fig 1b) using a 4 x 1 HD-tDCS Adaptor and a SMARTscan Stimulator (Soterix), as described previously (15,34,35). For the active stimulation conditions (a-tDCS and HD-tDCS), current was ramped up to 1 mA over 30 seconds and remained at 1 mA for 20 minutes. The current was then ramped back down to 0 mA over 30 seconds. For the sham stimulation condition, current was ramped up to 1 mA over 30 seconds and then immediately ramped back down to 0 mA over 30 seconds. After 20 minutes, current was again ramped up to 1 mA and then back down to 0 mA over 30 seconds. This procedure is used to mimic the sensations associated with active stimulation and has been previously validated (36). During the 20 minutes of stimulation (or sham), participants performed the Purdue Pegboard Task with their left hand (PPT L ) three times, every 5 minutes.

Motor Assessments

The motor assessment was the Purdue Pegboard Task (PPT) (37). This test uses a rectangular board with two sets of 25 holes running vertically down the board and four concave cups at the top of the board that contain small metal pegs. Subjects are asked to remove pegs from the cups and place them in the holes one at a time, as quickly as possible. This task challenges hand dexterity and coordination. A score is given as the number of pegs successfully placed in the holes in 30 seconds with the left hand (PPT L ). Secondary assessments were the performance of this task with the right hand (PPT R ) or bimanually (PPT LR ). Changes in score are reported as ΔPPT.

MRS Acquisition

Spectroscopy data were collected before the tDCS intervention (baseline), after 5 days of tDCS paired with motor training, and at 6 weeks after tDCS in all 24 subjects on a 3T GE MRI scanner.

MRS data analysis

GABA data were analyzed using GANNET 3.0 (39) software in MATLAB R2014a (The Mathworks, Natick, MA, USA), including retrospective frequency and phase correction and correction for voxel tissue content, assuming grey matter contains twice as much GABA as white matter (i.e., α = 0.5, as per the literature) (40). In this experiment, we assumed sensorimotor voxels were composed of 40% grey matter and 60% white matter in the GABA tissue correction (41). Conventional PRESS data were corrected for frequency and phase drift using the FID-A toolkit (42) and then analyzed using LCModel (43) with basis sets developed from LCModel.
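The grey-/white-matter weighting mentioned above can be illustrated with a minimal Python sketch. This is not Gannet's actual implementation (which also models water visibility and relaxation per compartment); it only shows the idea of an α-type correction under the stated assumption that grey matter holds twice the GABA of white matter (α = 0.5), applied to the nominal 40% GM / 60% WM sensorimotor voxel used in this study.

```python
def tissue_corrected_gaba(gaba_measured, f_gm, f_wm, alpha=0.5):
    """
    Illustrative alpha-type tissue correction (not the exact Gannet routine).
    Assumes white-matter GABA = alpha * grey-matter GABA and no GABA in CSF,
    and rescales the measurement to the value expected from a pure grey-matter voxel.
    """
    return gaba_measured / (f_gm + alpha * f_wm)

# Nominal sensorimotor voxel composition assumed in this study: 40% GM, 60% WM
print(tissue_corrected_gaba(gaba_measured=2.1, f_gm=0.40, f_wm=0.60))
```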
Metabolite levels from LCModel were tissue-corrected using the Gasparovic approach (44) and the CSF voxel fraction, accounting for the negligible metabolites present in CSF. As a confirmatory analysis, metabolite levels referenced to creatine were also examined. Partial correlations controlling for intervention were used to examine the relationship between changes in metabolites and changes in motor assessment performance before and after stimulation, and 6 weeks after stimulation had concluded. Initially these correlations were pooled across all groups and follow-up analyses were performed in each group as appropriate. Population Characteristics Twenty-four typically developing children (mean age 15.5 ± 1.7 years; 13 females and 11 males) completed all phases of the study with no drop-outs. Due to technical difficulties, one participant did not have GABA or Glx data available in both sensorimotor cortices at the post-intervention timepoint. Population demographics are shown in Table 1. Age, sex and laterality index did not differ significantly between groups (p > 0.3 for all parameters). Data Quality The GABA-edited spectra from the right and left sensorimotor cortices from all time points are shown in Fig 2b; the grey band shows a single standard deviation range across all data and the black line is the average of all data. All data, both GABA-edited and conventional PRESS, were assessed for quality by visual inspection as well as a CRLB threshold of 20%. One PRESS dataset was excluded due to poor data quality; the remaining spectra were of high quality, with a mean SNR of 41.4 ± 6.3, all FWHM water < 15 Hz, and a mean FWHM water of 6.01 ± 1.92 Hz. MEGA-PRESS GABA data were also of high quality across all data sets: all fit errors < 10%, mean fit error 4.59 ± 1.21%, all FWHM Cr < 10 Hz, mean FWHM Cr 9.57 ± 0.92 Hz. Generally, spectra with fit errors below 12% are deemed to be of sufficient quality (39). Post-hoc assessments by intervention group showed this relationship was maintained in the anodal tDCS group only (r = 0.864, p = 0.006; Fig 4d). Fig 4. Relationship between changes in metabolite concentration and motor performance. Correlation between change in metabolite concentration (%ΔGlx and %ΔGABA) and change in Purdue Pegboard Task score post-intervention (ΔPPT_L), controlling for intervention group and age. Left sensorimotor cortex GABA is significantly correlated with ΔPPT_L for the pooled intervention groups (grey line). This relationship is also observed in the anodal tDCS intervention group (red). No significant relationship was observed between ΔPPT_L and changes in GABA in the right sensorimotor cortex (r = -0.065, p = 0.784; Fig 4c). Additionally, no significant relationship was seen between changes in PPT score and changes in Glx in the right (Fig 4a) or left (Fig 4b) sensorimotor cortex (p > 0.05). Discussion Several adult studies have shown that single-session (43,44) or multi-session (30,45) tDCS paired with training in a motor task is associated with improvements in that task, and that these improvements in performance are greater than with motor training alone (i.e., sham tDCS). The same is observed in pediatric studies (22,23); however, results may differ slightly in terms of the phase of learning affected by stimulation. Results in children suggest that tDCS facilitates online learning (22), while in adults evidence suggests tDCS enhances learning primarily through offline effects (30). 
GABA and glutamate are involved in learning (24,28,46) and have both been observed to change in response to anodal tDCS in adults (4,(24)(25)(26)46,47). This study examined changes in GABA and Glx in response to right M1 anodal tDCS and HD-tDCS in a pediatric population. Metabolites were measured at baseline, after a 5-day tDCS and motor learning intervention (post-intervention) and at 6 weeks follow-up. To our knowledge, this is the first investigation of metabolite changes in response to tDCS in a typically developing pediatric population. Additionally, this is the first-time metabolites have been measured in a control population after a multiday protocol with a followup assessment. Previous studies in adults have illustrated that GABA decreases (33,46) and glutamate increases (47), with skill acquisition and improved function in the region responsible for the skill execution, the M1. It has been suggested that tDCS facilitates changes in GABA and glutamate to augment learning. Studies conducted in adults have shown anodal tDCS increases sensorimotor glutamate (4,26,27) and decreases GABA (4,25,26,48); however, others have failed to replicate these findings. Similarly, we did not see decreased GABA and increased Glx at the site of stimulation, though we did see contralateral changes. Our results potentially indicate the developing brain responds differently to tDCS compared to the adult brain. Post-Intervention Changes in GABA and Glx Following five days of tDCS and motor training there were no significant changes in metabolite levels in either the right or left sensorimotor cortex, though trends toward decreased left sensorimotor GABA (contralateral to the tDCS target) in the a-tDCS group were seen. Adult literature using healthy controls suggests acute decrease in GABA local to the tDCS target (4,25,26,48). Similarly, participants with a neurodegenerative condition who followed a protocol of 15 a-tDCS sessions also showed decreased GABA in the tissue targeted with a-tDCS (11). Given the contrast of our results and those in the literature, we suggest that the pediatric brain responds differently to tDCS. In healthy adults, GABA and glutamate in the motor cortex work together to maintain an excitation-inhibition balance that is crucial for plasticity (49). It has been suggested that this balance of GABA and glutamate can be shifted to a relative optimum level that is thought to mediate behavioral outcomes (50). It is possible that in the developing brain, this excitation/inhibition balance is more dynamic while in the adult brain it is relatively static. When an external stimulus is introduced, like tDCS or a foreign motor task, the adult brain shows a shift to facilitate plasticity while the pediatric brain was already in its "plastic state". There is also evidence describing the pediatric brain as being hyperexcitable (19) which may suggest it has a lower concentration of GABA (51,52), and therefore less dynamic range to reduce GABA compared to the adult brain where increased GABAergic inhibition is necessary to refine already acquired skills. Secondly, transcallosal inhibitory processes (53) may have a more pronounced effect in the pediatric brain. Here we show trends towards decreased GABA in the left sensorimotor cortex, contralateral to the site of stimulation, as opposed to changes in the site of stimulation (right cortex). This suggests lateralization of motor learning in the left dominant cortex as previously described by Schambra et al (54). 
The impact of transcallosal inhibition is also seen in pediatric studies applying tDCS contralateral to stroke lesions in an effort to augment motor learning of the affected hemisphere (55,56). According to pediatric models of anodal tDCS, the current appears to travel through the motor fibers of the corpus callosum into the contralateral hemisphere (56). However, the same mechanism is not expected to hold for HD-tDCS, which has a more focal current. Finally, as mentioned above, tDCS may act on different phases of learning in children compared to adults; therefore, a paradigm in which GABA and glutamate changes are expected to appear shortly after stimulation may not provide the appropriate time window to detect changes. Similarly, it is possible that the metabolic response to stimulation changes with applications over consecutive days. In this study, we suspect participants may have transitioned into a phase of learning that requires less plasticity, so that the cortex was no longer responding to tDCS with the predicted GABA and Glx changes at five days, when our measures were taken. Adult literature suggests that the changes in GABA and glutamate measured by MRS in response to learning vary with time (46,57), and it is possible that a ceiling of PPT skill, and also of metabolite change, was reached before our MRS measurements were taken. Some studies report Glx increases after anodal tDCS and suggest that tDCS may involve the NMDA pathway (27). Stagg et al. also report changes in Glx in response to cathodal tDCS (4). They propose that MRS measures of Glx lack the sensitivity to consistently detect Glx changes following tDCS (4,25). Several other studies report an absence of significant changes in Glx in response to a-tDCS, with little speculation as to why (4,26,58,59,61). 6-Week Follow Up in GABA and Glx At 6 weeks follow up, it was expected that metabolites would return to baseline to maintain homeostatic balance in the brain after the initial phases of skill acquisition had concluded, while motor skill improvements were retained. However, we observed increased Glx in the left sensorimotor cortex at 6 weeks, specifically in the HD-tDCS group. Relationship Between Changes in Metabolites and Changes in Motor Performance We found a significant, positive relationship between change in left sensorimotor GABA (the cortex contralateral to stimulation) and improvement in task performance by the left hand after the tDCS intervention and training, further supporting the above-mentioned callosal hypothesis. Those participants who experienced a greater positive change in GABA concentration in the hemisphere contralateral to stimulation (left motor cortex) showed a greater improvement in PPT score over the 5-day stimulation and training period. This relationship is seen specifically in the a-tDCS group only, suggesting that anodal stimulation induces a contralateral inhibition that does not occur with HD-tDCS or in normal (sham group) learning, driving an enhanced improvement in PPT score. No relationship was observed between changes in Glx and task performance post-intervention, nor between GABA or Glx and change in PPT score 6 weeks after stimulation and training. These results are in accordance with adult studies that report no significant relationship between change in motor skill and concentration of Glx in the motor cortex contralateral to the hand executing the task (33). However, adult studies have reported a relationship between task improvement and GABA changes in the tDCS-targeted cortex (i.e. right sensorimotor GABA changes with left hand training and task performance) (25,33). 
This dissimilarity suggests that neurochemistry in the pediatric and adult brain responds in different ways during motor learning, warranting further investigation. Conclusions Non-invasive stimulation is an expanding area of research, with modalities similar to tDCS being investigated as therapies for a range of disorders including migraine, pain and stroke (6,7,9,11,12,18,67). While these studies have suggested that non-invasive brain stimulation can improve outcomes, the underlying physiological changes behind these responses are not well understood, particularly in the developing brain. This study aimed to shed light on the metabolite changes induced by M1 anodal tDCS in conjunction with a motor training paradigm. We investigated changes in GABA and glutamate concentrations following 5 consecutive days of tDCS, comparing conventional anodal tDCS, HD-tDCS and sham. Unexpectedly, we did not observe significant changes in metabolites at the site of stimulation after the 5-day tDCS intervention or 6 weeks after the intervention. It is possible that changes in metabolites occur immediately after stimulation and learning and that this effect diminishes over the 5 days of stimulation as skill level improves. However, we suggest the pediatric brain responds differently to tDCS compared to adults. In particular, we suggest contralateral modulation of learning and metabolites has a greater role in the pediatric brain, highlighting the need for further study of the effects of non-invasive stimulation on the pediatric brain specifically. Furthermore, we also show that the response to HD-tDCS is different compared to a-tDCS, based on the observation of increased glutamate in the left sensorimotor cortex 6 weeks after stimulation specifically in response to HD-tDCS. Further investigation into the effects of HD-tDCS is needed to determine its efficacy on motor learning. Funding Funding for this project was received from the Behaviour and the Developing Brain Theme of the Alberta Children's Hospital Research Institute (ADH), the Hotchkiss Brain Institute (ADH), the University of Calgary and the Canadian Institutes of Health Research (AK).
Dosimetric Comparison: Volumetric Modulated Arc Therapy (VMAT) and 3D Conformal Radiotherapy (3D-CRT) in High Grade Glioma Cancer—Experience of Casablanca Cancer Center at the Cheikh Khalifa International University Hospital Background: Intensity Modulated Radiation Therapy (IMRT) is currently employed as a major arm of treatment in multiforme glioblastoma (GBM). The present study aimed to compare 3D-CRT with IMRT to assess tumor volume coverage and OAR sparing for the treatment of malignant gliomas. Materials and methods: We assessed 22 anonymized patient datasets with high-grade glioblastoma who had undergone post-operative Intensity Modulated Radiotherapy (IMRT) and 3D Conformal Radiotherapy (3D-CRT). This study compares RapidArc and 3D-CRT treatment plans to determine which technology significantly improves the dosimetric parameters. Results: Plans were assessed by reviewing the coverage of the PTV using mean, maximum and minimum doses, while the OAR doses were compared using the maximal dose for each, as set out in the QUANTEC dose limits. Conclusion: The use of IMRT seems a superior technique as compared to 3D-CRT for the treatment of malignant gliomas, having the potential to increase the dose to the PTV while sparing OARs optimally. Introduction Intensity Modulated Radiation Therapy (IMRT) and Three Dimensional Conformal Radiation Therapy (3D-CRT) are both very promising techniques for the treatment of brain tumors. The standard of care for patients with multiforme glioblastoma is represented by a combination of surgical resection, adjuvant radiation therapy (RT), and chemotherapy; despite this aggressive multimodal strategy, the prognosis remains poor. Radiotherapy is usually delivered with Three Dimensional Conformal Radiation Therapy (3D-CRT) in 1.8 - 2 Gy per fraction to a total dose of 59.4 - 60 Gy. However, considering that GBM may lie in close proximity to several organs at risk, radiation treatment planning may lead to sub-optimal target coverage. In the attempt to improve clinical outcomes, Intensity Modulated Radiation Therapy (IMRT) has been increasingly evaluated and exploited for the treatment of GBM. Overall, from the dosimetric standpoint, Three Dimensional Conformal Radiation Therapy (3D-CRT) and IMRT seem to provide similar results in terms of target coverage, while IMRT, regardless of the employed technique, is better in terms of dose conformity, in reducing the maximum dose to the organs at risk (OARs) and in healthy brain sparing. Purpose: We aimed to evaluate the dosimetric interest of Volumetric Modulated Arc Therapy (VMAT) using RapidArc®, the Varian solution, for the treatment of patients with multiforme glioblastoma close to organs at risk. We report the results of a retrospective study of 22 patients treated at the Casablanca Cancer Center of Cheikh Khalifa International University Hospital. Materials and Methods Through a retrospective study, we assessed 22 patients with high-grade glioblastoma who had undergone post-operative Intensity Modulated Radiotherapy (IMRT) and 3D Conformal Radiotherapy (3D-CRT). The patients' characteristics are summarized in Table 1. We included patients with tumors in a variety of locations. The cases were selected to be representative of four dosimetric scenarios: in the first, there were no overlaps between OARs and the PTV; the second, the third and the last scenarios were characterized by the superposition with the PTV of 1, 2 and 3 OARs, respectively. 
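The four dosimetric scenarios described above amount to counting how many OAR contours overlap the PTV. The sketch below is a minimal, hypothetical illustration of that bookkeeping on boolean voxel masks; the structure names, grid and overlaps are invented and this is not the planning-system logic actually used in the study.

```python
import numpy as np

def scenario_index(ptv_mask, oar_masks):
    """Number of OARs whose volume overlaps the PTV (0-3 in this study's grouping)."""
    return sum(int(np.any(ptv_mask & oar)) for oar in oar_masks.values())

# Toy 1D stand-ins for voxelised structures on a common grid.
grid = np.zeros(1000, dtype=bool)
ptv = grid.copy()
ptv[400:600] = True
oars = {
    "brainstem":   np.roll(ptv, 150),   # partially overlapping
    "chiasm":      np.roll(ptv, 450),   # disjoint
    "optic_nerve": np.roll(ptv, -120),  # partially overlapping
}
print("dosimetric scenario:", scenario_index(ptv, oars), "OAR(s) overlapping the PTV")
```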
To improve delineation of target volumes and normal tissues, planning computed tomography (CT) and post-surgical magnetic resonance imaging (MRI) were automatically co-registered by using the dedicated treatment planning system (TPS). A visual check was performed at the end of the registration process: in case the results were not satisfactory, the radiation oncologist manually edited the co-registration. MRI scans were acquired for diagnostic purposes without the immobilization device. Gross tumor volume (GTV) was defined as the resection cavity plus any contrast-enhancing area on a post-gadolinium T1-weighted MRI. The clinical target volume (CTV) was obtained by adding a three-dimensional 2 cm expansion to the GTV. The physicians manually edited the CTV to respect natural anatomical barriers (bone, tentorium, falx). The CTV was then expanded by 0.5 cm to create the PTV. Results and Discussion Postoperative radiotherapy with chemotherapy has been standard treatment for newly diagnosed glioblastoma, as it has shown significant survival benefits after surgery. Unfortunately, HGG can develop in different sites of the brain, and some lesions can be very close to several critical organs at risk (e.g. optic nerves, brainstem, chiasm and retina), where irradiation can cause late radiation toxicity including neurocognitive deficits and necrosis. Therefore the potential for using the best technique to ensure maximal coverage of the predicted target volume while simultaneously reducing the radiation dose to OARs is discussed. Our results indicate that, as compared with 3D-CRT, IMRT showed significant reductions in the mean dose delivered to the brainstem, optic chiasm, normal brain and optic nerve; moreover, IMRT also improved predicted target volume coverage and dose homogeneity over 3D-CRT. Several comparative dosimetric studies [4] have been performed over the last years and nearly all, with few exceptions [4], suggest that IMRT techniques (static, volumetric, rotational) lead to a reduction of doses to OARs and to the healthy brain tissue [5] surrounding the PTV, while maintaining target coverage without significant variations. MacDonald et al. [6] and Zach et al. [7] highlighted no differences in terms of PTV V95%. At the same time, in their comparative dosimetric study, Wagner et al. [8] and Thilmann et al. [3] pointed out that IMRT achieved better target coverage with respect to 3D-CRT, scoring a V95% improvement of 13.5% and 13.1% respectively. This advantage was much more significant when the PTV was in proximity of OARs [8]. MacDonald et al. [6] compared the dosimetry of Intensity Modulated Radiation Therapy and Three Dimensional Conformal Radiation Therapy techniques in patients treated for high-grade glioma. A total of 20 patients underwent computed tomography treatment planning in conjunction with magnetic resonance imaging fusion. Prescription dose and normal-tissue constraints were identical for the 3D-CRT and IMRT plans. As compared with 3D-CRT, IMRT significantly increased the tumor control probability (p ≤ 0.005) and lowered the normal-tissue complication probability for the brain and brainstem (p < 0.033). Intensity Modulated Radiation Therapy improved target coverage and reduced the radiation dose to the brain, brainstem, and optic chiasm. With the availability of new cancer imaging tools and more effective systemic agents, IMRT may be used to intensify tumor doses while minimizing toxicity, therefore potentially improving outcomes in patients with high grade glioma. 
Recently, most radiotherapy technical platforms offer a choice among these different techniques; it is therefore important to define the parameters that will guide the final treatment decision, following a comparative dosimetric study. IMRT planning has demonstrated its superiority over Three Dimensional Conformal Radiotherapy with regard to the preservation of organs at risk. Conclusions IMRT seems a superior technique as compared to 3D-CRT: in our study it allowed better target dose coverage and improved the homogeneity of the dose received by the predicted target volume, while maintaining equivalent OAR sparing and reducing healthy brain irradiation. What is already known on this topic: • The standard of care in GBM is represented by a multimodal strategy consisting of surgical resection, adjuvant radiation therapy (RT), and chemotherapy. • Radiotherapy can be challenging as the target volume is surrounded by critical organs at risk. What this study adds: • The goal of this study was to compare 3D-CRT and IMRT in GBM patients according to the dosimetric impact of the different scenarios on the irradiation of critical structures.
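The plan evaluation summarised above (PTV coverage via mean, maximum and minimum dose plus V95%, and OAR maximum doses checked against QUANTEC-style limits) can be sketched as a few DVH-style metrics. The example below operates on an invented flattened dose array and boolean structure masks; the 54 Gy brainstem threshold is shown only as a commonly quoted QUANTEC-type limit, not necessarily the constraint used in this study.

```python
import numpy as np

def dvh_metrics(dose, mask, prescription):
    """Simple DVH-style metrics for one structure.

    dose: 1D array of voxel doses in Gy; mask: boolean array of the same
    length selecting the structure's voxels; prescription: prescribed dose in Gy.
    """
    d = dose[mask]
    return {
        "D_mean": d.mean(),
        "D_max": d.max(),
        "D_min": d.min(),
        # V95%: fraction of the structure receiving at least 95% of the prescription
        "V95": float(np.mean(d >= 0.95 * prescription)),
    }

# Toy example: 60 Gy prescription, synthetic dose grid and PTV/brainstem masks.
rng = np.random.default_rng(1)
dose = rng.normal(58, 3, size=10_000)
ptv_mask = np.zeros_like(dose, dtype=bool)
ptv_mask[:4_000] = True
brainstem_mask = np.zeros_like(dose, dtype=bool)
brainstem_mask[4_000:5_000] = True

print("PTV:", dvh_metrics(dose, ptv_mask, prescription=60.0))
bs = dvh_metrics(dose, brainstem_mask, prescription=60.0)
print("Brainstem D_max within a 54 Gy limit:", bs["D_max"] <= 54.0)
```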
Teacher Satisfaction in Relationships With Students and Parents and Burnout In the educational field, the role of the support component of the teacher-student relationship is well known, while the role of the teacher-student relationship on teacher burnout is a more current field of investigation. Several studies on the sources of burnout have recently focused on job satisfaction and teacher-student satisfaction. However, the role of teacher-parent satisfaction is still little explored in this field. Moreover, in the Italian school context, students’ seniority and educational level require further investigation, as the average age of teachers is particularly high compared to their European colleagues. The present study aims to examine in a sample of 882 Italian teachers the presence of burnout and differences in teacher-student and teacher-parent satisfaction between primary (students aged 6–10years) and lower secondary (students aged 11–13years) teachers. A further objective is to test whether teacher-student and teacher-parent satisfaction and seniority can be significant predictors of burnout. Teachers completed the Job Satisfaction Scale (MESI) and the MBI-Educators Survey and the data were then processed using MANOVA and multiple linear regression analysis. The results revealed that 8.2% of the teachers suffered from burnout and lower secondary teachers showed the highest levels of emotional exhaustion, depersonalisation and reduced personal accomplishment. Predictors of emotional exhaustion were job dissatisfaction and seniority, and predictors of depersonalisation were job dissatisfaction and teacher-student dissatisfaction. Finally, predictors of personal accomplishment were also teacher-parent satisfaction and teacher-student satisfaction. The implications of these findings for practice and research are discussed in this article. INTRODUCTION Since the 1990s, new theoretical perspectives in education (Bruner, 1990;Lave and Wenger, 1991;Ford and Lerner, 1992;Cole, 1998) have emphasised the relational and contextualistic component of educational systems, and with the studies of Pianta (1999), the teacher-student relationship has become an independent field of investigation in educational psychology. Through various research findings in this area, it has been shown that teacher relationships can affect the quality of learning (Howes and Hamilton, 1992;Pianta, 1999;Darling-Hammond, 2006) and can alter pupils' success-or failure-oriented trajectories (Birch and Ladd, 1996;Fraire et al., 2008;Kuriloff et al., 2019). Several studies have also shown that the teacher-student relationship is able to influence teachers' well-being and psychological health (Friedman, 2006;Spilt et al., 2011). The relationships that teachers establish with their students can be a source of teacher satisfaction and motivation (Hargreaves, 2000;Quan-McGimpsey et al., 2013) or a source of stress and burnout (Friedman, 2006;Corbin et al., 2019). One of the most prominent definitions describes burnout 'as a syndrome of emotional exhaustion, depersonalization, and reduced personal accomplishment that can occur among individuals who work with people in some capacity' (Maslach et al., 1996, p. 4). Emotional exhaustion is a feeling of tiredness and fatigue at work that leads to a feelings of reduced personal accomplishment. 
Depersonalisation results from attitudes of refusal to relate to the clients/patients or students and is associated with ineffective and impersonal responses to their requests (Maslach et al., 1996). Recent studies on teacher burnout consider student misbehaviour one of the main sources of the syndrome (Aloe et al., 2014) and identify the teacher-student relationship as a possible mediator between unruly student behaviour and teacher burnout (Aldrup et al., 2018). Other studies particularly focus on the conflictual nature of the teacher-student relationship as being responsible for the syndrome (Evans et al., 2019). Aldrup et al. (2018) point out that the positive quality of the teacher-student relationship is able to positively affect the increase in teachers' well-being and work enthusiasm and protect against the potential for conflict in the teacher-student relationship (Evans et al., 2019;Klassen et al., 2012). In a recent study, Corbin et al. (2019) explored relational conflict and closeness using the teacher-student relationship scale (Pianta, 2001) in relation to burnout (Maslach et al., 1996). This study showed that relational conflict with students is able to predict teachers' emotional exhaustion and relational closeness is able to predict personal accomplishment. Taken together, these findings are among the first to empirically support the theoretical model outlining the importance of student-teacher relationships for teacher well-being (Spilt et al., 2011). In order to further investigate this line of research aimed at exploring the relationship between the quality of the educational relationship and burnout, this study investigated the role of teacher-student satisfaction and teacher-parent satisfaction to see if they could be considered significant predictors of the syndrome. Indeed, there are still few studies investigating this specific dimension of job satisfaction as a source of teacher burnout (Skaalvik and Skaalvik, 2009). Job satisfaction is a pleasurable or positive emotional state resulting from the appraisal of one's job or job experiences (Locke, 1969); it is an emotional state of well-being when there is correspondence between an individual's characteristics (e.g., needs, expectations, and preferences) and the benefits that derive from their performances at work (Skaalvik and Skaalvik, 2010). Evans (1997) describes job satisfaction as a state of mind determined by the extent to which the individual perceives her/his job-related needs to be met. In the field of job satisfaction studies, Spector (1997) describes job satisfaction as the extent to which people like (satisfaction) or dislike (dissatisfaction) their jobs. Skaalvik and Skaalvik (2009) showed that burnout is associated with low teacher job satisfaction. Corbin et al. (2019), in a sample of German primary school teachers, highlighted the role of teacher-student relationships in predicting teachers' personal emotional burnout. Velasco et al. (2013), in a sample of lower and upper secondary school teachers from northern Italy, also highlighted the relationship between job satisfaction and Burnout, showing a strong influence of social support on teachers' job satisfaction and only a weak influence of managing disciplinary problems with students on burnout levels. Skaalvik and Skaalvik (2009) pointed out instead that negative relationships with students and students' parents and lack of social support can influence teachers' low job satisfaction and burnout onset. 
In considering the teacher-student and teacher-parent relationship, a variable that requires particular attention is the school level. Some studies tend in this regard to highlight the presence of higher levels of burnout among secondary school teachers working with adolescents than among teachers working at primary school level (Quattrin et al., 2009;Vercambre et al., 2009;Betoret and Artiga, 2010;Ullrich et al., 2012;Hall-Kenyon et al., 2014), while the opposite was found in other studies (Tatar and Horenczyk, 2003;Kokkinos, 2006;Tsigilis et al., 2011). Several studies conducted in Italian secondary schools tend to underline the greater conflictual nature of the relationship between teachers and pre-adolescent students, especially in the presence of unruly, turbulent, hyperactive and demotivated student behaviour in overcrowded classes (Di Pietro and Rampazzo, 1997;Pinelli et al., 1999). Some research also highlights a general discomfort in Italian secondary school teachers, which is associated with a representation of their work as predominantly individual and solitary (Buonomo et al., 2017). This would seem to be in line with their university training, which is less focused on supervision and collaboration with colleagues than primary school teachers. 1 In addition to the school factor, another aspect that constitutes a peculiarity in Italy is the age of teachers, as the percentage of those over 50 is exceptionally higher than in other European countries. Data published by the Ministry of Education in Italy in 2016/2107 show that the average age of Italian teachers is around 51 years old and the regions where the oldest teachers work are in the south where 44.2% of teachers are 54 years old (OECD, 2019). Several studies have shown an age-related increase in burnout (Anastasiou and Belios, 2020;Park and Shin, 2020;Luisa et al., 2020;Polatcan et al., 2020) and in particular an increase in levels of emotional exhaustion (Pedditzi et al., 2020); other studies, in contrast, have shown that some veteran teachers can achieve fair levels of job fulfilment (Anderson, 2000;Luisa, 2015). These contradictory results made it necessary to explore seniority in order to understand the possible role of teaching experience. Here again, however, the literature shows that the data are not always consistent. Some research points to a greater vulnerability to burnout occurring when seniority of service 1 Primary school teacher training is regulated by Ministerial Decree 249/2010 and secondary school teacher training by the more recent Legislative Decree 59 of 2017. Frontiers in Psychology | www.frontiersin.org increases due to limited energy and resources (Zavidovique et al., 2018); other research, however, notes that greater teacher experience may be associated with greater satisfaction (Veldman et al., 2013) and commitment (Ryan et al., 2017;Lowe et al., 2019). Veldman et al. (2016) also showed that in veteran teachers, the job satisfaction was positively related to the extent to which their aspirations in teacher-student relationships had been realized. Given the not always unambiguous results concerning the above variables and their relationship with burnout, the present study aims to: 1. verify the possible presence of burnout in a sample of Italian teachers from central and southern Italy; 2. verify whether there are significant differences in burnout between primary school teachers working with children between 6 and 10 years old and lower secondary school teachers working with preadolescents; 3. 
test whether there are significant differences in job satisfaction and teacher-student and teacher-parent satisfaction between primary and secondary teachers; 4. verify whether teacher satisfaction and in particular teacherstudent and teacher-parent satisfaction and seniority of service can be significant predictors of burnout, in its components of emotional exhaustion, depersonalisation and reduced personal accomplishment. MATERIALS AND METHODS Participants 882 Italian teachers participated in the research: 52.4% from primary schools (N = 462) and 47.6% from secondary schools (N = 420). In regard to gender and age, 84.4% of the teachers were female (N = 744) and only 15.6% were male, all aged between 27 and 63 years (mean = 47.5, SD = 7.98). All the teachers worked in public schools and came from central and southern Italy (18.5% from Rome; 30% from Sassari, 20.2% from Bari; and 31.3% from Cagliari). The length of service ranged from 1 to 39 years (mean = 19.56, SD = 9.3). Participants received permission from their schools to take part in the research and completed the questionnaire individually in a paper-pencil survey during breaks at school. The sample obtained was therefore one of convenience and the response rate to the questionnaire was 75% (out of 1,200 distributed, 902 were completed, of which 882 were valid). The study was conducted according to the APA (American Psychological Association, 2002) guidelines for ethical research in psychology and the Ethics Committee of the University of Cagliari approved the research (UniCa no. 0040431, 13/02/2020 -II/9). The job satisfaction scale derived from MESI -Motivations, Emotions, Strategies, Incremental beliefs of teaching (Moè et al., 2010) assesses general job satisfaction in teaching and consists of 5 items (Alpha = 0.84) such as: "I am satisfied with my job" and "My working conditions are excellent. " The items are rated on a 7-point Likert scale from strongly disagree (1) to strongly agree (7). The psychometric characteristics of the Italian version of the Job Satisfaction Scale are reported in Moè et al. (2010). In order to deepen the analysis of teachers' satisfaction regarding specific relationships with students and parents, two more ad hoc items were constructed using a 7-point Likert scale (1 = strongly dissatisfied; 7 = fully satisfied). The items are: "I feel satisfied with my relationship with students" and "I feel satisfied with my relationship with parents" and were considered for a separate integrative evaluation with respect to the other sets of questions. The MBI-ES (Maslach Burnout Inventory-Educators Survey by Maslach and Jackson, 1986) consists of 22 items assessable on a 6-point Likert scale and evaluates emotional exhaustion, depersonalisation, and personal accomplishment (Sirigatti and Stefanile, 1993). MBI-ES maintains its specificity for analysing teachers' burnout. The MBI consists of 22 items and the frequency of responses was tested using a 6-point response method, where the extremes are defined by never (0) and every day (6). The scales forming the MBI are as follows: • Emotional Exhaustion (EE), which examines the feeling of being emotionally drained and exhausted by one's work (9 items such as: "I feel tired when I get up in the morning and have to face another day of work" and "I feel exhausted by my work"; Alpha = 0.87). 
• Depersonalisation (DP), which measures a cold and impersonal response towards service users (5 items such as: "I seem to treat some students as if they were objects" and "I do not really care what happens to some students"; Alpha = 0.71). • Personal Accomplishment (PA), which assesses the feeling of one's competence and the desire to succeed at work (eight items such as: "I feel full of energy" and "I have achieved many valuable things in my work"; Alpha = 0.76). High scores on the Emotional Exhaustion (EE) and Depersonalisation (DP) scales and low scores on the Personal Achievement (PA) scale demonstrate a high degree of burnout. The psychometric characteristics of the Italian version of the MBI-ES are reported in Sirigatti and Stefanile (1993). Data Analysis In the first phase of the work, reliability checks were carried out on the scales using Cronbach's Alpha. Subsequently, to identify burnout condition, we calculated the frequency of subjects with a combination of high levels of Emotional Exhaustion, Depersonalisation, and low Personal Accomplishment scores, as suggested by the MBI-ES coding manual for Italy (Sirigatti and Stefanile, 1993). To highlight the differences in burnout related to school (primary and secondary), the MANOVA was applied on the dependent variables exhaustion, depersonalisation and personal fulfilment. Then the One-Way ANOVA was applied to find out the specific effects on the individual variables. The MANOVA was also used to test the effect of school (primary and secondary) on the teacher satisfaction variables (job satisfaction, teacher-student satisfaction, and teacher-parent satisfaction) and then the One-way ANOVA was applied with the specific variables. Pearson's bivariate correlational analysis was then calculated to check the correlations between the variables considered (burnout scales, satisfaction scales and seniority) and in view of the regression analysis all collinearity checks were performed. Finally, multiple linear regression analysis (enter method) was carried out in order to identify whether job satisfaction, teacher-student satisfaction, teacher-parent satisfaction and seniority could be considered significant predictors of teacher emotional exhaustion. The same procedure was then applied with the same predictors 2 to the criterion variables of depersonalisation and then personal fulfilment. The statistical significance was always set at p < 0.01. Scale Reliability The reliability of the MBI scale was calculated using Cronbach's alpha coefficient. The data on the MBI were as follows: Emotional Exhaustion (nine items: α = 0.86), Depersonalisation (five items: α = 0.75), Personal Accomplishment (eight items: α = 0.80). The reliability of the satisfaction scale (5 items) was α = 0.76. Burnout Levels Of the interviewed teachers, 29.9% (n = 264) demonstrate a high level of emotional exhaustion; 33.8% (n = 298) have a high level of depersonalization and 28.3% (n = 250) show a low level of professional personal achievement. Meeting all the conditions to be diagnosed at the highest level of the syndrome (high scores simultaneously in EE and DP and low scores in PA), 8.2% (n = 72) were found to possess burnout. Most of these teachers were female (75%), with 65.3% from lower secondary schools and 34.7% from primary schools. Correlations The correlations between burnout scales, satisfaction and seniority of teachers are shown in Table 1. The correlations between burnout scales (EE, DP, and RP) are significant and overall good. 
As expected, they are positive between emotional exhaustion and depersonalisation and negative between these scales and personal accomplishment. Correlations are moderate and negative between job satisfaction and burnout. With regard to teacher-student satisfaction, a good positive correlation is observed with job satisfaction and teacher-parent satisfaction. The correlation between teacher-student satisfaction and emotional exhaustion is negative and moderate, as is the correlation with depersonalisation. As far as teacher-parent satisfaction is concerned, there is a good positive correlation with job satisfaction and teacher-student satisfaction. Finally, with regard to seniority, a positive correlation emerges only with emotional exhaustion. Predictors of Emotional Exhaustion The following are the results of the multiple linear regression analysis ( Table 2) carried out using emotional exhaustion as the criterion variable and job satisfaction, teacher-student and teacher-parent satisfaction and length of service as predictors (enter method). Predictors of Depersonalisation The results of the multiple linear regression carried out to highlight the significant predictors of depersonalisation are shown in Table 3. DISCUSSION The results of this study revealed that 8.2% of the Italian teachers in the sample were suffering from burnout. Specifically, 29.9% had a high level of emotional exhaustion (as many as 264 teachers) and 33.8% had high depersonalisation scores (as many as 298 teachers). These results confirm once again that the professional category of teachers is at risk of burnout. The comparison between primary and secondary school teachers shows that secondary school teachers are more at risk of burnout than primary school teachers. They were also more dissatisfied with their work and the teacher-student relationship. As already found in other research (Quattrin et al., 2009;Ullrich et al., 2012;Hall-Kenyon et al., 2014), there is evidence that working with secondary school students tires teachers more than at primary school level. Teacher-student dissatisfaction could therefore be associated with teachers' difficulties in dealing with pre-adolescent students. However, given the complexity of the Italian school context, a multiplicity of other interacting relational and organisational factors (Buonomo et al., 2017;Pedditzi and Marcello, 2018) should also be taken into account. In contrast, no difference was observed between primary and secondary teachers regarding satisfaction with the teacherparent relationship. This finding, which could be further explored with qualitative research methods, highlights that the teacherparent relationship may be an under-utilised psychosocial resource at school for promoting well-being. Among the predictors of emotional exhaustion, multiple linear regression analysis revealed job dissatisfaction and seniority of service, confirming previous research pointing to an increase in burnout with teachers' length of service (Zavidovique et al., 2018). These findings are also in line with previous research findings on teachers' age (Pedditzi et al., 2020;Polatcan et al., 2020;Park and Shin, 2020;Anastasiou and Belios, 2020) and show an increase in teachers' emotional exhaustion over time. The predictors of depersonalisation were found instead to be dissatisfaction in the teacher-student relationship and job dissatisfaction. This result is very important because it confirms that teacher-student dissatisfaction contributes to depersonalisation. 
The applicative implications of this result indicate the possibility of intervening in the teacher-student relationship to improve satisfaction levels and prevent depersonalisation. Teacher-student satisfaction has also been identified as a predictor of personal accomplishment, along with job satisfaction and teacher-parent satisfaction. From the perspective of burnout prevention, it is therefore more important than ever to promote satisfying teacher-student and teacher-parent relationships. This study therefore, on the one hand, confirms previous research findings on the relationship between job dissatisfaction and burnout (Skaalvik and Skaalvik, 2009; Molero Jurado et al., 2019; Robinson et al., 2019) and, on the other hand, highlights original research findings regarding the predictive value of teacher-student satisfaction on depersonalisation and of teacher-parent relationships on personal fulfilment. However, it is important to consider, among the limitations of this research, the fact that teacher-student and teacher-parent satisfaction are measured through single items; therefore, in a future perspective, it is necessary to deepen these dimensions with the parallel use of Pianta's Teacher Student Relationship Scale, through a longitudinal and experimental design, in order to also capture possible burnout development phases. It is also necessary to remember that the results of this research are specific to the sample tested and cannot be generalised to all teachers. In fact, the use of convenience samples can lead to distortions in the selection of the group, increasing the probability that the participants are those most likely to answer the questionnaire. A further limitation of our study is that all data are self-reported and therefore not completely objective. However, this study has strengths such as the large sample size and the depersonalisation data, which in our research showed acceptable values of internal consistency of the scale. These data allowed us to analyse burnout in relation to teacher-student satisfaction, taking into account all dimensions of Maslach's model, and not only the dimensions of emotional exhaustion and personal fulfilment as in previous research (Corbin et al., 2019). The practical implications of this study relate to the possibility of designing teacher training and burnout prevention activities aimed at improving teacher-student and teacher-parent relationships to promote the well-being of teachers and the entire school community. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Ethics Committee of the University of Cagliari (UniCa no. 0040431, 13/02/2020 -II/9). The patients/participants provided their written informed consent to participate in this study. AUTHOR CONTRIBUTIONS MP designed and wrote the study. MN supervised the data collection. EN supervised the statistical analysis. All authors contributed to the article and approved the submitted version. ACKNOWLEDGMENTS Thanks to all the schools and teachers who participated in this research.
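As a companion to the reliability and burnout-classification steps described in the Methods, the sketch below shows a minimal Cronbach's alpha computation and the "all three criteria at once" burnout rule (high EE, high DP and low PA simultaneously). The data are randomly generated placeholders and the cut-off values are assumptions, not the Italian MBI-ES norms used by the authors.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of item columns (one row per respondent)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)

# Hypothetical item-level answers: 100 teachers x 9 emotional-exhaustion items (0-6 Likert).
# The answers are random, so the printed alpha will be near zero by construction.
ee_items = pd.DataFrame(rng.integers(0, 7, size=(100, 9)),
                        columns=[f"EE_{i + 1}" for i in range(9)])
print(f"Cronbach's alpha (EE scale): {cronbach_alpha(ee_items):.2f}")

# Scale totals plus placeholder cut-offs standing in for the Italian MBI-ES norms:
# a teacher counts as burnt out only if high EE, high DP and low PA co-occur.
totals = pd.DataFrame({
    "EE": ee_items.sum(axis=1),          # 0-54
    "DP": rng.integers(0, 31, 100),      # 0-30
    "PA": rng.integers(0, 49, 100),      # 0-48
})
HIGH_EE, HIGH_DP, LOW_PA = 24, 9, 30     # assumed thresholds, not the published norms
burnout = (totals["EE"] >= HIGH_EE) & (totals["DP"] >= HIGH_DP) & (totals["PA"] <= LOW_PA)
print(f"Teachers meeting all three burnout criteria: {burnout.mean():.1%}")
```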
Origin of Jordanian honeybees Apis mellifera (Hymenoptera: Apidae) using amplified mitochondrial DNA The honeybee (Apis mellifera L.) has a large number of geographic subspecies distributed across Europe, Africa and Asia, many of which have been described. This identification is important for bee breeding and preserving honeybee biodiversity. To investigate the origin of Jordanian honeybees, 32 samples collected from different locations in Jordan were analyzed using four different enzyme systems: the BglII site in cytochrome oxidase b (Cytb), the EcoRI site in the large ribosomal (lsRNA) subunit, the XbaI site in the cytochrome c oxidase I (COI) subunit and the HinCII site in the cytochrome c oxidase I (COI) subunit. The first three enzymes were found to be polymorphic. The DNA banding pattern analyses revealed that Jordanian honeybees belong to the East Mediterranean and Middle Eastern mitochondrial lineages. INTRODUCTION Honeybee keeping in Jordan is an important aspect of the agricultural economy. The total number of honeybee colonies in Jordan is around 40 thousand, and the total amount of honey produced is 150 tons; this amount represents only 20% of the local consumption of honey (Agricultural statistical year report, 2005). The bee race in Jordan is Apis mellifera syriaca (Syrian honeybee), which is a native of the Eastern Mediterranean region (Jordan, Palestine, Syria and Lebanon). It is characterized by bright yellow color, small size, aggressiveness and a tendency to construct several swarm cells (Ruttner, 1988). This honeybee subspecies is tolerant of the environmental conditions prevailing in the Jordan valley and mountain areas of Jordan (Zaitoun, 2000). However, in comparison with other honeybee subspecies, this bee is not easy to manage because it is aggressive and produces little honey. For these reasons, many Jordanian beekeepers have imported queens and bees of other subspecies, such as A. m. carnica from Germany and Egypt, A. m. ligustica from Italy and the United States of America, and A. m. anatoliaca from Turkey. In addition, Jordan is also adjacent to the borders of Africa, so African honeybee subspecies may be transported into the country either accidentally or by beekeepers. The imported subspecies are not correctly identified, and mating between the different subspecies could occur, producing new hybrids. Honeybees (Apis mellifera) are geographically diverse, with as many as 25 subspecies (Ruttner, 1988; Sheppard et al., 1997). Biodiversity of the honeybee was first assessed using morphometrics. Ruttner et al. (1978) proposed the existence of three distinct branches: a South and Central African, a North African and West European, and a North Mediterranean branch. This classification was further refined by the addition of a fourth evolutionary branch that includes the Near and Middle Eastern subspecies (Ruttner, 1988). Many other scientists have suggested classifications based on morphometric characters (Cornuet et al., 1988; Cornuet & Fresnaye, 1989; Ruttner, 1992; Crewe et al., 1994; Sheppard et al., 1997; Engel, 1999). Morphological characters are not well suited for phylogeographical studies because they can be sensitive to environmental selection pressures, need a lot of time and experience, and are sometimes unsuitable for identifying some hybrids (Franck et al., 2000). 
Current trends in the application of DNA marker techniques in a diversity of insect ecological studies show that mitochondrial DNA (mtDNA), microsatellites, random amplified polymorphic DNA (RAPD), expressed sequence tags (EST) and amplified fragment length polymorphism (AFLP) markers have contributed significantly to our understanding of the genetic basis of insect diversity (Behura, 2006). Identification and classification of honeybees is essential for the breeding and improvement of honeybees in Jordan. The aim of this research was to analyze the mtDNA (restriction sites and length polymorphism) of different honeybee populations in Jordan in order to determine their origin and evolution. MATERIAL AND METHODS Digestion with restriction enzymes Each PCR amplification product was digested with the appropriate restriction enzyme (Table 1) using the following: 5 µl of PCR product, 0.2 µl of BSA (Promega), 2 µl of enzyme buffer (Promega) and 1 µl of restriction enzyme (Promega); the final volume was adjusted to 20 µl by adding sterile distilled water. The mixture was incubated at 37°C for 3 h. Gel electrophoresis The resulting restriction fragments were separated by electrophoresis on a 2% agarose gel with 1× TBE buffer (0.1 M Tris-borate, 0.2 mM EDTA, 0.1 M boric acid, pH 8.3). The gel was then stained with ethidium bromide and examined under ultraviolet illumination. Restriction sites were scored as present (PCR amplification product cut, resulting in two bands) or absent (PCR amplification product not cut, resulting in one band). RESULTS AND DISCUSSION The genetic variation in 32 honeybee colonies collected from different locations in Jordan was analyzed using four discriminating restriction enzymes. The BglII site in cytochrome oxidase b (Cytb), the EcoRI site in the large ribosomal (lsRNA) subunit, and the XbaI site in the cytochrome c oxidase I (COI) subunit were present, while the HinCII site in the COI subunit was absent, in the samples that revealed restriction sites for the East Mediterranean lineage. The BglII site was present, while the other three sites were absent, in the samples that revealed restriction sites for the Middle Eastern lineage (Table 1). Figs 1, 2 and 3 show the polymorphic fragment patterns of Jordanian honeybee DNA digested with the restriction enzymes BglII, EcoRI and XbaI, respectively. In Fig. 1, the cleaved 500 bp amplified fragments yielded two bands (300 bp and 200 bp) in all samples. In Fig. 2, the 900 bp amplified fragments in six samples (10, 11, 12, 15, 27 and 32) were not cleaved, while the other 26 samples plus the two queens were cleaved, yielding two bands (600 bp and 300 bp). In Fig. 3, the 1400 bp amplified fragments in five samples (4, 11, 12, 15 and 32) and the 1000 bp amplified fragments in two samples (10 and 25) were not cleaved, while the other 25 samples plus the two queens that have 900 bp amplified fragments were cleaved, yielding two bands (700 bp and 200 bp). In Fig. 4, which represents DNA digested with the HinCII enzyme, none of the 1400 bp amplified fragments were cleaved. The DNA patterns resulting from the use of four restriction enzymes correspond to the Middle Eastern type in five samples (10, 11, 12, 15 and 32) and to the East Mediterranean type in most of the samples. The different DNA patterns of samples 4, 25 and 27 in Figs 2 and 3 may be informative. Samples 4 and 25 were digested with EcoRI but not with XbaI; these two samples might include mitotypes of the Middle Eastern lineage and could result from a DNA insertion or deletion in their genome. Sample 27 was digested by XbaI but not with EcoRI; this sample might include the North Mediterranean mitotype II, according to the nomenclature of Smith et al. (1997). 
In order to clarify this, DNA sequencing of the three samples is required. Also, a comparison of the resultant sequences of the mtDNA with the genome sequences of the honeybee (Vlasak et al., 1987; Crozier et al., 1989; The Honey Bee Genome Sequencing Consortium, 2006) might also prove useful. Palmer et al. (2000) also mention the existence of two types of the East Mediterranean lineage in Turkey. The data provided by the mtDNA markers confirmed the existence of the Middle Eastern lineage in Jordan. The existence of this evolutionary lineage, which includes A. mellifera subspecies from the Middle East area, accords with the results of previous studies by Franck et al. (2000b) and Palmer et al. (2000). This finding also accords with maternal transmission of mtDNA, which does not reflect the genetic contribution of the drones of the colonies. Franck et al. (2001) mention that the presence of the Middle East mitochondrial lineage in Egypt and Somalia may result from successive honeybee invasions of Africa from the Middle East, as the Horn of Africa and the Rift Valley are the main channels for colonizing species from Asia. This study proved that variation in the mitochondrial molecule can be used to discriminate among the evolutionary lineages of honeybee subspecies, and that the EcoRI site in the lsRNA subunit gene and the XbaI site in the COI subunit gene are found in bees of the East Mediterranean group (Smith, 1988; Smith & Brown, 1988, 1990; Hall & Muralidharan, 1989; Smith et al., 1989, 1991; Crozier et al., 1989; de la Rúa et al., 2001). If queen replacement is maintained or increased over the next years in Jordan, the genetic pool of local populations may be severely disrupted. Therefore, a policy aimed at preserving local populations is required, as humans can greatly modify the genetic architecture of honeybee populations.
Table 1. Summary of the pairs of primers, location of amplified fragments, restriction enzymes and detected bands. *E.M. - Eastern Mediterranean; **M.E. - Middle Eastern. Note: The EcoRI site in the large ribosomal subunit is potentially polymorphic both in the Eastern Mediterranean and Middle Eastern lineages.
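The presence/absence scoring summarised in Table 1 maps directly onto a small decision rule: East Mediterranean samples were cut by BglII, EcoRI and XbaI but not HinCII, while Middle Eastern samples were cut by BglII only. The sketch below encodes exactly that rule; the sample scorings are invented for illustration.

```python
# Lineage assignment from restriction-site presence/absence, following the
# patterns reported above.  Site order: (BglII in Cytb, EcoRI in lsRNA,
# XbaI in COI, HinCII in COI); True = PCR product cut (site present).
PATTERNS = {
    (True, True, True, False): "East Mediterranean",
    (True, False, False, False): "Middle Eastern",
}

def classify(bglii: bool, ecori: bool, xbai: bool, hincii: bool) -> str:
    return PATTERNS.get((bglii, ecori, xbai, hincii), "unresolved (sequencing needed)")

# Hypothetical gel scorings for three colonies.
samples = {
    "colony_03": (True, True, True, False),
    "colony_10": (True, False, False, False),
    "colony_04": (True, True, False, False),   # EcoRI cut but XbaI uncut, cf. sample 4
}
for name, sites in samples.items():
    print(name, "->", classify(*sites))
```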
An easily computable error estimator in space and time for the wave equation We propose a cheaper version of \textit{a posteriori} error estimator from arXiv:1707.00057 for the linear second-order wave equation discretized by the Newmark scheme in time and by the finite element method in space. The new estimator preserves all the properties of the previous one (reliability, optimality on smooth solutions and quasi-uniform meshes) but no longer requires an extra computation of the Laplacian of the discrete solution on each time step. that the work of Adjerid et al. is related to some space-time Galerkin discretizations, which are different from the time marching schemes considered in the present work as well as in other papers cited above. In deriving our a posteriori estimates, we follow first the approach of [15]. First of all, we recognize that the Newmark method can be reinterpreted as the Crank-Nicolson discretization of the reformulation of the governing equation as the first-order system, as in [5]. We then use the techniques stemming from a posteriori error analysis for the Crank-Nicolson discretization of the heat equation in [17], based on a piecewise quadratic polynomial in time reconstruction of the numerical solution. Finally, in a departure from [15], we replace the second derivatives in space (Laplacian of the discrete solution) in the error estimate with the forth derivatives in time by reusing the governing equation. This leads to the new a posteriori error estimate in time and also allows us to easily recover the error estimates in space that turn out to be the same as those of [15]. The resulting estimate is referred to as the 5-point estimator since it contains the fourth order finite differences in time and thus involves the discrete solution at 5 points in time at each time step. On the other hand, the estimate [15] involves only 3 points in time at each time step and will be thus referred to as the 3-point estimator. Like in the case of the 3-point estimator, we are able to prove that the new 5-point estimator is reliable on general regular meshes in space and non-uniform meshes in time (with constants depending on the regularity of meshes in both space and time). Moreover, the 5-point estimator is proved to be of optimal order at least on sufficiently smooth solutions, quasi-uniform meshes in space and uniform meshes in time, again reproducing the results known for the 3-point estimator. Numerical experiments demonstrate that 3-point and 5-point error estimators produce very similar results in the majority of test cases. Both turn out to be of optimal order in space and time, even in situations not accessible to the current theory (non quasi-uniform meshes, not constant time steps). It should be therefore possible to use the new estimator for mesh adaptation in space and time. In fact, the best strategy in practice may be to combine both estimators to take benefit from the strengths of each of them: the relative cheapness of the 5-point one, and the better numerical behavior of the 3-point estimator under abrupt changes of the mesh. The outline of the paper is as follows. We present the governing equations and the discretization in Section 2. Since our work is based on techniques from [15], Section 3 recalls the a posteriori bounds in time and space from there. In Section 4, the 5-point a posteriori error estimator for the fully discrete wave problem is derived. Numerical experiments on several test cases are presented in Section 5. 
The Newmark scheme for the wave equation We consider initial-boundary-value problem for the wave equation. Let Ω be a bounded domain in R 2 with boundary ∂Ω and T > 0 be a given final time. Let u = u(x, t) : Ω × [0, T ] → R be the solution to where f, u 0 , v 0 are given functions. Note that if we introduce the auxiliary unknown v = ∂u ∂t then model (2.1) can be rewritten as the following first-order in time system The above problem (2.1) has the following weak formulation [11]: for given f ∈ L 2 (0, T ; L 2 (Ω)), u 0 ∈ H 1 0 (Ω) and v 0 ∈ L 2 (Ω) find a function where ·, · denotes the duality pairing between H −1 (Ω) and H 1 0 (Ω) and the parentheses (·, ·) stand for the inner product in L 2 (Ω). Following Chapter 7, Section 2, Theorem 5 of [11], we observe that in fact Higher regularity results with more regular data are also available in [11]. Let us now discretize (2.1) or, equivalently, (2.2) in space using the finite element method and in time using an appropriate marching scheme. We thus introduce a regular mesh T h on Ω with triangles where E h represents the internal edges of the mesh T h and the standard finite element space V h ⊂ H 1 0 (Ω) of piecewise polynomials of degree k ≥ 1: Let us also introduce a subdivision of the time interval [0, T ] 0 = t 0 < t 1 < · · · < t N = T, with non-uniform time steps τ k = t n+1 − t k for n = 0, . . . , N − 1 and τ = max The Newmark scheme [18,19] with coefficients β = 1/4, γ = 1/2 as applied to the wave equation (2.1): given and then compute u n+1 h ∈ V h for n = 1, . . . , N − 1 from equation where f n is an abbreviation for f (·, t k ). Following [5,15], we observe that this scheme is equivalent to the Crank-Nicolson discretization of the governing equation written in the form (2.2): Note that the additional unknowns v h k are the approximations are not present in the Newmark scheme (2.5)-(2.6). If needed, they can be recovered on each time step by the following easy computation From now on, we shall use the following notations We apply this notations to all quantities indexed by a superscript, so that, for example, f n+1/2 = (f n+1 + f n )/2. We also denote u(x, t k ), v(x, t k ) by u n , v n so that, for example, u n+1/2 = u n+1 + u n /2 = (u(x, t n+1 ) + u(x, t k )) /2. We shall measure the error in the following norm Here and in what follows, we use the notations u(t) and ∂u ∂t (t) as a shorthand for, respectively, u(·, t) and ∂u ∂t (·, t). The norms and semi-norms in Sobolev spaces H k (Ω) are denoted, respectively, by · H k (Ω) and |·| H k (Ω) . We call (2.11) the energy norm referring to the underlying physics of the studied phenomenon. Indeed, the first term in (2.11) may be assimilated to the kinetic energy and the second one to the potential energy. The 3-point time error estimator The aim of this section is to recall a posteriori bounds in time and space from [15] for the error measured in the norm (2.11). Their derivation is based on the following piecewise quadratic (in time) 3-point reconstruction of the discrete solution. Definition 3.1. Let u n h be the discrete solution given by the scheme (2.6). Then, the piecewise quadratic reconstructionũ hτ (t) : [0, T ] → V h is constructed as the continuous in time function that is equal on [t k , t n+1 ], n ≥ 1, to the quadratic polynomial in t that coincides with u n+1 h (respectively u n h , u n−1 h ) at time t n+1 (respectively t k , t n−1 ). Moreover,ũ hτ (t) is defined on [t 0 , t 1 ] as the quadratic polynomial in t that coincides with u 2 . 
The quadratic reconstructionsũ hτ ,ṽ hτ are thus based on three points in time (normally looking backwards in time, with the exemption of the initial time slab [t 0 , t 1 ]). This is also the case for the time error estimator (3.3), recalled in the following Theorem and therefore referred to as the 3-point estimator. Theorem 3.2. The following a posteriori error estimate holds between the solution u of the wave equation (2.1) and the discrete solution u n h given by (2.5) and (2.6) for all t k , 0 ≤ n ≤ N with v n h given by (2.9): where the space indicator is defined by here C 1 , C 2 are constants depending only on the mesh regularity, [·] stands for a jump on an edge E ∈ E h , and u hτ ,ṽ hτ are given by Definition 3.1. The error indicator in time for k = 1, . . . , N − 1 is 4) and η T (t 0 ) = 5 12 We also recall an optimality result for the 3-point time error estimator. We introduce to this end the , ). Suppose that mesh T h is quasi-uniform, the mesh in time is uniform (t k = kτ ), and the initial approximations are chosen as Then, the 3-point time error estimator η T (t k ) defined by (3.3, 3.5) is of order τ 2 , i.e. with a positive constant C depending only on u, f , and the regularity of mesh T h . Remark 3.4. Note that the particular choice for the approximation of initial conditions in (3.7) using the H 1 0 -orthogonal projection (3.6) is crucial to obtain the optimal order of the 3-point time error estimator, as confirmed both theoretically and numerically in [15]. The 5-point A P OSTERIORI error estimator As already mentioned in the Introduction, the time error estimator (3.3) contains a finite element approximation to the Laplacian of u k h , i.e. z k h given by (3.4). This is unfortunate because z k h should be computed by solving an additional finite element problem that implies additional computational effort. Having in mind that the term ∂ 2 n f h − z n h in (3.3) is a discretization of ∂ 2 f /∂t 2 + ∆u = ∂ 4 u/∂t 4 at time t n our goal now is to avoid the second derivatives in space in the error estimates and replace them with the forth derivatives in time. We introduce a "fourth order finite difference in time" ∂ 4 n defined by on any sequence {w n h } n=0,1,... ∈ V h . This can be rewritten as a composition of two second order finite difference operators where ∂ 2 w h is the standard finite difference (2.10) applied to w h , and∂ 2 n is a modified second order finite difference defined by∂ Note that a lower subscript "n" is lacking from ∂ 2 w h in (4.2) consistent with the fact that∂ 2 n is applied there to the sequence {∂ 2 n w h } n=0,1,... rather than to a single instance of ∂ 2 n w h . In full detail, (4.2) should be interpreted as Remark 4.1. In the case of constant time steps τ n = τ , (4.1) is reduced to It is thus indeed a standard finite difference approximation to the fourth derivative. In particular, it is exact on polynomials (in time) of degree up to 4. However, a standard fourth order finite difference in the general case of non constant time steps would be given by the divided differences Clearly, the formulas for ∂ 4 n w h and∂ 4 n w h , although similar, do not coincide in general, and consequently ∂ 4 n w h is not necessarily consistent with the fourth derivative in time of w h . Definition (4.1) may seem thus artificial and counter-intuitive. We shall see however that it arises naturally in the analysis of Newmark scheme, cf. forthcoming Lemma 4.2. 
Indeed, in order to "differentiate" in time the averaged quantitiesw n h defined by (4.4) and present in the scheme (2.6), cf. also (4.13), one needs to employ the modified second order finite differencê ∂ 2 n , which shall be composed further with ∂ 2 n to give rise to ∂ 4 n . For any sequence {w n h } n=0,1,... ∈ V h , we denotē Consistently with the conventions above,w h will stand for the collection of any sequence {w n h } n=0,1,... . The following technical lemma establishes a connection between second order discrete derivatives∂ 2 n and ∂ 2 n . where c and C are positive constants depending only on the mesh regularity in time, i.e. on max k≥0 Proof. We first note that relation (4.5) does not contain any derivatives in space and thus it should hold at any point x ∈ Ω. Consequently, it is sufficient to prove this Lemma assuming that w n h , ∂ 2 k w h , etc. are real numbers, i.e. replacing V h by R. This is the assumption adopted in this proof. We shall thus drop the sub-indexes h everywhere. Furthermore, it will be convenient to reinterpret w n in (4.2), (4.3) and (4.4) as the values of a real valued function w(t) at t = t n . We shall also use the notations likew n , ∂ 2 n w, and so on, where w is a continuous function on R, always assuming w n = w(t n ). Observe that∂ 2 nw is a linear combination of 5 numbers {w n−3 , . . . , w n+1 }. Thus, it is enough to check equality (4.5) on any 5 continuous functions φ (k) (t), k = n − 3, . . . , n + 1, such that the vector of values of φ (k) at times t l , l = n − 3, . . . , n + 1, form a basis of R 5 . For fixed n, let us choose these functions as First we notice that for every linear function u(t) on [t n−3 , t n+1 ] we have∂ 2 nū = ∂ 2 n u = 0. Thus, we get immediately∂ 2 nφ(n−3) = ∂ 2 n φ (n−3) = 0 and∂ 2 kφ (n+1) = ∂ 2 k φ (n+1) = 0 so that (4.5) is fulfilled on functions φ (n−3) , φ (n+1) with any coefficients α k , k = n − 2, n − 1, n. Now we want to provide coefficients α k , k = n − 2, n − 1, n for which (4.5) is fulfilled on functions φ (n−2) , φ (n−1) and φ (n) . For brevity, we demonstrate the idea only for function φ (n) (t). Function φ (n) (t) is linear on [t n−3 , t n ] and thus From direct computations it is easy to show that where ∼ hides some factors that can be bounded by constants depending only on the mesh regularity. Thus we are able to establish expression for coefficient α n =∂ 2 kφ (n) ∂ 2 n φ (n) ≤ C. Similar reasoning for function φ (n−1) and φ (n−2) shows that α n−1 =∂ 2 nφ(n−1) The next step is to show boundedness from below of n k=n−2 α k . We will show it by applying equality (4.5) to second order polynomial function s(t) = t 2 2 . Using a Taylor expansion of s(t) aroundt n in the definition ofs n givess n = τ n (t 2 n +t n τ n−1 + 1 4 (τ 2 n + τ 2 n−1 )) + τ n−1 (t 2 n −t n τ n + 1 4 (τ 2 n + τ 2 n−1 )) 2(τ n + τ n−1 ) Substituting this into the definition of∂ 2 ns we obtain Using (4.5) and the fact that ∂ 2 n s = 1 for k = n − 2, n − 1, n we note that This implies n k=n−2 α k ≥ C. For all n ≥ 3 there exist coefficients β k , k = n − 2, n − 1, n such that n k=n−2 where coefficients α k , k = n − 2, n − 1, n are introduced in Lemma 4.2. Moreover where C is a positive constant depending only on the mesh regularity in time, i.e. on max k≥0 Proof. As in proof of Lemma 4.2, we assume V h = R, drop the sub-indexes h and interpret w n , s n as the values of continuous real valued functions w(t), s(t) at t = t n . Using (4.7) and notations (2.10) implies ∂ 2 k w = ∂ k s. 
Now, we are able to rewrite (4.8) in terms of s n only n k=n−2 As in the proof of Lemma 4.2 we take into account the fact that equation (4.9) should hold for every 5 numbers {s n−3 , . . . , s n+1 } and therefore it's enough to check equality (4.9) on 5 linearly independent piecewise linear functions φ (k) introduced by (4.6). Using the reasoning as in Lemma 4.2 leads to desired result (4.8). We can now prove an a posteriori error estimate involving ∂ 4 n u h . Since the latter is computed through 5 points in time {t n−3 , . . . , t n+1 }, we shall refer to this approach as the 5-point estimator. For the same reason, this estimator is only applicable from time t 4 . The error at first 3 time steps should be thus measured differently, for example using the 3-point estimator from Theorem 3.2. Theorem 4.4. The following a posteriori error estimate holds between the solution u of the wave equation (2.1) and the discrete solution u n h given by (2.5)-(2.6) for all t n , 4 ≤ n ≤ N with v n h given by (2.9): where the space error indicator is defined by (3.2) and the time error indicator iŝ with additional higher order termsη The constant C > 0 depends only on the mesh regularity in time, i.e. on max k≥0 Proof. We note first of all that it is sufficient to prove the Theorem for the final time, i.e. n = N because the statement for the general case n < N will follow by resetting the final time N to n. Introducing the L 2 -orthogonal projection P h : we can rewrite scheme (2.6) as for n = 0, . . . , N − 1 wheref n h is defined through averaging (4.4) from f n h = P h f (t n , ·). Taking a linear combination of instances of (4.13) at steps n, n − 1, n − 2 with appropriate coefficients gives (4.14) Using the definition of operator∂ 2 n and re-introducing v n h by (2.7) leads tô with coefficients α k , β k introduced in Lemmas 4.2 and 4.3. Moreover, by Lemma 4.2 γ = n k=n−2 α k −1 is positive and bounded so that with γ k = γβ k that are all uniformly bounded on regular meshes in time. Similarly, Thus, The rest of the proof follows closely that of Theorem 3.2, cf. [15]. We adopt the vector notation where v = ∂u/∂t. Note that the first equation in (2.2) implies that Similarly, Newmark scheme (2.7) and (2.8) can be rewritten as The a posteriori analysis relies on an appropriate residual equation for the quadratic reconstructionŨ hτ = ũ hτ v hτ . We have thus for t ∈ [t n , t n+1 ], n = 1, . . . , N − 1 17) so that, after some simplifications, Consider now (4.16) at time steps n and n − 1. Subtracting one from another and dividing by τ n−1/2 yields Introduce the error between reconstructionŨ hτ and solution U to problem (4.15): (4.20) or, component-wise Taking the difference between (4.19) and (4.15) we obtain the residual differential equation for the error valid for t ∈ [t n , t n+1 ], n = 1, . . . , andĨ h : H 1 0 (Ω) → V h is a Clément-type interpolation operator which is also a projection [10,20]. Noting that (A∇E, ∇E) = 0 and Integrating (4.21) in time from t 3 to some t * ≥ t 3 yields and assume that t * ∈ [t 3 , t N ] is the point in time where Z attains its maximum and t * ∈ (t n , t n+1 ] for some n. We have for the first and second terms in (4.22) This follows from an integration by parts with respect to time and the estimates on operators Π h andĨ h , cf. [15], and gives rise to the space part of the error estimate (4.10). Indeed, we can summarize the bounds above as The third term in (4.22) is responsible for the time estimator. 
It can be written as Recalling that Z(t * ) is the maximum of Z(t) and using the estimate Ĩ h E v L 2 (Ω) ≤ C E v L 2 (Ω) we continue as Noting tm+1 tm |p m |dt ≤ 1 12 we can finally bound III as Summing together the estimates on the terms I, II, III, and recalling Z(t * ) ≥ Z(t N ) yields (4.10) at the final time t N . Remark 4.5. The termsη h.o.t T (t k ) in (4.10) are (at least formally) of higher order thanη T (t k ). We propose therefore to ignoreη h.o.t T (t k ) in practice together with the integral of f −f τ , and to useη T (t k ) as the indicator of error due to the discretization in time. The following Theorem shows that the latter is indeed of optimal order τ 2 , at least for sufficiently smooth solutions, on quasi-uniform meshes in space and uniform meshes in time. Proof. The result follows from Theorem 3.3 by using (4.14) and Lemma 4.2 . Remark 4.7. Note, that as in the case for 3-point error estimator, the approximation of initial conditions is crucial for the optimal rate of our time error estimator. A toy model: a second order ordinary differential equation Let us consider first the following ordinary differential equation with a constant A > 0. This problem serves as simplification of the wave equation in which we get rid of the space variable. The Newmark scheme reduces in this case to and the error becomes The 3-point and the 5-point a posteriori error estimates are then defined as follows: We define the following effectivity indices in order to measure the quality of the 3-point and the 5-point estimators We consider problem (5.1) with the exact solution u = cos( √ At), and the final time T = 1. The results of simulations with constant time steps τ n = τ = T /N are presented in Table 1. We observe that 3-point and 5-point estimators are divided by about 100 when the time step τ is divided by 10. The true error e(t N ) also behaves as O(τ 2 ) and hence both time error estimators behave as the true error. In order to check the behavior of time error estimators for variable time step we take the previous example with the following time step ∀n : 0 ≤ n ≤ N τ n = 0.1τ * , if mod(n, 2) = 0, τ * , if mod(n, 2) = 1, (5.6) where τ * is a given fixed value, see Table 2. As in the case of constant time step we have the equivalence between the true error and both estimated errors. We have plotted on Figure τ kηT (t k ) compared to the error e(t n ). Table 3 contains the results for even more non-uniform time step ∀n : 0 ≤ n ≤ N τ n = 0.01τ * , if mod(n, 2) = 0, τ * , if mod(n, 2) = 1, (5.7) on otherwise the same test case. Note that in case when A = 100 and τ * = 0.001 the 5-points error estimator significantly over-predicts the true error, while the 3-point estimator remains very close to it. This effect is consistent with Theorem 3.2. Indeed, the constants the 5-point error estimator may depend on the meshes regularity in time. Our conclusion is that both 3-point and 5-point a posteriori error estimators are reliable for the toy model (5.1), although not asymptotically exact. The effectivity indices range from 1 to around 8 so that the the estimates can be rather pessimistic with respect to the true error. Moreover, Figure 1 suggests that the effectivity of the error estimator can deteriorate in the long time simulations. 
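As a complement to the tables discussed above, the following is a minimal self-contained sketch (in Python, not the code used for the paper) of the toy problem u'' + Au = 0 with u(0) = 1, u'(0) = 0, whose exact solution is u = cos(√A t), discretized by the Newmark scheme with β = 1/4, γ = 1/2 and constant step τ = T/N. The two-step update and the velocity recovery follow the standard averaged-acceleration form; the first step is one common way to start the scheme, whereas the paper uses (2.5) with projected initial data. The reported error is the ODE analogue of the energy quantity (2.11) and should decrease roughly by a factor of four when τ is halved, consistent with the O(τ²) behaviour observed in Table 1.

```python
import numpy as np

def newmark_toy(A=100.0, T=1.0, N=1000):
    """Newmark scheme (beta = 1/4, gamma = 1/2) for u'' + A u = 0,
    u(0) = 1, u'(0) = 0, whose exact solution is u = cos(sqrt(A) t)."""
    tau = T / N
    u = np.zeros(N + 1)
    u[0], v = 1.0, 0.0
    # First step (implicit averaged acceleration; one common way to start):
    u[1] = (u[0] + tau * v - 0.25 * tau**2 * A * u[0]) / (1.0 + 0.25 * tau**2 * A)
    # Two-step form of the scheme, cf. (2.6) with f = 0:
    c = 1.0 / tau**2
    for n in range(1, N):
        u[n + 1] = ((2.0 * c - 0.5 * A) * u[n] - (c + 0.25 * A) * u[n - 1]) / (c + 0.25 * A)
    # Velocity recovery as in (2.9): v^{n+1} = 2 (u^{n+1} - u^n) / tau - v^n.
    for n in range(N):
        v = 2.0 * (u[n + 1] - u[n]) / tau - v
    w = np.sqrt(A)
    # Energy-type error at t = T (ODE analogue of (2.11)):
    return np.sqrt((v + w * np.sin(w * T))**2 + A * (u[N] - np.cos(w * T))**2)

for N in (250, 500, 1000):
    print(N, newmark_toy(N=N))   # error shrinks roughly 4x per halving of tau
```

Against this reference, the 3-point and 5-point indicators of this section can be accumulated along the same time loop from the stored time levels.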
The error estimator for the wave equation on Delaunay meshes

We now report numerical results for the initial-boundary-value problem for the wave equation (2.1), using piecewise linear finite elements in space and the Newmark scheme with non-uniform time steps, and study the behavior of the 3-point time error estimator (3.3) and the 5-point time error estimator (4.11). All the computations are done with the help of FreeFEM++ [16]. In practice, we compute the space estimator (3.2) with I_h denoting the nodal interpolator to piecewise linear functions. The quality of our error estimators in space and time is measured by the corresponding effectivity indices. The exact solution u is a Gaussian function whose center moves from point (0.3, 0.3) at t = 0 to point (0.7, 0.7) at t = 1. The transport velocity 0.8t(1, 1)^T peaks at t = 1. We choose non-uniform time steps τ_n for n = 1, . . . , N − 1. The initial conditions are computed with the orthogonal projections as in (3.7), cf. Remarks 3.4 and 4.7. Unstructured Delaunay meshes in space are used in all the experiments. Numerical results are reported in Table 4. Note that this case is chosen so that a non-uniform time step is required, see Figure 2. Referring to Table 4, we observe that when the initial time step is set so that τ² ∼ O(h), the error is divided by 2 each time h is divided by 2, consistent with e ∼ O(τ² + h). The space error estimator and the two time error estimators behave similarly and thus provide a good representation of the true error. Both effectivity indices tend to a constant value. We therefore conclude that our space and time error estimators are reliable in the regime of non-uniform time steps and Delaunay space meshes. They separate the two sources of the error well and can thus be used for mesh adaptation in space and time. In particular, the 3-point and 5-point time estimators become closer and closer to each other as h and τ tend to 0. Finally, we report in Table 5 the computational times of our simulations using either the 3-point or the 5-point error estimator for the Newmark scheme on uniform meshes in time (with time step τ) and unstructured quasi-uniform Delaunay meshes in space with maximum mesh size h. We have used FreeFEM++ to implement the algorithm and run it on a modern laptop computer with an Intel Core i7 processor and 16 GB of memory. The reported CPU times correspond to the whole computation, including the construction of the mesh, setting up the initial conditions, and factorizing the matrices, which is done only once before entering the time marching loop (we have used the default UMFPACK direct sparse solver). We observe that the advantage of the 5-point estimator over the 3-point one grows as the mesh is refined and reaches up to 20% in our experiments. Some preliminary results for a fully adaptive algorithm, showing the behavior of the estimators from [15] and from this article in more realistic settings, are available in the Ph.D. thesis of the first author [14].
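For readers who want to reproduce a test of this kind, the sketch below (Python/SymPy, not the FreeFEM++ code used for the paper) builds a manufactured Gaussian solution with the stated centre trajectory, moving from (0.3, 0.3) at t = 0 to (0.7, 0.7) at t = 1 with transport velocity 0.8t(1, 1)^T, and derives the corresponding source term f = ∂²u/∂t² − Δu together with the initial data. The Gaussian width σ is an arbitrary choice here, since its exact value is not recoverable from the text.

```python
import sympy as sp

x, y, t = sp.symbols("x y t")
# Centre moving from (0.3, 0.3) at t = 0 to (0.7, 0.7) at t = 1;
# its velocity d/dt (cx, cy) = 0.8 t (1, 1)^T peaks at t = 1.
cx = sp.Rational(3, 10) + sp.Rational(2, 5) * t**2
cy = sp.Rational(3, 10) + sp.Rational(2, 5) * t**2
sigma = sp.Rational(1, 100)                       # assumed width, not given in the text
u = sp.exp(-((x - cx)**2 + (y - cy)**2) / sigma)

f = sp.simplify(sp.diff(u, t, 2) - sp.diff(u, x, 2) - sp.diff(u, y, 2))  # source term
u0 = u.subs(t, 0)                                 # initial displacement
v0 = sp.diff(u, t).subs(t, 0)                     # initial velocity
print(sp.diff(cx, t))                             # 4*t/5, i.e. the stated 0.8 t
```

These symbolic expressions can then be handed to the finite element solver as the data f, u_0, v_0 of problem (2.1).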
v3-fos-license
2019-05-20T13:06:40.545Z
2018-12-12T00:00:00.000
158789611
{ "extfieldsofstudy": [ "Sociology" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/978-3-030-01294-6_5.pdf", "pdf_hash": "6a227c47fe1283901ba352060254fedb60937bc3", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42196", "s2fieldsofstudy": [ "Art" ], "sha1": "84b2bf20c4c5993ab155903def8b68f479c71ad0", "year": 2018 }
pes2o/s2orc
Communicating, Narrating, and Focalizing Minds This chapter affords a broad conceptualization of communicating minds, which is essential for framing transmedial narration. It also suggests a methodical way of analyzing narrators and narratees that are external and internal to narratives. Distinctions are made between actual narrators/narratees and overarching and embedded virtual (represented) narrators/narratees in order to be able to discern both transmedial and media-specific narrative features. Whereas all narratives by definition require actual narrators and narratees, it is sometimes helpful to construe overarching virtual narrators or narratees that are internal to narratives and help in making sense of them. Narratives can also hold embedded virtual narrators and narratees creating stories within stories. Narrators additionally act as focalizers, delimiting the scope of narration in various ways. work that is suitable for using in the setting of all forms of communication by all kinds of media-not only narration and not only media types where the use of language is salient. However, narration will be pinpointed as a special case in order to get us back on the main track. Although my suggested typologies in this chapter naturally resemble earlier categorizations in several ways, they are more flexible, precisely in the sense that they work for all media types. Given that narratives are conceptualized as virtual spheres formed in the perceiver's mind, which means that they are not determined to have any certain characteristics except for those stipulated by the definition of a story, they also allow for a pragmatic approach to the issue of narrators. I suggest that fruitless quarrels regarding whether certain kinds of narrators need to be present in various media types can be avoided by emphasizing the virtual nature of narratives and the modeling nature of the proposed typologies; the distinctions to be made in this chapter correspond to possible ways of construing narratives rather than to definite traits of narratives. I will start by briefly presenting the contours and essential features of this conceptual framework and then discuss parts of it in some detail. This requires reemphasizing some general concepts that I have already introduced in this treatise. In Chap. 2 I distinguished between the intra-and extracommunicational domains and emphasized that they are utterly entwined but nevertheless dissimilar areas in the mind of the perceiver of media products. The point is to mark out a difference between the forming of cognitive import in ongoing communication and what precedes and surrounds it in the form of cognitive import stored in the mind. I call the intracommunicational domain, formed by communicative semiosis, a virtual sphere. Narratives are virtual spheres. Regarding the extracommunicational domain, I have noted that vital parts of it are constituted by perception and interpretation of media products; previous communication is very much part of the background of ongoing communication. Thus, it may be said that the extracommunicational domain, the mental realm that precedes and surrounds the virtual sphere being formed in ongoing communication, consists of two complementary spheres: other virtual spheres (former interpretive results of communication) and what I propose to call the perceived actual sphere. The perceived actual sphere consists of earlier percepts outside of communication and interpretants resulting from semiosis triggered by these percepts. 
Every instance of communication is dependent on the experience of earlier encounters with things and phenomena in the world that have not been communicated by other minds. In summary, the perceived actual sphere is formed in one's mind through semiosis, immediate external perception, and also interoception, proprioception, and mental introspection. CommuniCating and narrating minds In Chap. 2 I also initially described communication in terms of a transfer of cognitive import between at least two minds, the producer's mind and the perceiver's mind, with the aid of an intermediate entity: the media product. After such a communicative transfer, there are mental configurations in the perceiver's mind-a virtual sphere-that to some extent are similar to those in the producer's mind. Acknowledging the presence of at least one producer's and one perceiver's mind in human communication is the starting point of the following distinctions among different kinds of communicating minds; distinctions that are vital for discerning some intricate conceptual structures of communication at large. Before getting into details, I will present an overview of my proposed typologies in the form of two embryonic lists. The first is an inventory of different forms of communicating minds: • A perceived actual communicating mind that is the actual producer of a media product = an actual communicator • A perceived actual communicating mind that is the actual perceiver of a media product = an actual communicatee • An overarching virtual communicating mind that is the producer of overarching communication = an overarching virtual communicator • An overarching virtual communicating mind that is the perceiver of overarching communication = an overarching virtual communicatee • An embedded virtual communicating mind that is the producer of embedded communication = an embedded virtual communicator • An embedded virtual communicating mind that is the perceiver of embedded communication = an embedded virtual communicatee • And so on; multiple layers of communication embedded in embedded communication. The second list is a catalog of narrative minds. It is identical to the first list, except that communication in general is replaced with the special case of narration, meaning communication including stories: • A perceived actual communicating mind that is the actual producer of a narrative media product = an actual narrator • A perceived actual communicating mind that is the actual perceiver of a narrative media product = an actual narratee • An overarching virtual communicating mind that is the producer of overarching narration = an overarching virtual narrator • An overarching virtual communicating mind that is the perceiver of overarching narration = an overarching virtual narratee • An embedded virtual communicating mind that is the producer of embedded narration = an embedded virtual narrator • An embedded virtual communicating mind that is the perceiver of embedded narration = an embedded virtual narratee • And so on; multiple layers of narration embedded in embedded narration. As narration is a transmedial form of communication, the terms 'narrator' and 'narratee', precisely as the terms 'communicator' and 'communicatee', shall be understood to refer to comprehensive communicative concepts useful for disentangling a range of functions and levels in narration in effectively all (not only verbal) media types. 
I will now comment on each of these forms of communicative and narrative minds and explain their interrelations and why I think they are useful for conceptualizing certain aspects of communication in general and narration in particular. A perceived actual communicating mind that is the actual producer of a media product = an actual communicator. If communication is at hand, a producer's mind is, by definition, present, at least initially. The producer's mind is responsible for the creation of a media product that may be perceived by some other mind either directly, as in face-to-face communication, or later, as when one watches an old movie. I suggest that a brief and simple term for this entity may be 'actual communicator'. However, it is more specifically a communicating mind that must be understood to stem, more or less directly, from a perceived actual sphere. Hence, it is perceived to be actual, which means that there will always necessarily remain some epistemological doubts regarding its actuality. In face-to-face communication, the perceiver is close to the producer's mind, in both time and space. Hence, the immediate perception of the activities of the body holding the producer's mind becomes part of the perceived actual sphere. In the case of watching an old movie, the perceiver is normally at a spatiotemporal distance from the producer's mind. Consequently, the mind of the producer (the actual communicator) may well be known to the perceiver only indirectly, perhaps through earlier communication. In other words, the perceived actual communicating mind stems from other virtual spheres (that always rely, to some extent, on perceived actual spheres). In any case, actual communicators are by definition parts of the extracommunicational domain. A media product may be produced by either one actual communicator (like speech) or several actual communicators (like most movies). A perceived actual communicating mind that is the actual producer of a narrative media product = an actual narrator. All that was said above regarding actual communicators can be specified in terms of 'actual narrators'. In brief: if narration is at hand, a mind producing the narrative media product must exist; there is, by definition, at least one actual narrator emanating from the extracommunicational domain. A perceived actual communicating mind that is the actual perceiver of a media product = an actual communicatee. Just as communication (as I define it) requires a producer's mind, it also requires a perceiver's mind. The perceiver's mind perceives the media product produced by the producer's mind and forms a virtual sphere through the mediation and representation of this media product. 'Actual communicatee' is a straightforward term for this entity. Although it may seem a bit strange, the actual communicatee is also, like the actual communicator, a perceived actual communicating mind: the mind that perceives a media product has an awareness and understanding of itself and this self-understanding stems from what precedes and surrounds ongoing communication: the extracommunicational domain. Thus, a mind that perceives and makes sense of a media product does so on the background of having perceived and made sense of, among other things, itself-immediately or mediated through earlier communication. Therefore, an actual communicatee is more precisely a perceived actual communicating mind, again necessarily tinted by epistemological doubts regarding its nature and actuality. 
A media product may be perceived by either one actual communicatee (like someone receiving a nudge indicating which direction she should go) or several actual communicatees (like many people listening to the same talk). A perceived actual communicating mind that is the actual perceiver of a narrative media product = an actual narratee. Again there is no need to repeat what has already been stated about the actual communicatee. What might be termed 'actual narratees' are the same as actual communicatees, except that they are more specifically involved in narration. If narration is at hand, at least one actual narratee perceiving the narrative media product must exist. An overarching virtual communicating mind that is the producer of overarching communication = an overarching virtual communicator. Communicators are minds that make communication possible by producing media products. In virtual spheres, however, communicators are virtual; they are representations of minds forming media products with semiotic qualities. A perceiver of a media product, an actual communicatee, who for various reasons has no knowledge of, access to, or interest in the actual communicator, is likely to form a virtual sphere that includes a construed overarching virtual communicating mind that is the producer of overarching communication-in brief, an 'overarching virtual communicator'-that helps in making the virtual sphere comprehensible. Otherwise, it might be difficult to make sense of media products whose producers are anonymous. Hence, the craving for internal coherence, for gestalt, can be satisfied with the aid of a construed overarching virtual communicator: odd details, vague connections, and apparent inconsistencies may be knitted together through the idea of a virtual communicating mind having certain ideas, peculiarities, purposes, unconscious drives, or ironic inclinations. Overarching virtual communicators may also be needed when perceiving media products formed collectively by several actual communicators and trying to understand the resulting virtual sphere as somehow consistent. By definition, a virtual sphere can have only one overarching virtual communicator. As soon as one thinks in terms of several virtual communicators, they are automatically subordinated to either an overarching virtual communicator or an actual communicator. Whereas at least one actual communicator is needed to bring about communication, the overarching virtual communicator is an optional entity that can be conjured up in the virtual sphere to make sense of it. Although it emerges within the intracommunicational domains, it may well be very similar to communicating minds in the extracommunicational domain, such as the actual communicator. This is because, as noted earlier, all intracommunicational objects, including overarching virtual communicators, are ultimately made up of parts, combinations, or blends of extracommunicational objects. The idea of an overarching virtual communicator accords well with what has long been known in literary theory as "implied author" (Booth 1961) and what in film studies is sometimes referred to as "voice" or "hypothetical filmmaker" (Alber 2010), although different authors construe these latter concepts in different ways. There have also been many philosophical discussions regarding the rather awkward question of the possibly factual existence of this kind of entity in various media (see Diehl 2009). In any case, the concept of overarching virtual communicator is fundamentally transmedial. 
This is because overarching virtual communicators are only indirectly represented by the sensory configurations of the media products, so to speak; they are formed in a later stage of the chains of semiosis and therefore independent of the modality modes of the media products. An overarching virtual communicating mind that is the producer of overarching narration = an overarching virtual narrator. Narration is communication including narratives, and narrators are minds that make this possible by producing narrative media products. An 'overarching virtual narrator'-a brief term for an overarching virtual communicating mind that is the producer of overarching narration-should be understood in analogy with the concept of an overarching virtual communicator, except that an overarching virtual narrator is obviously relevant in the case of narration. Nothing much needs to be added here, except a brief terminological comment. As the term 'narrator' has been used mainly for media types including verbal language, the term 'monstrator' has been suggested for the realm of visual iconic media (Gaudreault 2009(Gaudreault [1988). Although this term stands for a concept that corresponds quite well with the concept of overarching virtual narrator, it would be unfeasible to use different terms for all basic media types that harbor narratives. For that reason, I stick to the term (actual or virtual) 'narrator' and postulate that it should be understood to stand for a transmedial concept. An overarching virtual communicating mind that is the perceiver of overarching communication = an overarching virtual communicatee. While a perceiver of a media product (an actual communicatee) may have no knowledge of, access to, or interest in the actual communicator, an actual communicatee must be supposed to be aware of and have a certain amount of control of herself. Thus, there is no need to involve an overarching virtual communicating mind that perceives the overarching communication-an 'overarching virtual communicatee'-because the actual communicatee is out of reach. However, an overarching virtual communicatee may, just like an overarching virtual communicator, be helpful for making the virtual sphere comprehensible. In cases where the actual communicatee has a sense of not being an adequate perceiver at all, or when she believes that only parts of what is being communicated is graspable, it may be useful or even necessary to construe a virtual sphere including an overarching virtual communicatee. It is about construing an ideal perceiver's mind that might be able to grasp the entirety in a better way than the actual communicatee and hence achieve fuller understanding and better coherence (the concept of overarching virtual communicatee is a transmedial variation of what is known in literary theory as "implied reader" [Booth 1961]). In other words: the overarching virtual communicatee is the type of actual communicatees that is best suited for perceiving the media product. This ideal type of communicatee is something that may emerge within the virtual sphere as a result of the thought activity of the actual communicatee. Sometimes it is superfluous. As the concept of overarching virtual communicator, the concept of overarching virtual communicatee is profoundly transmedial and does not rely on the modality modes of the media products. An overarching virtual communicating mind that is the perceiver of overarching narration = an overarching virtual narratee. 
The 'overarching virtual narratee' is a variation of the overarching virtual communicatee, of which nothing more must be said except that the exploration of narratees at different levels in literature was pioneered by Prince (1982: 16-26) in a way that has inspired this account. An embedded virtual communicating mind that is the producer of embedded communication = an embedded virtual communicator. It is common for communication to be about communication. When talking to each other, one may mention other people who have said things or communicated them in other ways. Still images may depict acts of communication such as speaking, writing, drawing, or gesticulating. In all of these cases, one infers that communicating minds are involved in what is being represented. When seeing a still image of a writing person, for instance, one considers that the represented person must have a mind-a virtual mind, of course-that directs the writing performed by the directly represented body. In a case like this, the virtual sphere being formed by the perception of the static, visual, and iconic media product includes an embedded virtual communicating mind that is the producer of embedded communication; or, more succinctly, an 'embedded virtual communicator'. Virtual communicators like these are embedded, not overarching, because they only constitute smaller or larger parts of the virtual sphere, in contrast to overarching virtual communicators that have bearing on the totality of the virtual sphere. Although embedded virtual communicators, like overarch-ing virtual communicators, emerge within the intracommunicational infinite layers of embedded communication: 'Sarah said that she had read in a book that scientists claim that people who eat much sugar report that ….' Also, media types involving visual icons (whether fully spatiotemporal or three-or two-dimensionally spatial) have great potential for representing several layers of embedded communication. We have all seen images of people creating images of people creating images of people creating images ad infinitum. However, it is not as easy for these iconic media types to represent several layers of embedded narration if they are not temporal. Moving images may readily represent events in succession that include narrative events, such as when we see a story about someone going to the cinema, buying tickets, and then watching a movie about someone going to her desk, sharpening a pen, thinking a while, and then writing a letter about a friend who has traveled to Indonesia and fallen in love-and so forth. Still images, on the other hand, are less adequate for representing temporal interrelations being represented within other temporal interrelations, although it is certainly not impossible, especially if the perceiver's background knowledge is explored. In the case of media types that are recognized as potentially narrative only in a more elementary mannersuch as a meal where the interrelated events consist of tastes and taste combinations that are developed and contrasted-the idea of representing embedded narratives offers even more resistance due to the limited amount of complex cognitive functions connected to the gustatory sense. As a rule of thumb, I propose that embedded narration is even more media-sensitive than narration as such, and that the deeper down in embedded narrative layers one goes, the less transmedial it all becomes. 
FoCalizing minds For a long time, narratology has, for good reasons, scrutinized concepts such as perspective and point of view and in numerous ways related them to narration and narrators. The related concept of focalization was first investigated by Gérard Genette in written literature (1980[1972: 186) and was later explored by, among many others, François Jost in film (2004), Kai Mikkonen in comics (2011), andJonathan Hensher in still images (2016). There is an extensive literature on the entangled issues of perspective, point of view, focalization, and their interrelations (for a recent overview with a transmedial perspective, see Thon 2016: 223-264). Focalization is variously conceptualized in terms of agency or functions that somehow delimit narratives and parts of narratives: not everything is seen, heard, or conveyed in a certain narrative. The scope of narratives can hence be understood to be restricted by one or several focalizers. Although originating from literary theory, focalization is actually a profoundly transmedial concept that, I believe, must be tightly connected to the concept of communicating and narrating minds. It is often noted that the term 'focalization' has certain visual connotations. It is "based on the visual metaphor of a lens through which one can take things, characters, actions in the storyworld into 'focus'. It seems that a media-sensitive narratology has to revise this concept to accommodate all the other sense perceptions, too" (Mildorf and Kinzel 2016a: 14). While it is vital to revise the concept so that it clearly covers all forms of sense perceptions, this does not necessarily include revising the term. Several new terms have actually been coined for focalization of other sense perceptions than the visual, but I think it is untenable to use different terms for each different sense being involved, just as it is untenable to use different terms for narration and narrators in different media types. Such a practice would make transmedial terminology acutely overloaded and transmedial research unnecessarily cumbersome. As the term 'focus' is far from exclusive to the visual domain-it is broadly used for denoting points of convergence, attention, or action in a wide array of sensorial and cognitive domains-I find the term 'focalization' useful for transmedial narratology. Therefore, I prefer to continue talking about focus, focalizing, and focalization in all media types instead of introducing a broad range of new media-specific terms. However, the concept of focalization is also highly useful outside the area of narration. For that reason, I define focalization as a main feature of communication at large. All virtual spheres are demarcated in various ways. The notion of actual and virtual communicators always communicating everything they perceive and know is clearly absurd, so it must be concluded that communicators of all kinds are generally also focalizers to some extent (pace Genette and despite the broad range of knowledge of so-called omniscient narrators). In order to cover the complex field of possible restrictions of what is being communicated, it is also vital to emphasize that focalization concerns not only restrictions on the communication of all kinds of sensory perceptions but also restrictions on all kinds of knowledge, thoughts, ideas, and values. 
As the awareness of sensory perception and cognition takes place in minds, it is minds that have the ability to select what is to be communicated; therefore, focalization must be performed by focalizing minds. Focalization is an essential and unavoidable aspect of communication in general. As communication on all levels is entirely dependent on minds, from the actual communicators and communicatees to embedded virtual communicators and communicatees, focalization is located in minds on the different levels that have been described in this chapter. The actual communicator determines-consciously or unconsciously-certain frames of what to be communicated. When construing the virtual sphere, the actual communicatee furthermore interprets the communicated in terms of virtual narrators focalizing their sense impressions and cognition in various ways. Therefore, focalization regulates the communicated, both in whole and in detail. From the broadest perspective, the limits of a mind's perceptions and cognitions also constitute a form of focalization: one can never communicate what is outside one's scope, so the presence of a certain virtual mind may result in the communicated being focalized in a way that would not be the case if another virtual mind, harboring other perceptions and cognitions, had been present. By the same token, actual minds can, naturally, only (try to) communicate what they have perceived and what they know or believe. From a narrower perspective, focalization is rather about choosing-for practical or more calculated reasons-to delimit the scope. As communicating minds may choose to pay attention to what they know about other minds, one of many ways of focalizing is to delimit one's scope to what one assumes to be the perceptions, knowledge, and ideas of other minds. Clearly, different minds within the same virtual sphere may focalize in ways that create tensions or even conflicts-clashes that may or may not be satisfactorily resolved by an overarching virtual narrator. Overall, I believe that a clear notion of different levels of communicating, narrating, and focalizing minds is highly useful for understanding how communication at large and narration in particular is structured. It is essential that the conceptual framework is thoroughly transmedial, while at the same time pointing to the limits of transmediality. In the following chapter, where the attention will be on represented events, it should be borne in mind that events may appear both in narratives that actual or overarching virtual narrators are responsible for, and in narratives produced by embedded virtual narrators.
v3-fos-license
2016-01-11T18:29:14.669Z
2012-02-03T00:00:00.000
18187781
{ "extfieldsofstudy": [ "Engineering" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://www.intechopen.com/citation-pdf-url/27400", "pdf_hash": "ae979bba8bc73ffd3fd432cde916b57bbb549a35", "pdf_src": "Grobid", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42197", "s2fieldsofstudy": [ "Engineering", "Medicine" ], "sha1": "116059509cb1acb1886fb92b7f2f2eac2b99c7e3", "year": 2012 }
pes2o/s2orc
Modular Robotic Approach in Surgical Applications – Wireless Robotic Modules and a Reconfigurable Master Device for Endoluminal Surgery –

Introduction

The trend in surgical robots is moving from traditional master-slave robots to miniaturized devices for screening and simple surgical operations (Cuschieri, A. 2005). For example, capsule endoscopy (Moglia, A. 2007) has been conducted worldwide over the last five years with successful outcomes. To enhance the dexterity of commercial endoscopic capsules, capsule locomotion has been investigated using legged capsules (Quirini, M. 2008) and capsules driven by external magnetic fields (Sendoh, M. 2003; Ciuti, G. 2010; Carpi, F. 2009). Endoscopic capsules with miniaturized arms have also been studied to determine their potential for use in biopsy (Park, S.-K. 2008). Furthermore, new surgical procedures known as natural orifice transluminal endoscopic surgery (NOTES) and Single Port Access surgery are accelerating the development of innovative endoscopic devices (Giday, S. 2006; Bardaro, S.J. 2006). These advanced surgical devices show potential for the future development of minimally invasive and endoluminal surgery. However, the implementable functions in such devices are generally limited owing to space constraints. Moreover, advanced capsules or endoscopes with miniaturized arms have rather poor dexterity because the diameter of such arms must be small (i.e. a few millimeters), which results in a small force being generated at the tip. A modular surgical robotic system known as the ARES (Assembling Reconfigurable Endoluminal Surgical system) system has been proposed based on the aforementioned motivations (Harada, K. 2009; Harada, K.
2010).The ARES system is designed for screening and interventions in the gastrointestinal (GI) tracts to overcome the intrinsic limitations of single-capsules or endoscopic devices.In the proposed system, miniaturized robotic modules are ingested and assembled in the stomach cavity.The assembled robot can then change its configuration according to the target location and task.Modular surgical robots are interesting owing to their potential for application as selfreconfigurable modular robots and innovative surgical robots.Many self-reconfigurable modular robots have been investigated worldwide (Yim, M. 2007;Murata, S. 2007) with the goal of developing systems that are robust and adaptive to the working environment.Most of these robots have been designed for autonomous exploration or surveillance tasks in unstructured environments; therefore, there are no strict constraints regarding the number of modules, modular size or working space.Because the ARES has specific applications and is used in the GI tract environment, it raises many issues that have not been discussed in depth in the modular robotic field.Modular miniaturization down to the ingestible size is one of the most challenging goals.In addition, a new interface must be developed so that surgeons can intuitively maneuver the modular surgical robot.The purpose of this paper is to clarify the advantages of the modular approach in surgical applications, as well as to present proof of concept of the modular robotic surgical system.The current paper is organized as follows: Section 2 describes the design of the ARES system.Section 3 details the design and prototyping of robotic modules, including the experimental results.Section 4 describes a reconfigurable master device designed for the robotic modules, and its preliminary evaluation is reported. Clinical indications and proposed procedures The clinical target of the ARES system is the entire GI tract, i.e., the esophagus, stomach, small intestine, and colon.Among GI tract pathologies that can benefit from modular robotic features, biopsy for detection of early cancer in the upper side of the stomach (the fundus and the cardia) was selected as the surgical task to be focused on as a first step.Stomach cancer is the second leading cause of cancer-related deaths worldwide (World Health Organization 2006), and stomach cancer occurring in the upper side of the stomach has the worst outcome in terms of the 5-year survival ratio (Pesic, M. 2004).Thus, early diagnosis of cancer utilizing an advanced endoluminal device may lead to better prognosis.The stomach has a large volume (about 1400 ml) when distended, which provides working space to assemble the ingested robotic modules and change the topology of the assembled robot inside (i.e.reconfiguration).Each robotic module should be small enough to be swallowed and pass through the whole GI tract.Because the size of the commercial endoscopic capsules (11 mm in diameter and 26 mm in length (Moglia, A. 2007)) has already been shown to be acceptable for the majority of patients as an ingestible device, each module needs to be miniaturized to this size before being applied to clinical cases.The surgical procedures proposed for the ARES system (Harada, K. 2010) are shown in Fig. 
1.Prior to the surgical procedure, the patient drinks a liquid to distend the stomach to a volume of about 1400 ml.Next, the patient ingests 10-15 robotic modules that complete the assembly process before the liquid naturally drains away from the stomach in 10-20 minutes.The number of the modules swallowed depends on the target tasks and is determined in advance based on the pre-diagnosis.Magnetic self-assembly in the liquid using permanent magnets was selected for this study since its feasibility has already been demonstrated (Nagy, Z. 2007).Soon after the assembly, the robot configures its topology according to preoperative planning by repeated docking and undocking of the modules (the undocking mechanism and electrical contacts between modules are necessary for reconfiguration, but they have not been implemented in the presented design).The robotic modules are controlled via wireless bidirectional communication with a master device operated by the surgeon, while the progress in procedure is observed using intraoperative imaging devices such as fluoroscopy and cameras mounted on the modules.After the surgical tasks are completed, the robot reconfigures itself to a snake-like shape to pass through the pyloric sphincter and travel to examine the small intestine and the colon, or it completely disassembles itself into individual modules so that it can be brought out without external aid.One of the modules can bring a biopsy tissue sample out of the body for detailed examination after the procedure is complete. Advantages of the modular approach in surgical applications The modular approach has great potential to provide many advantages to surgical applications.These advantages are summarized below using the ARES system as shown in Fig. 2. The numbering of the items in Fig. 2 is correlated with the following numbering.i.The topology of the modular surgical robot can be customized for each patient according to the location of the disease and the size of the body cavity in which the modular robot is deployed.A set of functional modules such as cameras, needles and forceps can be selected for each patient based on the necessary diagnosis and surgical operation.ii.The modular approach facilitates delivery of more components inside a body cavity that has small entrance/exit hole(s).As there are many cavities in the human body, the modular approach would benefit treatment in such difficult-to-reach places.Because several functional modules can be used simultaneously, the modular robot may perform rather complicated tasks that a single endoscopic capsule or an endoscopic device is not capable of conducting.For example, if more than two camera modules are employed, the surgeon can conduct tasks while observing the site from different directions.iii.Surgical tools of relatively large diameter can be brought into the body cavity. Conventionally, small surgical forceps that can pass through an endoscopic channel of a few millimeters have been used for endoluminal surgery.Conversely, surgical devices that have the same diameter as an endoscope can be used in the modular surgical system.Consequently, the force generated at the tip of the devices would be rather large, and the performance of the functional devices would be high.iv.The surgical system is more adaptive to the given environment and robust to failures. 
Accordingly, it is not necessary for the surgical robot to equip all modules that might be necessary in the body because the surgeons can decide whether to add modules with different functionalities, even during the surgical operation.After use, the modules can be detached and discarded if they are not necessary in the following procedures. Similarly, a module can be easily replaced with a new one in case of malfunction.As these advantages suggest, a modular surgical robot would be capable of achieving rather complicated tasks that have not been performed using existing endoluminal surgical devices.These advantages are valid for modular robots that work in any body cavity with a small entrance and exit.Moreover, this approach may be introduced to NOTES or Single Port Access surgery, in which surgical devices must reach the abdominal cavity through a small incision.In Section 3, several robotic modules are proposed, and the performance of these modules is reported to show the feasibility of the proposed surgical system. Fig. 2. Advantages of the modular approach in surgical applications www.intechopen.com Design and prototyping of the robotic modules Figure 3 shows the design and prototypes of the Structural Module and the Biopsy Module (Harada, K. 2009, Harada, K. 2010).The Structural Module has two degrees of freedom (±90° of bending and 360° of rotation).The Structural Module contains a Li-Po battery (20 mAh, LP2-FR, Plantraco Ltd., Canada), two brushless DC geared motors that are 4 mm in diameter and 17.4 mm in length (SBL04-0829PG337, Namiki Precision Jewel Co. Ltd., Japan) and a custom-made motor control board capable of wireless control (Susilo, E. 2009).The stall torque of the selected geared motor is 10.6 mNm and the speed is 112 rpm when controlled by the developed controller.The bending mechanism is composed of a worm and a spur gear (9:1 gear reduction), whereas the rotation mechanism is composed of two spur gears (no gear reduction).All gears (DIDEL SA, Switzerland) were made of nylon, and they were machined to be implemented in the small space of the capsule.Two permanent magnets (Q-05-1.5-01-N,Webcraft GMbH, Switzerland) were attached at each end of the module to help with self-alignment and modular docking.The module is 15.4 mm in diameter and 36.5 mm in length; it requires further miniaturization before clinical application.The casing of the prototype was made of acrylic plastic and fabricated by 3D rapid prototyping (Invison XT 3-D Modeler, 3D systems, Inc., USA).The total weight is 5.6 g.Assuming that the module would weigh 10 g with the metal chassis and gears, the maximum torque required for lifting two connected modules is 5.4 mNm for both the bending DOF and rotation DOF.Assuming that the gear transmission efficiency for the bending mechanism is 30%, the stall torque for the bending DOF is 28.6 mNm.On the other hand, the stall torque for the rotation DOF is 8.5 mNm when the transmission efficiency for the rotation mechanism is 80%.The torque was designed to have sufficient force for surgical operation, but the transmission efficiency of the miniaturized plastic gears was much smaller than the theoretical value as explained in the next subsection. 
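The torque figures quoted above follow directly from the stated motor and gear-train numbers, so they are easy to cross-check. The short Python sketch below only reproduces that arithmetic; the motor stall torque, gear ratios, assumed transmission efficiencies, and the 5.4 mNm lifting requirement are all taken from the text, and nothing here is new measured data.

```python
# Sketch: cross-checking the Structural Module torque budget quoted in the text.
# All inputs are the authors' stated values and assumptions, not new measurements.

MOTOR_STALL_TORQUE = 10.6   # mNm, geared SBL04-0829PG337 output
BENDING_RATIO, BENDING_EFF = 9, 0.30    # worm + spur gear, assumed efficiency
ROTATION_RATIO, ROTATION_EFF = 1, 0.80  # two spur gears, assumed efficiency
REQUIRED_TORQUE = 5.4       # mNm, quoted requirement to lift two 10 g modules

def joint_stall_torque(ratio: float, efficiency: float) -> float:
    """Torque available at the joint after the gear train (mNm)."""
    return MOTOR_STALL_TORQUE * ratio * efficiency

bending = joint_stall_torque(BENDING_RATIO, BENDING_EFF)      # ~28.6 mNm
rotation = joint_stall_torque(ROTATION_RATIO, ROTATION_EFF)   # ~8.5 mNm

print(f"bending DOF : {bending:.1f} mNm, margin x{bending / REQUIRED_TORQUE:.1f}")
print(f"rotation DOF: {rotation:.1f} mNm, margin x{rotation / REQUIRED_TORQUE:.1f}")
```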
• Controller The aforementioned brushless DC motor came with a dedicated motor driving board (SSD04, Namiki Precision Jewel Co., Ltd., 19.6 mm × 34.4 mm × 3 mm).This board only allows driving of one motor; hence, two boards are required for a robotic module with 2 DOFs.Because there was not sufficient space for the boards in the robotic module, a custom made high density control board was designed and developed in-house.This control board consisted of one CC2430 microcontroller (Texas Instrument, USA) as the main wireless controller and three sets of A3901 dual bridge motor drivers (Allegro MicroSystem, Inc., USA).The fabricated board is 9.6 mm in diameter, 2.5 mm in thickness and 0.37 g in weight, which is compatible with swallowing.The A3901 motor driver chip was originally intended for a brushed DC motor, but a software commutation algorithm was implemented to control a brushless DC motor as well.An IEEE 802.15.4 wireless personal area network (WPAN) was introduced as an embedded feature (radio peripheral) of the microcontroller.The implemented algorithm enables control of the selected brushless DC motor in Back Electro-Motive Force (BEMF) feedback mode or slow speed stepping mode.When the stepping mode is selected, the motor can be driven with a resolution of 0.178º.For the modular approach, each control board shall be equipped with a wired locating system for intra-modular communication in addition to the wireless communication.Aside from wireless networking, the wired locating system, which is not implemented in the presented design, would be useful for identification of the sequence of the docked modules in real time.The wired locating system is composed of three lines, one for serial multidrop communication, one for a peripheral locator and one as a ground reference.When the modules are firmly connected, the intra-modular communication can be switched from wireless to wired to save power while maintaining the predefined network addresses.When one module is detached intentionally or by mistake, it will switch back to wireless mode. • Battery The battery capacity carried by each module may differ from one to another (e.g. from 10 mAh to 50 mAh) depending on the available space inside the module.For the current design, a 20 mAh Li-Po battery was selected.Continuous driving of the selected motor on its maximum speed using a 20 mAh Li-Po battery was found to last up to 17 minutes.A module does not withdraw power continuously because the actuation mechanisms can maintain their position when there is no current to the motor owing to its high gear reduction (337:1).A module consumes power during actuation, but its power use is very low in stand-by mode. • Biopsy Module The Biopsy Module is a Functional Module that can be used to conduct diagnosis.The grasping mechanism has a worm and two spur gears, which allows wide opening of the grasping parts.The grasping parts can be hidden in the casing at the maximum opening to prevent tissue damage during ingestion.The motor and other components used for the Biopsy Module are the same as for the Structural Module.The brushless DC geared motors (SBL04-0829PG337, Namiki Precision Jewel Co. 
Ltd., Japan), the control board, a worm gear and two spur gears (9:1 gear reduction) were implemented in the Biopsy Module.A permanent magnet (Q-05-1.5-01-N,Webcraft GMbH, Switzerland) was placed at one side to be connected to another Structural Module.The Biopsy Module can generate a force of 7.1 N at its tip, and can also open the grasping parts to a width of 19 mm with an opening angle of 90 degrees.These values are much larger than those of conventional endoscopic forceps, which are 2-4 mm in diameter.As a demonstration, Figure 3 shows the Biopsy Module holding a coin weighing 7.5 g.In conventional endoscopy, forceps are inserted through endoscopic channels that are parallel to the direction of the endoscopic view, which often results in the forceps hiding the target.Conversely, the Biopsy Module can be positioned at any angle relative to the endoscopic view owing to the modular approach, thereby allowing adequate approach to the target. Performance of the Structural Module The mechanical performance of the bending and rotation DOFs of the Structural Module was measured in preliminary tests (Menciassi, A. 2010), and the results are summarized in Fig. 4. The bending angle was varied by up to ± 90° in steps of 10° three times in succession.The measured range of the bending angle was -86.0° to +76.3°, and the maximum error was 15.8°.The rotation angle was increased from 0° to 180° in steps of 45° three times in succession, and the measured range of the rotational angle was between 0° and 166.7° with a maximum error of 13.3°.The difference between the driven angle and the measured angle was due to backlash of the gears and the lack of precision and stiffness of the casing made by 3D rapid prototyping.Regardless of the errors and the hysteresis, the repeatability was sufficient for the intended application for both DOFs.These results indicate that the precision of each motion can be improved by changing the materials of the gears and the casing.Since the motor can be controlled with a resolution of 0.178°, very precise surgical tasks can be achieved using different manufacturing processes.In addition to the angle measurements, both bending and rotation torque were measured.The torque was measured by connecting cylindrical parts with permanent magnets at both ends until the bending/rotational motion stopped.The length and weight of each cylinder was designed in advance, and several types of cylinders were prepared.The measured bending torque was 6.5 mNm and the rotation torque was 2.2 mNm.The figure also shows one module lifting up two modules attached to its bending mechanism as a demonstration. The performance in terms of precision and generated torque, which are very important for reconfiguration and surgical tasks, was sufficient; however, the precision was limited owing to the aforementioned fabrication problems.The thin walls of the casing made of acrylic plastic were easily deformed, which caused friction between the parts.The casing made of metal or PEEK and tailor-made metal gears with high precision will improve the mechanism rigidity and performance, thus producing the optimal stability. 
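Since the measured torques (6.5 mNm bending, 2.2 mNm rotation) fall well below the design values, the same relation can be inverted to estimate the transmission efficiency actually achieved by the plastic gear trains. The sketch below does this, and also checks the quoted 0.178° output resolution under the assumption of six commutation steps per motor revolution combined with the 337:1 gearhead; that step count is our assumption, not a figure given in the text.

```python
# Sketch: transmission efficiency implied by the measured joint torques, plus a
# consistency check of the 0.178 deg stepping resolution quoted for the motor.

MOTOR_STALL_TORQUE = 10.6  # mNm
measured = {"bending": (6.5, 9, 0.30), "rotation": (2.2, 1, 0.80)}  # (mNm, ratio, assumed eff.)

for dof, (torque, ratio, assumed_eff) in measured.items():
    implied_eff = torque / (MOTOR_STALL_TORQUE * ratio)
    print(f"{dof}: implied efficiency {implied_eff:.0%} (design assumption {assumed_eff:.0%})")

# Output resolution, assuming 6 commutation steps per motor revolution (assumption)
# and the 337:1 gearhead stated for the SBL04 geared motor.
STEPS_PER_REV, GEAR_RATIO = 6, 337
print(f"resolution: {360.0 / (STEPS_PER_REV * GEAR_RATIO):.3f} deg/step (text: 0.178)")
```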
Possible designs of robotic modules Figure 5 shows various designs of robotic modules that can be implemented in the modular surgical robot.The modules can be categorized into three types: structural modules, functional modules, and other modules.Structural modules are used to configure a robotic topology.Functional modules are used for diagnosis or intervention, while other modules can be added to enhance the performance and robustness of the robotic system.Obviously, an assembled robot made of different types of modules (i.e. a robot with high heterogeneity) may provide high dexterity, but the self-assembly in the stomach and control of the modules would become more difficult.To optimize the level of heterogeneity, self-assembly of the robotic modules must be developed so that the reconfiguration of the robotic topology following the self-assembly can be planned in advance.Employing pre-assembled modular arms or tethered modules can be another option to facilitate assembly in a body cavity; however, this would require further anesthesia, and it would hinder the promotion of massive screening. Reconfigurable master device 4.1 Design and prototyping of the reconfigurable master device One main advantage of using a modular approach in surgical applications is the adaptivity to the given environment as mentioned in Section 2.2.Wherever the robotic platform is deployed in the GI tract, the robotic topology can be changed based on preoperative plans or the in-situ situation to fit in any particular environment.This dynamic changing and reshaping of the robotic topology should be reflected on the user interface.Since it is possible for a robotic topology to have redundant DOFs, the master device for the modular surgical system needs to be able to handle the redundancy that is inherent to modular robots.Based on these considerations, we propose a reconfigurable master device that resembles the robotic platform (Fig. 6).When the assembled robot changes its topology, the master device follows the same configuration.The robotic module shown in Fig. 
6 has a diameter of 15.4 mm, while a module of the reconfigurable master device has a diameter of 30 mm. The master modules can be easily assembled or disassembled using set screws, and it takes only a few seconds to connect one module to another. Each robotic module is equipped with two motors as described in the previous section; thus, each master module is equipped with two potentiometers (TW1103KA, Tyco Electronics) that are used as angular position sensors. Calculating the angular position of each joint of the reconfigurable master device is quite straightforward. A common reference voltage is sent from a data acquisition card to all potentiometers, after which the angular position can be calculated from the feedback readings. Owing to the identical configuration, the angle of each joint of the robotic modules can be easily determined, even if the topology has redundancy. The advantages of the proposed master device include intuitive manipulation. For example, the rotational movement of a structural module used to twist the arm is limited to ± 180°, and the master module also has this limitation. This helps surgeons intuitively understand the range of the motion and the reachable working space of the modules. Using a conventional master manipulator or an external console, it is possible that the slave manipulator cannot move owing to its mechanical constraints, while the master manipulator can still move. However, using the proposed master device, the surgeon can intuitively understand the mechanical constraints by manipulating the master device during practice/training. Furthermore, the position of the master arm can indicate where the robotic modules are, even if they are outside of the camera module's view. These characteristics increase the safety of the operation. This feature is important because the entire robotic system is placed inside the body. In other surgical robotic systems, the position or shape of the robotic arms is not important as they are placed outside of the body and can be seen during operation. Unlike other master devices, it is also possible for two or more surgeons to move the reconfigurable master device together at the same time using multi arms with redundant DOFs. Evaluation A simulation-based evaluation setup was selected to simplify the preliminary evaluation of the feasibility of the reconfigurable master device. The authors previously developed the Slave Simulator to evaluate workloads for a master-slave surgical robot (Kawamura, K.
2006).The Slave Simulator can show the motion of the slave robot in CG (Computer Graphics), while the master input device is controlled by an operator.Because the simulator can virtually change the parameters of the slave robot or its control, it is easy to evaluate the parameters as well as the operability of the master device.This Slave Simulator was appropriately modified for the ARES system.The modified Slave Simulator presents the CG models of the robotic modules to the operator.The dimension and DOFs of each module in CG were determined based on the design of the robotic modules.The angle of each joint is given by the signal from the potentiometers of the reconfigurable master device, and the slave modules in CG move in real time to reproduce the configuration of the master device.This Slave Simulator is capable of altering joint positions and the number of joints of the slave arms in CG so that the workspace of the reconfigurable master device can be reproduced in a virtual environment for several types of topologies.The simulator is composed of a 3D viewer that uses OpenGL and a physical calculation function.This function was implemented to detect a collision between the CG modules and an object placed in the workspace. To simplify the experiments to evaluate the feasibility of the proposed master device and usefulness of the developed simulator, only one arm of the reconfigurable master device was used.Three topologies that consist of one Biopsy Module and one or two Structural Module(s) were selected as illustrated in Fig. 7. Topology I consists of a Structural Module and a Biopsy Module, and the base is fixed so that the arm appears with an angle of 45 degrees.One Structural Module is added to Topology I to configure Topology II, and Topology III is identical to Topology II, but placed at 0 degrees.Both Topology II and Topology III have redundant DOFs.The projection of the workspace of each arm and the shared workspace are depicted in Fig. 8.A target object on which the arm works in the experiments must be placed in this shared area, which makes it easy to compare topologies. A bar was selected as the target object instead of a sphere because the height of the collision point is different for each topology when the object appears in the same position in the 2D plane. The experiment was designed so that a bar appears at random in the shared workspace.The bar represents a target area at which the Biopsy Module needs to collect tissue samples, and this experiment is a simple example to select one topology among three choices given that the arm can reach the target.We assumed that this choice may vary depending on the user, and this experiment was designed to determine if the reconfigurability of the master device, i.e. customization of the robot, provides advantages and improves performance.During the experiment, the operator of the reconfigurable master device could hear a beeping sound when the distal end of the arm (i.e. the grasping part of the biopsy module) touched the bar.The task designed for the experiments was to move the arm of the reconfigurable master device as quickly as possible, touch the bar in CG, and then maintain its position for three seconds.The plane in which the bar stands is shown in grids (Fig. 9), and the operator obtains 3D perception by observing these grids.The plane with the grids is the same for all topologies.The angle of the view was set so that the view is similar to that from the camera module in Fig. 
6.Five subjects (a-e) participated in the experiments, none of whom were surgeons.Each subject was asked to freely move the master device to learn how to operate it; however, this practice was allowed for one minute before starting the experiments.Each subject started from Topology I, then tested Topology II and finally Topology III.The time needed to touch the bar and maintain it for three seconds was measured.This procedure was repeated ten times for each topology with a randomized position of the bar.During the procedure, the bar appeared at random; however, it always appeared in the shared workspace to ensure that the arm could reach it.After finishing the experiment, the subjects were asked to fill in a questionnaire (described below) for each topology.The subjects were also asked which topology they preferred.A NASA TLX questionnaire (NASA TLX (website)) was used to objectively and quantitatively evaluate the workload that the subjects felt during the experiments.This method has versatile uses, and we selected this method also because it was used to evaluate the workload in a tele-surgery environment (Kawamura, K. 2006).This method evaluates Metal Demand, Physical Demand, Temporal Demand, Performance, Effort and Frustration, and gives a score that represents the overall workload that the subject felt during the task. Results The time spent conducting the given task, the workload score evaluated using the NASA TLX questionnaire and the preference of the topology determined by the subjects are summarized in Table 1.For each item, a smaller value indicates a more favorable evaluation by the subject.Considering the results of the time and workload score, Topology II was most difficult.The difference between Topology I and III was interesting.Two of the subjects ((b) and (c)) preferred Topology I, which did not have a redundant DOF.Conversely, three of the subjects ((a), (d) and (e)) preferred Topology III because they could select the path to reach the target owing to the redundant DOF.The average scores of the NASA TLX parameters shown in Fig. 10 suggest that the Physical Demand workload was high for Topology I, while the Effort workload was high for Topology III. The two subjects who preferred Topology I rather than Topology III claimed that it was not easy to determine where the bar was located when Topology III was used owing to the lack of 3D perception.In addition, they reported that the bar seemed to be placed far from the base.However, the bar appeared randomly, but in the same area; therefore, the bar that appeared in the experiment that employed Topology III was not placed farther from the base when compared to the experiments that used Topology I or Topology II.Accordingly, these two subjects may have had difficulty obtaining 3D perception from the gridded plane. In Topology III, the arm was partially out of view in the initial position; thus, the operator needed to obtain 3D perception by seeing the grids.It is often said that most surgeons can obtain 3D perception even if they use a 2D camera, and our preliminary experimental results imply that this ability might differ by individual.Some people appear to obtain 3D perception primarily by seeing the relative positions between the target and the tool they move. Redundant DOFs may also be preferred by operators with better 3D perception capability. 
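For reference, the single workload score reported for the NASA TLX questionnaire is normally obtained as a weighted average of the six subscale ratings, with the weights taken from 15 pairwise comparisons. The sketch below illustrates that standard computation; the ratings and weights are invented placeholders, not values from the experiments above.

```python
# Sketch: standard weighted NASA TLX overall workload score.
# Subscale ratings are 0-100; weights come from 15 pairwise comparisons.
# All numbers below are illustrative placeholders, not experimental data.

ratings = {"mental": 55, "physical": 40, "temporal": 35,
           "performance": 30, "effort": 60, "frustration": 25}
weights = {"mental": 4, "physical": 2, "temporal": 2,
           "performance": 3, "effort": 3, "frustration": 1}

assert sum(weights.values()) == 15, "pairwise-comparison tallies must sum to 15"

overall = sum(ratings[k] * weights[k] for k in ratings) / 15.0
print(f"overall weighted workload: {overall:.1f} / 100")
```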
Although the experiments were preliminary, there must be other factors that accounted for the preference of the user. Indeed, it is likely that the preferable topology varies depending on the user, and the developed simulator would be useful to evaluate these variations. The proposed reconfigurable master device will enable individual surgeons to customize the robot and interface as they prefer. Conclusion A modular robot was proposed for endoluminal surgery. The design, prototyping and evaluation of the modules were reported. Although there are some issues related to the fabrication problems, the results of the performance tests show the feasibility of the modular surgical system. A reconfigurable master device has also been proposed, and its feasibility was evaluated by simulation-based experiments. The preliminary results showed that the preferred topology may vary depending on the user. Moreover, the reconfigurable master device would enable each surgeon to customize the surgical system according to his/her own preferences. Development of the robotic modules and the reconfigurable master device provided proof of concept of the modular robotic system for endoluminal surgery, suggesting that the modular approach has great potential for surgical applications.
Fig. 1. Proposed procedures for the ARES system
Fig. 3. Design and prototypes of the structural module (left) and the biopsy module (right)
Fig. 5. Various designs of the robotic modules
Fig. 7. Three topologies used in the experiments
Fig. 8. Workspace of each topology and the shared workspace
Fig. 10. NASA TLX parameters for three topologies
Table 1. Experimental results
v3-fos-license
2020-07-26T13:05:23.848Z
2020-07-22T00:00:00.000
220746658
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1660-3397/18/8/377/pdf", "pdf_hash": "cabc71c03cb0a31156d5378d2c0ac865dced2211", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42198", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "sha1": "5a9c8d8de43510801af9a2725f8c0b418cd889ba", "year": 2020 }
pes2o/s2orc
Topical Application of Phlorotannins from Brown Seaweed Mitigates Radiation Dermatitis in a Mouse Model Radiation dermatitis (RD) is one of the most common side effects of radiotherapy; its symptoms progress from erythema to dry and moist desquamation, leading to the deterioration of the patients’ quality of life. Active metabolites in brown seaweed, including phlorotannins (PTNs), show anti-inflammatory activities; however, their medical use is limited. Here, we investigated the effects of PTNs in a mouse model of RD in vivo. X-rays (36 Gy) were delivered in three fractions to the hind legs of BALB/c mice. Macroscopic RD scoring revealed that PTNs significantly mitigated RD compared with the vehicle control. Histopathological analyses of skin tissues revealed that PTNs decreased epidermal and dermal thickness compared with the vehicle control. Western blotting indicated that PTNs augmented nuclear factor erythroid 2-related factor 2 (NRF2)/heme oxygenase-1 (HO-1) pathway activation but attenuated radiation-induced NF-κB (nuclear factor kappa-light-chain-enhancer of activated B cells) and inflammasome activation, suggesting the mitigation of acute inflammation in irradiated mouse skin. PTNs also facilitated fast recovery, as indicated by increased aquaporin 3 expression and decreased γH2AX (histone family member X) expression. Our results indicate that topical PTN application may alleviate RD symptoms by suppressing oxidative stress and inflammatory signaling and by promoting the healing process. Therefore, PTNs may show great potential as cosmeceuticals for patients with cancer suffering from radiation-induced inflammatory side effects such as RD. Introduction Radiation dermatitis (RD) is one of the most common side effects of radiotherapy (RT). During or following RT, up to 95% of patients with various cancers suffer from RD [1,2]. RD exhibits a wide spectrum of symptoms and severities in the acute phase, and few of the common symptoms include erythema, dry and moist desquamation, edema, pain, pigmentation, and/or ulceration. Furthermore, RD deteriorates the patients' quality of life [3]. Many researchers and clinicians have tried to reduce RD, but RD is still unsolved. For example, intensity-modulated RT (IMRT) can be considered as an option for less RD in breast cancer patients. Even with IMRT, however, about 30% of patients experienced desquamations [4]. Except for general skin care, the management of RD has not been established [5]. The key mechanism of photo-protection by PTNs is antioxidant or redox activity [19], which supports the intrinsic cellular defense system to balance oxidative stress and reduces DNA damages. In addition, PTNs show protective effects against ionizing radiation, such as γ-rays [20]. PTNs have been reported to protect intestinal stem cells from radiation-induced damage and to facilitate the recovery of hematopoietic cells via the suppression of reactive oxygen species (ROS) production and apoptotic signaling in mice subjected to RT [21][22][23][24]. Although the anti-inflammatory effects of PTNs have been reported, their efficacy in RD management remains to be tested. In the present study, we aimed to investigate the protective effect of PTNs in a mouse model with RD. Development of a Mouse Model of Radiation Dermatitis First, we developed a mouse model of RD by subjecting mouse skin to repeated irradiation of high-dose X-rays. Acute RD was scored according to the Common Terminology Criteria for Adverse Events grading criteria (Figure 1a,b) [25]. 
To determine the optimal radiation dose for developing the mouse model of RD, single doses of 9, 12, or 15 Gy were delivered for three consecutive days (total doses of 27, 36, or 45 Gy, respectively) to the skin of the right hind legs of anesthetized mice, and the skin reactions were monitored. RD score assessment revealed that the skin reactions first appeared 8 days after irradiation, peaked at day 17, and vanished thereafter (Figure 2a,b). The size of the radiation fractions affected the acute skin reactions. The dose of 9 Gy per fraction was much less effective and produced milder symptoms than the doses of 12 and 15 Gy per fraction (Figure 2b). There was only a slight difference in the skin reactions produced by the doses of 12 and 15 Gy per fraction. Histopathological analysis showed that epidermal and dermal thickness increased in a dose-dependent manner (Figure 2c-e). Furthermore, Masson's trichrome staining revealed that collagen deposition increased in a dose-dependent manner (Figure 2c). Efficacy of Topical Phlorotannin Application against Radiation Dermatitis Using the mouse model of RD established by delivering three fractions of X-rays at a dose of 12 Gy per fraction, we evaluated the efficacy of topical PTN application against RD. The experimental scheme is depicted in Figure 3a. PTNs were dissolved at two different concentrations (0.05% and 0.5%) in sesame oil and topically applied on the day of irradiation. An rhEGF solution (0.005%) was used as a positive control. Upon visual assessment, both PTN treatments showed a trend of fast recovery from the point of maximum RD (Figure 3b,c). The maximum RD scores (peak values) with the 0.05% and 0.5% PTN treatments were lower than those with the vehicle control treatment (0% PTN versus 0.05% PTN, p < 0.001). Similarly, the RD scores on day 21 were significantly lower with the 0.05% and 0.5% PTN treatments than with the vehicle control treatment (all p < 0.001; Figure 3c). These results suggest that PTNs alleviate RD. The rhEGF also showed a similar trend to the PTNs, except that the maximum RD score with rhEGF treatment was similar to that with the vehicle control treatment. The lowest RD score was observed following the 0.05% PTN treatment.
Figure 2(d,e) caption: Quantification data of the epidermis (d) and dermis (e) showed dose-dependent increases in their thickness. Data are shown as mean ± SD (n ≥ 80 per group). Difference was evaluated using a Kruskal-Wallis test, followed by Dunn's multiple comparison test. ** p < 0.01; *** p < 0.001.
Reduction in Radiation-Induced Epidermal and Dermal Thickening Following Topical Phlorotannin Application Visual assessment indicated that PTN application mitigated RD in mice; therefore, histopathological analysis of the irradiated skin tissues was performed. Hematoxylin and eosin (H&E) staining showed that both the epidermis and dermis were thicker in the irradiated skin tissues than in the unirradiated ones on days 14 and 21 (Figure 4a,b).
Quantification data showed that irradiation with X-rays at a dose of 12 Gy per day for three consecutive days significantly increased the epidermal (from 14.49 ± 5.33 to 181.8 ± 62.46 µm; p < 0.001) and dermal (from 128.8 ± 28.71 to 307.8 ± 72.72 µm; p < 0.001) thicknesses, whereas topical PTN application reduced this irradiation-induced epidermal and dermal thickening on day 14 (Figure 4c). The epidermal thickness was reduced to 118.5 and 95.40 µm and the dermal thickness was reduced to 203.1 and 214.2 µm with the 0.05% and 0.5% PTN treatments, respectively. Treatment with rhEGF significantly reduced the epidermal and dermal (127.7 ± 59.95 and 171.8 ± 51.09 µm, respectively; p < 0.001) thickness compared with the vehicle control treatment. However, on day 21, the thickness of both the layers was slightly reduced with the sham treatment (epidermis, 160.2 ± 70.42; dermis, 237.5 ± 61.52 µm), which was further reduced by PTN application (epidermis, 81.25 and 96.11 µm and dermis, 181.6 and 189.9 µm with 0.05% and 0.5% PTNs, respectively; both p < 0.001) but not by rhEGF application (Figure 4d). Infiltration of eosinophils dramatically increased in irradiated tissues, which was suppressed by topical treatment with both 0.05% PTNs and EGF (Figure S1). Though visual assessments indicated that 0.05% PTNs might be better than 0.5% PTNs in terms of reducing RD scores, H&E data suggest there was no difference in the thickness of the two layers following treatment with the two concentrations of PTNs and that 0.5% PTNs might be more immunosuppressive than 0.05%.
(Figure 4 caption fragment: n ≥ 40; difference was evaluated using one-way ANOVA followed by Bonferroni's multiple comparison test; ** p < 0.01; *** p < 0.001.)
Modulation of NRF2, NF-κB, and AQP3 Expression Following Topical Phlorotannin Application To elucidate the mechanism underlying the mitigating effects of PTNs on RD, we investigated nuclear factor erythroid 2-related factor 2 (NRF2) and nuclear factor-κB (NF-κB) signaling, the well-known signaling pathways related to oxidative stress and inflammation in the skin. Western blotting revealed that topical PTN application affected the expressions of NRF2, NF-κB, and their downstream targets (Figure 5a,b). On day 14 after irradiation, the expression of NRF2 and its downstream target heme oxygenase-1 (HO-1) was higher in the skin tissues topically treated with 0.05% and 0.5% PTNs than in the skin tissues in the sham and vehicle control groups (Figure 5a).
On day 21, the expression levels of NRF2 and HO-1 in the PTN-treated skin tissues remained higher than those in the sham-treated tissues (Figure 5a). The expression of NF-κB p65 in the skin tissues was increased following irradiation, which was suppressed by 0.05% and 0.5% PTN on day 14 (Figure 5b). However, PTN application induced NF-κB p65 expression in the irradiated skin tissues on day 21. The expression of cyclooxygenase 2 (COX2), which is downstream from the NF-κB signaling cascade, was also induced by radiation but suppressed by PTNs on day 14 (Figure 5b). The radiation-induced expression of COX2 was decreased on day 21. The radiation-induced expression of interleukin-1β (IL-1β) and ASC (an apoptosis-associated speck-like protein containing a caspase-recruitment domain), which are related to inflammasome activation, was suppressed by PTN treatment on day 14 (Figure 5b). On day 21, however, PTN application increased the expression of IL-1β and ASC in a dose-dependent manner. Aquaporin 3 (AQP3) expression was higher in the irradiated skin tissues than in the sham-treated tissues and further increased by 0.5% PTN on days 14 and 21 (Figure 5c). Radiation increased the phosphorylation of H2A histone family member X (γH2AX), a surrogate marker for DNA damage, which was decreased by PTN application on day 21, suggesting the rapid recovery of radiation-induced DNA damage with PTN treatment. Discussion In RD, the direct tissue damage caused by radiation, progression of the inflammatory response, and recovery process occur simultaneously [26]. When the skin tissue is irradiated, early damage response is initiated by highly radiosensitive cells, and the damaged epidermis may not be regenerated after repeated irradiation or prolonged radiation exposure [27]. The damaged cells release various cytokines and chemokines, which induce the inflammatory response, stimulate the growth of the surrounding blood vessels, and recruit immune cells [28]. Radiation-induced skin damage is mostly related to oxidative stress [26]. Ionizing radiation can generate ROS, which damage DNA or cellular structures, ultimately leading to the death of unrepaired cells. Antioxidant enzymes, such as superoxide dismutase, glutathione peroxidases, and thioredoxins, protect skin cells from radiation-induced oxidative stress [29]. Nevertheless, the detailed mechanism underlying RD is only partly understood. Moreover, a standard strategy for preventing or treating RD, except general skin care, remains to be established. Topical agents containing steroids or Aloe vera are often prescribed without evidence [30,31]. Therefore, an effective agent for the prevention and treatment of RD must be urgently developed. Several natural substances isolated from marine algae, including PTNs, have garnered increasing attention for their medical applications [19,[32][33][34][35], specifically in protecting the skin from ultraviolet radiation [33].
Although the radioprotective effects of PTNs have been tested in radiosensitive organs, including the intestine and bone marrow, these effects remain to be verified using a skin model [20]. In this context, we conducted the present study to evaluate the therapeutic efficacy of PTNs in the management of skin damage caused by RT in a mouse model. In the visual assessment, the PTN-treated groups showed lower RD scores than the control groups, even at a very low concentration, as evidenced by the lowest maximum RD scores obtained with 0.05% of concentration and significantly different time-course change from the control (Figure 3c, p < 0.001). Similarly, the radiation-induced increase in the thickness of the epidermis and dermis-two main layers of the skin structure-was significantly suppressed by PTN application compared to that with the control and rhEGF treatments. Together, the observed low RD scores and fast recovery indicate that PTNs likely exert a radioprotective or mitigating effect against RD. Recent studies have demonstrated that the NRF2 pathway is implicated in both inflammatory and oxidative stress responses [36], and its modulation may be effective for managing inflammatory diseases, including dermatitis [37][38][39]. Several chemicals that can stimulate NRF2 expression have been tested for their application as therapeutic agents for RD [40]. Among natural substances, PTNs stimulate the expression of NRF2 and its downstream genes [20,41]. Our results of the western blotting of the irradiated skin tissues showed that PTNs enhanced the radiation-induced activation of the NRF2 pathway in the acute phase of RD. The visual assessment revealed that the RD scores reached near maximum around day 15 and then decreased owing to the healing process. The expression of NRF2 and HO-1 was higher in the PTN-treated groups on day 14 but not on day 21, suggesting that NRF2 is involved in the PTN-mediated mitigation of RD in the early acute phase. In contrast, radiation induced high NF-κB p65 and COX2 expressions, which was suppressed by PTNs on day 14. The radiation-induced expression of inflammasomal proteins such as IL-1β and ASC was also suppressed by PTNs. On day 21, the suppression of inflammatory signaling was reversed in the PTN-treated groups. These data suggest that PTNs effectively suppress or delay the acute inflammatory response in the early phase of RD. Crosstalk occurs between the NRF2 and NF-κB pathways: NRF2 signaling inhibits the NF-κB pathway and vice versa [36,37]. Thus, PTNs may suppress the radiation-induced NF-κB pathway via NRF2 activation, thereby reducing the RD scores. AQP3 is a highly abundant aquaglyceroporin in the epidermis, which is involved in hydration and can thus participate in healing and epidermal homeostasis [42]. Mice lacking this protein show dry skin and delayed wound healing [42]. A previous study has shown that AQP3 is one of the targets of NRF2 in keratinocytes during the oxidative stress response [38]. Our data showed that increased AQP3 levels were accompanied by increased NRF2 levels in PTN-treated tissues, suggesting that AQP3 is involved in the NRF2-mediated skin healing following RT. A topical solution containing rhEGF was used as the positive control in this study. EGF is one of the best-characterized signaling growth factors, and increasing evidence suggests that EGF signaling is implicated in skin repair and inflammation [42]. 
Based on its potential roles, clinical trials to assess the therapeutic efficacy of EGF against diverse inflammatory diseases, including dermatitis, have been conducted. Although limited evidence is available as of now, several clinical studies have shown positive results regarding its mitigating effects on RD [6,8]. The present study demonstrated that the radioprotective effects of PTNs were comparable to those of rhEGF. However, rhEGF and PTNs likely utilize different mechanisms. EGF exerts its protective effects on skin tissues by facilitating epidermal proliferation and perturbating proinflammatory signaling, whereas PTNs likely protect cells from oxidative stress and inflammation by activating NRF2. Since EGF and PTNs were both effective against RD, combined treatment with these two metabolites is worth exploring. This study has some limitations. First, a high radiation dose (12 Gy per fraction for three consecutive days = 36 Gy) was delivered to mice hind legs to obtain a severe RD phenotype, which may not directly reflect the clinical scenario. The typical daily radiation dose is 2 Gy per fraction. Thus, mechanisms of dermatitis and particularly cell repair change with different fractionations. Second, the visual assessment of RD may not be sufficient to compare the efficacy of PTNs and rhEGF in alleviating RD in mice. Nevertheless, our findings suggest a novel function of PTNs in mitigating RD. PTNs show great potential for use as cosmeceuticals because of their bioactive properties, including anti-inflammatory, antioxidant, anti-wrinkling, and hair growth-promoting effects [14]. Based on our results, further experimental and clinical studies are warranted to optimize the cosmeceutical formulations of PTNs and to assess their efficacy and safety in patients undergoing RT. Animal Experiments and Irradiation Five-week-old female BALB/c mice were purchased from Orient Bio Animal Center (Seongnam, South Korea) and maintained under specific pathogen-free conditions under a 12-h light/12-h dark cycle. All procedures in the animal experiments were conducted in accordance with appropriate regulatory standards under the protocol reviewed and were approved by the Institutional Animal Care and Use Committee of Samsung Biomedical Research Institute at Samsung Medical Center in Seoul, South Korea (ID: 20180417001; approval date: May 8, 2018). X-ray irradiation was performed using a Varian Clinac 6EX linear accelerator (Varian, Medical Systems, Palo Alto, CA, USA) at Samsung Medical Center. Before irradiation, the mice were anesthetized via an intraperitoneal injection of 30 mg·kg −1 zolazepam/tiletamine and 10 mg·kg −1 xylazine. For the irradiation setup, all mice were placed in prone position and their whole right hind legs were fixed with tape in the irradiation field (32 cm × 7 cm) to ensure the position of irradiation area ( Figure S2). The legs were placed under a 2-cm-thick water-equivalent bolus with a source-to-surface distance of 100 cm and were irradiated with 6-MV X-rays at single doses of 9, 12, and 15 Gy per fraction for three consecutive days (corresponding to total radiation doses of 27, 36, and 45 Gy, respectively) at a dose rate of 3.96 Gy per minute. The absolute X-ray dose was calibrated according to the TG-51 protocol and was verified using a Gafchromic film, with an accuracy of 1%. Topical Phlorotannin Application PTNs were freshly prepared by dissolving them at concentrations of 0.05% and 0.5% (w/v) in sesame oil (S3547; Sigma-Aldrich, St. Louis, MO, USA). 
Easyef TM was used as a positive control. Mice hind legs were depilated using a topical cream. The irradiated skin area was covered with a PTN solution or Easyef TM , starting on the irradiation day. The experimental scheme is depicted in Figure 3a. Measurement of Skin Tissue Damage On days 14 and 21 after irradiation, the mice were euthanized using the gradual-fill method of CO 2 euthanasia and the irradiated skin tissues were harvested promptly. Skin biopsies were fixed in 10% neutral-buffered formalin for 24 h, embedded in paraffin wax, and serially sectioned (thickness, 5 µm). Standard H&E and Masson's trichrome staining protocols were used for the histological examination. Slide images were digitalized using Aperio ScanScope AT (Leica Biosystems, Buffalo Grove, IL, USA). The epidermal and dermal thickness was measured at 20 different sites in each section and averaged. Western Blotting Skin tissues were harvested on days 14 and 21 after irradiation and cut into small pieces. The frozen tissue samples were resuspended in radioimmunoprecipitation assay (RIPA) Lysis and Extraction Buffer (#89900; Thermo Fisher Scientific, Waltham, MA, USA) and lysed using TissueLyser II (QIAGEN, Hilden, Germany). The lysates were centrifuged at 13,500× g for 20 min at 4 • C, and the supernatants were collected. The protein concentration was determined using the Bio-Rad DC Protein Assay Kit II (Bio-Rad, Hercules, CA, USA). Equal amounts of proteins were separated via SDS-PAGE and transferred to a Protran TM nitrocellulose membrane (GE healthcare, Piscataway, NJ, USA). After blocking with 10% skimmed milk for 70 min at room temperature, the membranes were incubated with primary antibodies at 4 • C overnight, followed by HRP-conjugated secondary antibodies for 1 h at room temperature. The bands of interest were visualized using an enhanced chemiluminescence detection kit (GE healthcare) according to the manufacturer's instructions. Protein bands were quantified using ImageJ (National Institutes of Health, Bethesda, MA, USA). Statistics All data are presented as the mean ± standard deviation (SD) from three independent experiments. All statistical analyses were performed using GraphPad Prism 8.4.2 (San Diego, CA, USA). The normality of all datasets was evaluated using D'Agostino-Pearson omnibus normality test. The Brown-Forsythe test was used to test the assumption of equal variance in analysis of variance (ANOVA). For datasets with normal distribution, the comparison between groups was performed using ANOVA, followed by Bonferroni's multiple comparison test. When the datasets were not normally distributed, the comparison between groups was performed with a Kruskal-Wallis test, followed by Dunn's multiple test. p values < 0.05 were considered statistically significant. Conclusions The present study showed that topical PTN application alleviated RD symptoms by activating anti-inflammatory and antioxidative stress signaling in a mouse model of RD. Although RD is a common side effect in patients with cancer undergoing RT, no effective treatment regimens are available even today. Our results suggest that PTNs can be used as cosmeceuticals to prevent or alleviate skin damage during or following RT.
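The group-comparison logic laid out in the Statistics subsection above (normality check, then either ANOVA with a post hoc test or Kruskal-Wallis with Dunn's test) can be sketched in a few lines of Python. The example below uses SciPy and the scikit-posthocs package as stand-ins for the GraphPad Prism workflow; the data are random placeholders, and the exact post hoc corrections applied by Prism may differ in detail.

```python
# Sketch of the statistical workflow described above, with placeholder data.
# scipy.stats.normaltest is the D'Agostino-Pearson test; levene(center="median")
# corresponds to the Brown-Forsythe test.
import numpy as np
from scipy import stats
import scikit_posthocs as sp  # external package providing Dunn's post hoc test

rng = np.random.default_rng(0)
groups = [rng.normal(180, 60, 40),   # e.g., vehicle control
          rng.normal(120, 50, 40),   # e.g., 0.05% PTN
          rng.normal(95, 45, 40)]    # e.g., 0.5% PTN

normal = all(stats.normaltest(g).pvalue > 0.05 for g in groups)
equal_var = stats.levene(*groups, center="median").pvalue > 0.05

if normal and equal_var:
    omnibus_p = stats.f_oneway(*groups).pvalue        # one-way ANOVA
    # pairwise comparisons with Bonferroni correction would follow here
else:
    omnibus_p = stats.kruskal(*groups).pvalue         # Kruskal-Wallis
    pairwise = sp.posthoc_dunn(groups, p_adjust="bonferroni")

print(f"omnibus p-value: {omnibus_p:.3g}")
```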
v3-fos-license
2021-10-19T15:13:46.802Z
2021-09-27T00:00:00.000
239070169
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2227-9040/9/10/274/pdf", "pdf_hash": "e761835015d4df578d42be8c7b2ba9bc692e30a2", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42199", "s2fieldsofstudy": [ "Chemistry", "Engineering" ], "sha1": "d370d8786c0e6536461d8698a7f0e7e84f658c8d", "year": 2021 }
pes2o/s2orc
Potentiometric C 2 H 4 -Selective Detection on Solid-State Sensors Activated with Bifunctional Catalytic Nanoparticles : This work presents a solid-state ionic-based device to selectively detect C 2 H 4 in car exhaust gases. The sensor consists of 8YSZ as the electrolyte and two electrodes: Fe 0.7 Cr 1.3 O 3 /8YSZ and LSM/8YSZ. The main aim of this work is to optimize the catalytic behavior of the working electrode to C 2 H 4 and reduce cross-sensitivity toward CO and H 2 O. Several catalyst nanoparticles were infiltrated to tailor C 2 H 4 adsorption and electrochemical oxidation properties while diminishing adsorption and conversion of other gas components such as CO. The infiltrated metal catalysts were selected, taking into account both adsorption and redox properties. Infiltration of Ti or Al, followed by a second infiltration of Ni, enabled the selective detection of C 2 H 4 with low cross-sensitivity toward CO and H 2 O in a moist gas environment. Further insight into potentiometric C 2 H 4 sensing is achieved by electrochemical impedance analysis of the electrodes activated with bifunctional catalysts. Introduction Currently, the public health factor involving pollution in cities is a hot topic [1][2][3]. Legislation is becoming more restrictive with regard to both polluting emissions from road vehicles and car circulation in the innermost part of cities. This involves both existing and future automotive fleets and better monitoring of environmental performance (defeat devices, tampered antipollution systems, etc.). This is a concern for the European Commission, and it has been included in their work programs for smart, green integrated transport in H2020 during the 2018-2020 period. This is also tackled in the Horizon Europe work program for Climate, Energy, and Mobility for 2021-2022 [4]. Among different actions, pollution tolls and further emission controls are under consideration in cities to ban the most polluting cars within urban areas [5][6][7][8][9]. Therefore, more restrictive legislation is expected in the short term. However, legislators cannot lower emission limits because of the current lack of devices able to detect low contents of hydrocarbons selectively. Therefore, the availability of an economically attractive, reliable, and highly sensitive hydrocarbon sensor could help to establish this sensor in combustion engines but also in the monitoring of gas streams in other combustion or thermochemical processes. The elevated temperature of exhaust gases and exhaust gas content make potentiometric sensors the most appealing option. In the literature, zirconia and platinum are widely employed as electrolyte and reference electrodes, respectively [10][11][12][13][14][15][16][17]. As working electrodes, different oxides, from simple to more complex oxides [12,[17][18][19][20][21] such as spinels and perovskites, are used for the detection of several elements, e.g., hydrocarbons [10,15,[22][23][24][25][26], CO [19,[27][28][29], and NH3 [30,31]. The general problem is not only the use of expensive noble metals but also poor selectivity to the target gas and/or high cross-sensitivity toward other elements. Responses to single gases are compared without comparing the effect of other gases in a gas mixture as expected in real applications. In this work, a potentiometric hydrocarbon sensor is optimized by surface functionalization of its working electrode [32,33], avoiding the use of noble metals to enhance selectivity to ethylene. 
It consists of a solid ionic electrolyte (8YSZ) and two electrodes: LSM/8YSZ as the reference electrode and Fe0.7Cr1.3O3/8YSZ as the working electrode. Additionally, both electrodes are exposed to the same atmosphere. This simplifies the device and allows it to behave according to the mixed-potential theory, i.e., several reduction and oxidation reactions running simultaneously at each electrode, where one controls the kinetics. When equilibrium is achieved in each electrode, a difference in voltage that can be measured is generated [13,[34][35][36]. Therefore, reference and working electrodes must be selective to an oxygen cathodic reaction and to a target gas anodic reaction, respectively. Kinetics in each electrode must be controlled by these reactions. C2H4 is a major hydrocarbon in exhaust gas [37][38][39][40]; therefore, it is considered a target gas. Additionally, cross-sensitivity toward any elements common in exhaust gas such as CO, H2O, and polyaromatics (C11H10 and C14H10) should be kept low during dynamic operation. This work focuses on the catalytic functionalization of the working electrode (WE) in order to provide a specific response to C2H4 with low cross-sensitivity against CO and H2O, i.e., fostering C2H4 electrochemical oxidation kinetics. This is accomplished by means of electrode infiltration with distinct nanosized catalysts. The device consists of a common reference electrode on one face of the solid electrolyte and four different working electrodes on the other face of the electrolyte. Nickel dispersion on the WE has been reported to boost electrochemical C2H4 oxidation [41]. Several catalytic materials are selected in this work because of their redox activity or adsorption properties related to hydrocarbons and CO: Al [42], Ti [43], Ni [41,44], Ru [45][46][47], Pd [48,49], Nb [50], and Ba. Some of these specific elements were selected because of their Lewis acidity or basicity (for the alkali elements) of related cationic species [48]. Thus, several nanosized catalysts, including binary combinations, are incorporated in the sensor WE to selectively improve electrochemical C2H4 oxidation. The open-circuit voltage generated between both electrodes is measured for concentration pulses of C2H4 and CO in both dry and humidified conditions. Moreover, electrochemical impedance spectroscopy is carried out for a better understanding of the processes taking place at the activated electrodes. Sample Preparation Both La0.9Sr0.1MnO3 perovskite (LSM) and Fe0.7Cr1.3O3 were synthesized by means of a sol-gel chemical route. Commercial nitrates from Sigma-Aldrich were used as precursors. Citric acid (Sigma-Aldrich, St. Louis, MO, USA) was added to the stoichiometric water-based solution to prevent partial segregation of the metallic elements. Addition of ethylene glycol generated polymerization in a 1:2:4 ratio (nitrate precursors, citric acid chelating agent, and ethylene glycol, respectively). Two-step thermal decomposition (200 °C and 600 °C) led to the generation of nanosized crystalline phases. Such powders were ground in a ball mill and later sintered at 1350 °C for 10 h to produce the desired crystalline phase (ICDD 00-035-1112 for Fe0.7Cr1.3O3). Both Fe0.7Cr1.3O3 and LSM were mixed with 8YSZ (Tosoh) in a 1:1 vol. ratio to avoid delamination with the electrolyte and to obtain a mixed ionic-electronic material, respectively. Electronic conductivity is offered by LSM (and selective O2 activation), while ionic conductivity is provided by 8YSZ. 
Then, these mixtures were ball milled, mixed with an organic binder, and passed through a triple roll mill to produce inks for screen printing. Fabrication of the Sensor Device A four-working-electrode (WE) multidevice was constructed to measure up to four different WE compositions simultaneously, as shown in Figure 1. The electrolyte acts as a support, and it consists of a dense 50 mm diameter disk made of 8YSZ (Tosoh). The disk was uniaxially pressed and then calcined in two steps: (1) 1000 °C for 4 h to machine holes for later wiring and (2) 1450 °C for 10 h to densify the electrolyte Electrodes were screen printed in a rectangular shape: four Fe0.7Cr1.3O3/8YSZ electrodes on one side of the electrolyte and only one LSM/8SZ on the other side. This configuration is convenient because it consists of a reference electrode for oxygen, which is common, and four different working electrodes for testing different catalytic materials to functionalize the electrode by nanoparticle infiltration. Replacing Pt, which is usually employed as RE [12,13,[51][52][53][54][55][56][57], with LSM/8SZ results in a larger triple-phase boundary (TPB). This means an increment in the contact point between electronic and ionic materials and the gas. A screen-printed gold layer was used as the current collector (900 °C for 2 h) on top of the electrodes, while silver paste was employed to assure the attachment of the lead wires to the electrodes. The working electrode is infiltrated with several nanocatalysts: Ti, Al, Nb, Ba, and Pd. Nitrate precursor solutions of the aforementioned elements were dropped onto the WE. The solution filled the pores through the capillarity, ensuring full coverage of the electrode surface. The device was then exposed to a thermal treatment to eliminate the organic fraction. First, it was treated with argon at 550 °C for 4 h. Subsequently, the gas composition was changed to 5% H2 in Ar, and the device was exposed again at 550 °C for another 4 h. The stability of the LSM electrode was confirmed after thermal treatment in these gas atmospheres. After testing each of the aforementioned elements, a second Ni infiltration was performed for each, following the same procedure described previously. Nickel has already been reported to increase device selectivity to C2H4 [41]. Sample Characterization The obtained crystalline phases were identified through X-ray diffraction analysis (XRD) using a PANalytical Cubix fast diffractometer (CuKα1 radiation (λ = 1.5406 Å) and an X′Celerator detector in Bragg-Brentano configuration). X′Pert Highscore Plus was employed to analyze the patterns measured in the 2θ range from 10° to 90°. The cross-sections of the devices were studied by SEM and energy-dispersive X-ray spectroscopy (EDS) using a ZEISS Ultra55 field-emission scanning electron microscope. Regarding electrochemical characterization, the voltage was measured (Keithley 3706) as the potential difference generated between both electrodes (no current applied) at 550 °C and different C2H4 and CO concentrations. The flow of gases was controlled by means of mass flow controllers, and the total gas flow was set to 550 mL/min (with 6% O2 and balanced with argon). The sensor response (Vcell, mV) was corrected, taking into consideration the background gas consisting of 6% O2/Ar, and it was defined as: When the sample was stabilized at 550 °C, either C2H4 or CO concentration pulses were performed from 50 ppm (used as base gas) to 100, 150, and 200 ppm. 
In order to determine the cross-sensitivity, the pulse procedure described above was repeated at a fixed concentration of 200 ppm of the opposing gas. Finally, impedance sweeps from 0.03 Hz to 1 MHz were measured (Autolab PGSTAT204 with an FRA32M module) for both C2H4 and CO at 200 ppm.

Microstructural Characterization

XRD patterns of Fe0.7Cr1.3O3 and LSM (Figure S1) confirm that the desired phases were formed for both materials, i.e., no diffraction peaks were detected for other phases or for the precursors. Postmortem SEM characterization of the device was carried out to determine the dispersion of the nanoparticles over the electrode grains. In every pore, the distribution of Fe0.7Cr1.3O3 and 8YSZ grains is homogeneous, although the grain sizes differ, i.e., the Fe0.7Cr1.3O3 grains are larger (Figure 2). The layer thicknesses observed for the WE and RE are 34 and 13 µm, respectively. A closer look at the working electrode shows a good distribution of the nanoparticles over the whole electrode for all the performed infiltrations (Figure 2). Oxide nanoparticles are attached equally to both Fe0.7Cr1.3O3 and 8YSZ grains, and therefore the surface area active for electrochemical sensing is enlarged. Two nanoparticle size ranges can be observed for each infiltration. Nanoparticles formed upon infiltration of Ti, Nb, Al, Ba, and Pd are smaller than the Ni-based nanoparticles. Energy-dispersive X-ray spectroscopy (EDX) analysis and comparison with a device infiltrated with only nickel confirm that the largest nanoparticles are made of nickel [41]. Thus, the smaller nanoparticles must correspond to the second element infiltrated into the electrode (EDX could not identify the main element of these small nanoparticles due to the limits of the technique). The nanoparticles are well and uniformly distributed along the electrode, with the exception of Ba and Pd, which show poorer dispersion. Moreover, no nanoparticles are observed in the reference electrode, confirming that the infiltration was selectively carried out in the WE. Thus, any improvement in sensor performance in comparison to the bare sensor can be attributed to the infiltration of the working electrode. The poor distribution of Ba and Pd may cause the low activity of these electrodes toward C2H4 when compared to other elements such as Ti or Al.

Electrochemical Characterization

In an exhaust-gas-like atmosphere, where several pollutants such as hydrocarbons, CO, NOx, O2, etc., can be present, several oxidation and reduction reactions can take place on both the WE and the RE. The kinetics of one of these reactions will prevail on each electrode, controlling it, and the difference in voltage between the two electrodes provides the final device response. Ideally, when equilibrium is achieved, the oxidation of the reducing agent takes place in the WE (Equation (1) or (2)), while O2 is reduced in the RE (Equation (3)). The electrode must be porous to facilitate the diffusion of the gaseous analyte to the contact points between the electronic and ionic conductors (TPB). The anodic and cathodic reactions are coupled by oxygen ion diffusion through the 8YSZ electrolyte. This kind of sensor follows the so-called mixed-potential theory, and therefore the response of the device is kinetically controlled [13,35,58–60]. A zero current is imposed, and a mixed potential is established at each electrode (Equation (1) or (2) for the anodic reaction and Equation (3) for the cathodic reaction) when the steady state is reached.
The final voltage of the cell is given by this built-up mixed-potential difference. Additionally, a heterogeneous catalytic conversion process could take place at the electrodes: the analytes could react with locally adsorbed O2 (Equations (4)–(6)), in which case the electrochemical reaction is not favored [14,61–63]. The reaction network on the electrodes is described by Equations (1)–(6).

Potentiometric Characterization

As previously reported, the (catalytically nonactivated) bare sensor response is not specific to C2H4 [41]. The device is exposed to concentration pulses of both pure CO and C2H4 from 50 to 200 ppm for 20 min. Additionally, the device response is measured for the same concentration pulses of one analyte but with a fixed concentration of 200 ppm of the other, as shown in Figure 3. This contour plot summarizes the sensor response to both C2H4 and CO. In these plots, the x- and y-directions are the CO and C2H4 concentrations, respectively, and the colormap indicates the voltage response offered by the sensor when exposed to a given combination of analyte concentrations. Thus, a device selective to C2H4 must show an increasing response along the y-direction while remaining constant along the x-direction. Figures 3 and 4 display how the responses to C2H4 and CO are similar, and therefore the C2H4 electrochemical reaction is not favored. As observed in Figure 3, the lines of constant potential are diagonal, increasing from left to right. This confirms the lack of selectivity to C2H4. Figure 4 shows the transient response for different scenarios with C2H4 and CO, confirming similar responses. This indicates a high cross-sensitivity toward CO, and therefore the device is unable to measure C2H4 in an exhaust-gas-like atmosphere. The working electrode should therefore be catalytically activated to promote the electrochemical reaction of C2H4. Several catalyst nanoparticles are infiltrated into each channel to achieve this promotion of the C2H4 reaction rate. As several channels are measured, the response is normalized to enable comparison. After the infiltration of nanoparticles, it was found that, in dry conditions, Ti and Al led to an increase in the electrochemical reaction of C2H4 (Figure 5); this reaction is thus kinetically favored. It should be noted that the response to C2H4 is not affected by the addition of 200 ppm of CO, so the device is able to detect C2H4 even in the presence of CO. On the other hand, Nb, Ba, and Pd infiltration provides a poorer response in dry conditions: despite improving the response in comparison to the bare sensor, the cross-sensitivity toward CO remains too high for C2H4 detection purposes (Figure 5).

Figure 5. Device performance as a function of C2H4 and CO concentration after infiltration of the WE. The first column shows the sensor response for the first element infiltrated in dry conditions, the second column depicts the sensor response after an additional infiltration with Ni in dry conditions, and the last column shows the response after the second infiltration with Ni in wet conditions. Each row indicates the first element infiltrated. Standard errors of the sensor response for each infiltrated element are: 0.02 mV for Ti, 0.03 mV for Al, 0.07 mV for Nb, 0.003 mV for Ba, and 0.01 mV for Pd.

The subsequent infiltration with Ni enhanced the sensor detection of C2H4 even further for the Ti and Al devices (Figure 5).
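To make the notion of selectivity in such response maps concrete, the following sketch computes sensitivity slopes (mV per decade of concentration) for each analyte from a response grid of the kind summarized in Figures 3 and 5. The numerical values in the grid are invented placeholders, not measured data, and the slope-ratio metric is simply one convenient way to express selectivity.

```python
import numpy as np

# Hypothetical response surface V(C2H4, CO) in mV: rows = C2H4 level,
# columns = CO level (assumed values for illustration only).
c2h4 = np.array([50.0, 100.0, 150.0, 200.0])     # ppm
co   = np.array([50.0, 100.0, 150.0, 200.0])     # ppm
v = np.array([[1.0, 1.1, 1.2, 1.3],
              [3.0, 3.1, 3.1, 3.2],
              [4.6, 4.7, 4.7, 4.8],
              [5.8, 5.9, 6.0, 6.0]])

# Sensitivity as the slope of the response versus log10(concentration),
# averaged over the levels of the interfering gas; a selective device shows
# a much larger slope along the C2H4 axis than along the CO axis.
s_c2h4 = np.mean([np.polyfit(np.log10(c2h4), v[:, j], 1)[0] for j in range(co.size)])
s_co   = np.mean([np.polyfit(np.log10(co),   v[i, :], 1)[0] for i in range(c2h4.size)])
print(f"C2H4 sensitivity: {s_c2h4:.2f} mV/decade")
print(f"CO   sensitivity: {s_co:.2f} mV/decade")
print(f"selectivity ratio (C2H4/CO): {s_c2h4 / s_co:.1f}")
```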
This second Ni infiltration also improved the performance of the Nb-infiltrated device: the C2H4 electrochemical reaction is promoted, providing a response dependent on C2H4 but not on CO (Figure 5). Unfortunately, the cross-sensitivity toward CO of the Ba and Pd devices is not improved sufficiently for sensing purposes (Figure 5), which was previously attributed to the poor dispersion of Ba and Pd in the electrode (Figure 2). The Ti and Al devices remain selective to C2H4 even when the device is exposed to a wet gas stream (3 vol.% H2O). As previously reported, H2O usually affects the sensor response negatively, i.e., it increases the cross-sensitivity toward CO [32,64–66]. However, the catalytically activated sensor is still selective to C2H4 under humid atmospheres. The low cross-sensitivity toward CO and H2O makes this configuration promising for hydrocarbon detection in atmospheres containing several pollutants. The improvement in sensing capability might be due to the improved electrocatalytic properties of the material itself. Nonetheless, in the case of Nb, the addition of water affects the device performance, and the device is no longer selective to C2H4.

Electrochemical Impedance Spectroscopy Analysis

Electrochemical impedance spectroscopy was performed for each of the five infiltrated elements, exposing the device to 200 ppm of pure CO and of pure C2H4. The EIS measurements were performed first on the bare sensor, then after the addition of the first element, and finally after the addition of the first element plus Ni, in dry and wet conditions. Figure 6 displays Nyquist plots from 0.03 Hz to 1 MHz. The sensors infiltrated with Ti, Al, Ba, and Pd present two arc contributions; the corresponding Bode plots are depicted in Figure S2. The equivalent circuit proposed consists of two parallel resistance-constant phase element (R-CPE) combinations connected in series. The Nb-infiltrated sensor, however, shows a three-arc contribution that was fitted to an equivalent circuit consisting of three parallel R-CPE combinations. This study focuses on Ti, Al, and Nb, as they exhibited the best sensing properties. The arc shape at high frequencies in the Nyquist plot is similar in all cases, indicating similar oxide-ion transport through the crystalline grains, as expected from the identical electrolyte and electrode backbone structure.

Figure 6. EIS results for the devices infiltrated with Ti, Al, and Nb for the bare sensor (dots), first infiltration (triangle), additional infiltration with Ni in dry conditions (inverse triangle), and additional infiltration with nickel in wet conditions (hexagon). The C2H4 response is depicted in red on the left side, and the CO response is shown in green on the right side. The inset indicates the equivalent circuit fitting the response: two parallel resistance-constant phase elements for Ti and Al, while for Nb there is an additional parallel resistance-constant phase element.

In general terms, the first arc contribution (appearing at higher frequencies and with C ≈ 10−5–10−6 F, as in Figures 6 and S3) at 550 °C may be due to the electrode-8YSZ electrolyte interface [67–69]. When the device is exposed to C2H4 and CO, this resistance remains almost constant for the bare sensors, as well as when the device is infiltrated, first with Ti, Al, or Nb and then with Ni (see Figure 6). Accordingly, the ionic mobility remains almost unaffected, i.e., ion diffusion in the bulk is not affected by the addition of the nanocatalysts.
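For readers less familiar with R-CPE fitting, the sketch below evaluates the impedance of two parallel resistance-constant phase element combinations connected in series, the same circuit topology used for the Ti and Al devices. The parameter values are arbitrary placeholders rather than fitted values from this work.

```python
import numpy as np

def z_r_cpe(omega, r, q, n):
    """Impedance of a resistance in parallel with a constant phase element:
    Z = R / (1 + R*Q*(j*omega)**n); n = 1 recovers an ideal parallel RC arc."""
    return r / (1.0 + r * q * (1j * omega) ** n)

# Two R-CPE combinations in series (high- and low-frequency arcs).  The
# parameter values below are illustrative placeholders, not fitted results.
f = np.logspace(-1.5, 6, 300)                     # ~0.03 Hz .. 1 MHz
w = 2 * np.pi * f
z = z_r_cpe(w, r=2e3, q=1e-8, n=0.90) + z_r_cpe(w, r=8e3, q=1e-5, n=0.85)

z_re, z_im = z.real, -z.imag                      # Nyquist coordinates
print(f"low-frequency real-axis intercept ≈ {z_re[0]:.0f} ohm (≈ R1 + R2)")
```

The two arcs separate on the Nyquist plane whenever the characteristic frequencies of the two R-CPE elements differ sufficiently, which is the situation exploited in the fits discussed next.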
The second arc contribution, appearing at lower frequencies (C ≈ 10−3–10−4 F, as depicted in Figure 6), is related to electrocatalytic processes taking place at the surface of the electrode and depends on the type of infiltration performed. The effect of this lower-frequency contribution can also be observed in the Bode plots in the range from 0.03 to 145 Hz (Figure S2). In the case of the bare sensor, the polarization resistance is almost the same for both C2H4 and CO, as shown for example in Figure 6. This agrees with the potentiometric characterization (Figure 5) and can explain the lack of selectivity to C2H4. For both Al and Ti, infiltration led to an increase in the CO contribution, while the C2H4 contribution decreased (as observed in Figures 6 and 7). This agrees with the potentiometric study and can explain the better performance: the electrochemical reaction of C2H4 is catalytically promoted. The second infiltration with nickel decreases both contributions, although the decrease for C2H4 is larger. Therefore, the C2H4 resistance is lower than that for CO, as observed in Figure 7, which again explains the better performance of the device for both configurations in dry conditions. This is also supported by the evolution of the low-frequency contribution observed in the Bode plots in Figure S2.

Figure 7. Fitting results for the first infiltration with each element (blue and green) and the second infiltration with Ni (red and brown). Resistances related to interface and electrode processes are calculated. On the left side, the C2H4 response is depicted, while on the right side, the response to CO is depicted. In the case of Nb, an additional electrode contribution appears (gray and yellow). The maximum chi-square values obtained in the fitting for each element are: 1.5 × 10−4 for Ti, 1.7 × 10−4 for Al, and 4.59 × 10−6 for Nb.

As aforementioned, H2O can have a negative influence on sensor performance, affecting the output voltage and the cross-sensitivity. Therefore, it is sensible to assess the effect of H2O on the sensor response. The addition of 3% H2O to the gas flow causes a further decrease in both contributions; however, the C2H4 contribution remains smaller than that of CO. This agrees with the potentiometric study and confirms that the C2H4 reaction is more strongly favored. Thus, according to the potentiometric study and the electrochemical impedance analysis, both Ti plus Ni and Al plus Ni are strong candidates for C2H4 detection, as they promote the electrochemical reaction of C2H4 and even partial heterogeneous catalysis of CO. On the other hand, the Nb infiltration behaves differently. According to the potentiometric study, the device has a strong cross-sensitivity toward CO (see Figures 6 and 7). The behavior shows a first contribution that is similar for both analytes and a second that is smaller for CO. Conversely, the addition of Ni reverses this behavior, i.e., the first contribution increases for both analytes, but the increase is much larger for CO. The second contribution is reduced for both analytes. Although the reduction of the second electrode element is larger for CO, the first electrode contribution has the greatest influence. This causes the total C2H4 electrode contribution for the Nb plus Ni infiltration to be lower than that of CO, and therefore its electrochemical reaction is promoted, as observed in the potentiometric analysis in Figure 5.
The addition of 3 vol.% H2O provides a similar contribution for both analytes, which explains the lack of selectivity to C2H4 in the potentiometric study: both analytes are promoted equally under humid atmospheres. On the other hand, Ba and Pd offer a similar response when infiltrated, even when a second infiltration with Ni is carried out, in both dry and wet conditions, as can be observed in Figure S3. The potentiometric study already indicated that the C2H4 reaction was not promoted, and therefore the cross-sensitivity toward CO is not ameliorated.

Conclusions

The catalytic functionalization of a potentiometric sensor enabled enhanced selectivity toward C2H4 while keeping cross-sensitivity toward CO and H2O low. The sensor is composed of Fe0.7Cr1.3O3/8YSZ and LSM/8YSZ as working and reference electrodes, respectively, and 8YSZ as the solid-state electrolyte. Potentiometric characterization showed that electrode surface decoration with Ti or Al resulted in improved sensor performance: the device is selective toward C2H4, with low cross-sensitivity toward CO. Moreover, a second infiltration with Ni (i) largely improved sensor performance in dry conditions and (ii) provided a device able to selectively measure C2H4 in moist conditions. The nanocatalysts promoted the selective electrochemical oxidation of C2H4 even in the presence of CO. Consequently, these materials are potential candidates for C2H4 detection. On the other hand, infiltration with Nb, Pd, and Ba, despite improving the bare sensor signal, failed to provide an adequate sensor response for the purpose of hydrocarbon sensing. A Nb-infiltrated device did, however, improve its performance in dry conditions after a second infiltration with Ni, although in moist conditions the performance worsened. Electrochemical impedance spectroscopy analysis revealed that the resistance of the electrode-electrolyte interface (C ≈ 10−5–10−6 F) was practically unaffected by the catalyst infiltrations. Conversely, the low-frequency resistances associated with catalytic processes occurring on the electrode surface varied upon catalyst infiltration. For the Ti and Al infiltrations, with or without a second Ni infiltration, the polarization resistance to CO was higher than that to C2H4; this difference may explain the better performance for C2H4 with low cross-sensitivity toward CO. Incorporation of the nanoparticles onto the working electrode was confirmed by FESEM analysis, revealing that Ti and Al are homogeneously distributed. Accordingly, the sequential infiltration with bifunctional nanocatalysts made it possible to modify the kinetics of the catalytic reactions at the electrode surface, as inferred from the impedance spectroscopy analysis at low frequencies. This surface functionalization boosted the electrochemical oxidation of C2H4, and this process became the main contributor to the working electrode response. Specifically, the reported sensor configurations comprising Ti or Al plus Ni nanoparticles are suitable for applications in conditions similar to those of exhaust gases from combustion or other thermochemical processes.
Supplementary Materials: The following are available online at www.mdpi.com/article/10.3390/chemosensors9100274/s1, Figure S1: X-ray diffraction pattern of both LSM and Fe0.7Cr1.3O3 powders at room temperature; Figure S2: Bode plots for devices infiltrated with Ti, Al, and Nb for the bare sensor (dots), first infiltration (triangle), additional infiltration with Ni in dry conditions (inverse triangle), and additional infiltration with nickel in wet conditions (hexagon). C2H4 response is depicted in red on the left side, and CO response is shown in green on the right side. The Bode plots correspond to the Nyquist plots depicted in Figure 6; Figure S3: EIS results for the devices infiltrated with Ba and Pd for the bare sensor (dots), first infiltration (triangle), additional infiltration with nickel in dry conditions (inverse triangle), and additional infiltration with nickel in wet conditions (hexagon). C2H4 response is depicted in red on the left side, and CO response is shown in green on the right side
Simulation of Impedance Spectra for a Full Three-Dimensional Ceramic Microstructure Using a Finite Element Model

A method of characterizing electrically heterogeneous electroceramics for a full three-dimensional collection of randomly shaped grains is presented. Finite element modeling, solving Maxwell's equations in space and time, is used to simulate impedance spectroscopy (IS) data. This technique overcomes several deficiencies associated with previous methods used to simulate IS data and allows comprehensive treatment of a full three-dimensional granular representation of ceramic microstructure without the requirement for equivalent circuits based on the Brickwork layer model (BLM) or the introduction of constant phase elements to describe any nonideality of the IS response. This is applied to a full three-dimensional ceramic microstructure with varying grain size and electrical properties to generate IS plots that highlight limitations of the BLM in data analysis.

I. Introduction and Background

Impedance spectroscopy (IS) is widely employed to deconvolute the intrinsic (bulk) and/or extrinsic (grain boundary, electrode effects, etc.) contributions to the electrical properties of electroceramics by measuring the impedance response over a frequency spectrum, 1,2 commonly from mHz to MHz. Since the late 1960s, extracting information from IS data, such as grain core (bulk) and grain-boundary capacitance and resistance values, has been done using some form of appropriate equivalent electrical circuit. 3 This usually consists of an arrangement of resistors and capacitors, connected in series and/or in parallel, to model the IS response of the polycrystalline ceramic under investigation and to provide insight into the intrinsic and extrinsic properties. Identification of the correct form of the equivalent circuit is required for meaningful analysis of the system. 1,4 This is based on the likely physical processes that occur in the material and often requires some level of intuition. In many cases, to a first approximation, the grain-core (bulk) response is described in an equivalent circuit by a parallel combination of a resistor and a capacitor (RC). This combination results in an ideal arc in the complex impedance and electric modulus plane plots, Z* and M*, respectively, and an ideal Debye peak in spectroscopic plots of the imaginary components of impedance, Z″, and electric modulus, M″, with a full-width half maximum (FWHM) of 1.14 decades on a logarithmic frequency scale. 5 Due to heterogeneities associated with defects, impurities, and complex conduction processes, such an ideal response for the grain core is seldom obtained. This leads to a nonideal Debye-like response, i.e., a depressed arc in Z* and M* plots and a nonideal Debye response in Z″ and M″ spectra with a FWHM >1.14 decades. Such responses cannot be treated accurately using a simple RC circuit 4,6 and normally require the addition of a constant phase element (CPE) to the equivalent circuit. One of the first attempts to correlate the microstructure of an electroceramic with a mathematical combination of resistors and capacitors was proposed in the late 1960s by Bauerle. 3 This attributed the response of the ceramic to two parallel RC elements connected in series, one assigned to the grain core and the other to the grain boundary. This successfully represented an ion-conducting ceramic and modeled the dual-arc Z* plots obtained from experimental IS data.
This simple model was further developed into a three-dimensional resistive boundary layer model 7 and later into the well-known brick layer model (BLM) in the early 1980s. 8,9 The BLM is a general representation of a ceramic using the analogy of bricks surrounded by mortar to represent grain cores surrounded by grain boundaries. Nafe 10 in the mid-1980s developed this further to allow the possibility of current flow around the grain core, through the grain core, or a combination of both, by summing pathways (where appropriate) in parallel. Using these approximations it is possible to convert bulk data such as resistances and capacitances into intrinsic material properties such as conductivity and permittivity for the grain core. However, due to the unknown geometry of the grain boundaries, this method is generally considered unreliable for extracting grain-boundary conductivity and permittivity values. The BLM method has also been incorporated into a finite difference pixel-based simulation to calculate the current distribution. 11-13 Here, a pixel consists of six orthogonal nested cubes, each being assigned an RC element with the properties of a grain core or grain boundary. These pixels form the points on which the conduction path can be calculated, with the nested cubes allowing a 3D interconnectivity of the microstructure to be constructed that the previous BLM methods could not provide. This not only allows the treatment of current pathways, thus replicating IS data, but also permits the BLM to be used for grain-core volume fractions from zero to unity with no breakdown of the calculation. These BLM methods, however, all have two intrinsic limitations. First, they simulate grains as cubes or regular shapes. Studies by Kidner et al. 13 varying the imposed shape of simulated grains have shown that the cubic grain approximation is only applicable to micrometer-sized grains in ceramics and is no longer valid for nano-sized grains. Second, the pathway the models predict through the sample is dependent on the nested cube connections. As this is limited to six per pixel, it cannot fully represent the complex conduction paths that are possible in ceramics with irregular grain shapes. An alternative approach to simulate the electrical response of an electroceramic is to use effective medium theory. 14,15 This is based on Maxwell's concept of an effective medium, describing the ceramic as a collection of similarly shaped, coated spheres. Each one represents a grain core, which is coated with a shell to describe the grain boundary. The spheres are packed, either filling or partially filling an effective medium, which is then given the material properties of the grain core and grain boundary. The system can then be solved for the conductivity as a function of the volume fraction of the grain cores. This method has been successfully used to determine the electrical response of heterogeneous ceramics and to extract values for grain-core and grain-boundary conductivity and permittivity. One major drawback is that the model does not resemble the real microstructure of a ceramic. 13 Simulating the IS response of an electroceramic using finite element modeling can overcome the deficiencies associated with the methods discussed earlier. The finite element method (FEM) is a powerful tool widely used for numerical modeling and simulation in many areas of science and engineering.
The idea of using the FEM to model IS data is not new and has been successful in describing highly resistive grain boundaries 16-18; however, this approach has been limited to two-dimensional models, and a comprehensive treatment of granular three-dimensional samples is still lacking. Here, we present a finite element package, developed in-house, to simulate IS data for electroceramics using realistic microstructures. We then apply this FEM to various electrical microstructures to simulate IS spectra. Using an appropriate equivalent circuit to extract the resistance and capacitance of the electroactive grain and grain-boundary components, we apply a brick layer method to estimate the corresponding conductivity and permittivity of the components. These values are then compared with the input data of the simulations to highlight the appropriate and limiting conditions of this method for data analysis.

II. Three-Dimensional Finite Element Approach

When an alternating voltage is applied across a material, it generates a time-varying electric field, causing the propagation of charge carriers such as electrons, holes, or ions that generate a current through the material. The temporal evolution of the electric field can be established by solving Maxwell's equations in time and space. Here, we highlight how this can be achieved within a finite element framework. We assume the material properties to be isotropic, linear, and time independent, allowing the problem to be simplified. We also assume that, in the mHz-to-MHz frequency range, inductive effects are negligible compared with the capacitive behavior. This allows the relationship of the time-dependent electric displacement to the electric field to be simplified to

D(r, t) = ε(r) E(r, t)   (1)

where E(r, t) is the local electric field and D(r, t) is the electric displacement at time t and position r. As no time dispersion is considered, the electric permittivity ε(r) is a function only of position. The differential form of Maxwell's continuity equation is

∇ · j(r, t) + ∂q(r, t)/∂t = 0   (2)

where j is the current density and q is the charge density. The current density can be written as

j(r, t) = j_c + j_d = σ(r) E(r, t) + ∂D(r, t)/∂t   (3)

where σ is the conductivity, j_c is the conduction current density given by the differential form of Ohm's law, and j_d is the displacement current density. Assuming the isotropic case, such that D(r, t) = ε(r) E(r, t), the current density can be written as

j(r, t) = σ(r) E(r, t) + ε(r) ∂E(r, t)/∂t   (4)

so that the impedance, Z, is governed by a real conductivity, σ(r), and an imaginary susceptibility, iωε(r), where i = √−1 and ω is the angular frequency. Using the differential form of Gauss's law (Eq. (5)), in which φ is the electric potential and ε_o is the permittivity of free space, and combining Eqs. (4) and (5), Maxwell's Eq. (2) is transformed into

∇ · j(r, t) = −∇ · ( σ(r) ∇φ(r, t) + ε(r) ∂/∂t ∇φ(r, t) ) = 0   (6)

Using a time-domain finite element method (TDFEM) allows us to approximate the electric potential, φ(r, t), in Eq. (6) as a function of space and time. This permits the current density to be calculated by integrating over the whole sample, which in turn allows simulation of the IS response of an electroceramic. Implementing Dirichlet boundary conditions at the electrode-air interface fixes the electric potential. We assume that the displacement currents crossing the free surface of the material are zero by using Neumann boundary conditions. A powerful aspect of this approach is that it allows the complete microstructure of the electrochemical system, including contacts, grain boundaries, and grain cores, to be created, meshed, and analyzed for its influence on IS data.
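The governing equation (6) can be illustrated with a deliberately simplified, one-dimensional frequency-domain sketch: a layered stack is discretized into elements with complex admittance proportional to σ(x) + iωε(x), the resulting linear system is solved for the potential, and the impedance follows from the current. This is only a toy analogue of the three-dimensional time-domain implementation described here, and the material values and geometry are illustrative assumptions.

```python
import numpy as np

eps0 = 8.854e-12  # F/m

def stack_impedance(omega, sigma, eps_r, h, area):
    """Quasi-static impedance of a 1D layered stack from the discretized form
    of Eq. (6).  sigma, eps_r, h are per-element arrays (conductivity S/m,
    relative permittivity, element thickness m); area is the cross-section."""
    n_el = len(sigma)
    g = (sigma + 1j * omega * eps0 * eps_r) * area / h   # element admittances
    k = np.zeros((n_el + 1, n_el + 1), dtype=complex)    # global "stiffness"
    for e in range(n_el):
        k[e, e] += g[e]; k[e + 1, e + 1] += g[e]
        k[e, e + 1] -= g[e]; k[e + 1, e] -= g[e]
    v0 = 1.0                                             # applied potential (V)
    phi = np.zeros(n_el + 1, dtype=complex)
    phi[0], phi[-1] = v0, 0.0                            # Dirichlet BCs
    interior = slice(1, n_el)
    rhs = -k[interior, 0] * v0                           # move BCs to the RHS
    phi[interior] = np.linalg.solve(k[interior, interior], rhs)
    current = g[0] * (phi[0] - phi[1])                   # current through element 0
    return v0 / current

# Two-layer stack (grain-core-like and grain-boundary-like), each split into
# 10 elements; the values are illustrative placeholders.
n_sub = 10
sigma = np.repeat([1e-4, 1e-7], n_sub)            # S/m
eps_r = np.full(2 * n_sub, 100.0)
h     = np.full(2 * n_sub, 0.5e-6 / n_sub)        # m
z = stack_impedance(2 * np.pi * 10.0, sigma, eps_r, h, area=1e-12)
print(f"Z(10 Hz) ≈ {z.real:.3g} {z.imag:+.3g}j ohm")
```

For a 1D series stack the solution reduces to the sum of the layer impedances, which provides a convenient check; the value of the full 3D approach lies in geometries where no such closed form exists.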
Each grain and grain-boundary phase can be assigned its own unique time constant (i.e., intrinsic conductivity and permittivity), and we can therefore create heterogeneity within the ceramic microstructure. This model can then be calculated using the FEM package without the requirement of an equivalent circuit consisting of a combination of R, C, and CPE elements. It should be noted that, while the electrical response of the system can be solved using this package, it cannot account for any change due to a chemical or charge transfer process.

III. Results and Discussion

(1) Model Setup

For our model design we base our technique on previous granular structure generation for magnetic simulations. 19 We first distribute an array of seed points, representing the centers of the grain cores, within a cube. A Voronoi tessellation is then performed to generate a three-dimensional structure filling the volume of the box. The arrangement of the seed points defines the structure of the system, so that if the points are distributed upon a regular grid and tessellated, a regular arrangement of cubic bricks is generated. Postprocessing of the Voronoi tessellation is then performed, eliminating any extremely small surfaces, to allow the structure to be discretized with tetrahedral elements without complications. If required, the volumes can be shrunk toward their center points to form thin gaps between individual volumes. These gaps are then filled with prism elements that can represent very thin volume regions and can be assigned their own distinct material properties. This method allows much higher volume ratios of differing thicknesses to be calculated. A volume ratio of the grain-core domain to the grain-boundary domain can then be assigned, which we denote here as Vgc:gb. Applying material parameters of conductivity and permittivity to these regions, we solve using our FEM package to simulate the frequency response of a defined sample. Typically, we consider a frequency range of 0.01 Hz–1 GHz using a potential of 100 V applied on a contact material with a conductivity of 10 kS/m.

(2) Response of Idealized Microstructures

To verify the capabilities of the FEM package we designed, simulated, and compared two simple structures: a simple layered structure, shown in Fig. 1(a), and an encased structure, depicted in Fig. 1(b). Each model is based on a cube with lateral dimensions of 1 µm and meshed using a combination of 250 000 tetrahedron and prism elements. We first consider a simple layered system. A cube is divided into two distinct layers where the conductivity, σ, is selected so that each region represents a grain core (gc) or grain boundary (gb), with σgc = 100 µS/m and σgb = 0.1 µS/m, respectively. The permittivity is held constant, with a relative permittivity of εr = 100 for both the core and boundary regions. The thicknesses of the gc and gb are chosen to be identical, forming equal volumes of each material and Vgc:gb = 1, as shown in Fig. 1(a). Here, the boundary makes up 50% of the total height of the cube.
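Using the layer dimensions and material parameters just quoted, the series (brick-layer-style) estimates of resistance and capacitance, and the corresponding two-RC impedance and electric modulus spectra, can be generated in a few lines. This is a sketch of the textbook formulas R = l/(σA) and C = ε0εrA/l rather than the FEM calculation itself.

```python
import numpy as np

eps0 = 8.854e-12
area, l_gc, l_gb = 1e-12, 0.5e-6, 0.5e-6        # m^2, m (layered cube, Fig. 1a)
sig_gc, sig_gb, eps_r = 1e-4, 1e-7, 100.0        # S/m, S/m, dimensionless

# Brick-layer-style analytical values for each layer
r_gc, c_gc = l_gc / (sig_gc * area), eps0 * eps_r * area / l_gc
r_gb, c_gb = l_gb / (sig_gb * area), eps0 * eps_r * area / l_gb
print(f"R_gc = {r_gc:.3g} ohm, C_gc = {c_gc:.3g} F")
print(f"R_gb = {r_gb:.3g} ohm, C_gb = {c_gb:.3g} F")
print(f"grain-core relaxation frequency ≈ {1/(2*np.pi*r_gc*c_gc):.3g} Hz")

# Impedance of two parallel RC elements in series, and the electric modulus
f = np.logspace(-2, 9, 400)                      # 0.01 Hz .. 1 GHz
w = 2 * np.pi * f
z = r_gc / (1 + 1j * w * r_gc * c_gc) + r_gb / (1 + 1j * w * r_gb * c_gb)
c0 = eps0 * area / (l_gc + l_gb)                 # empty-cell capacitance
m = 1j * w * c0 * z                              # electric modulus M* = jωC0·Z*
```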
A general analytical formulation can be written to describe this layered system using a BLM model with two RC elements in series, such that the total resistance R and capacitance C of the system are obtained from the thicknesses, l, and cross-sectional area, A, of the different phases, with R_i = l_i/(σ_i A) and C_i = ε_o ε_r,i A/l_i for each phase. Using the intrinsic dimensions and material properties, the analytical values of resistance and capacitance for the grain-core and -boundary phases are calculated to be R_gc = 5 GΩ and C_gc = 1.77 fF, and R_gb = 5 TΩ and C_gb = 1.77 fF, respectively. Using the FEM package we can solve this model to generate IS data that can be plotted and analyzed in all four immittance formalisms, allowing values of resistance and capacitance to be extracted. The simulated IS data are shown here in the form of a Z* plot, Fig. 2(a), with the associated M″ spectroscopic plot in Fig. 2(b). To extract the values of resistance and capacitance for both the grain core and boundary, we apply an equivalent circuit fit of two parallel RC elements connected in series. The extracted values using this method give R_gc = 5 GΩ and C_gc = 1.75 fF, and R_gb = 4.99 TΩ and C_gb = 1.76 fF, for the grain-core and -boundary phases, respectively. This shows excellent agreement with the values predicted analytically. Using the FEM IS data, the Z* arc for the layered structure also has no measurable depression angle and therefore exhibits a near-ideal Debye-like response, with a measured FWHM of 1.15 decades in the corresponding M″ spectroscopic plot. It should also be noted that the two peaks in the M″ spectroscopic plot are of equal height, indicating equal volume fractions of the two phases, as expected from the BLM where the permittivity of the grain-core and -boundary phases is assumed to be the same. We now compare this to an encased structure, where the grain-boundary material surrounds the grain core, as shown in Fig. 1(b). We maintain the same volume ratio of Vgc:gb = 1. To achieve this requires a grain-core cube of length 0.794 µm to be encased by a boundary layer with a thickness of 0.103 µm. This gives a total lateral thickness of the boundary of 0.206 µm, or 20.6% of the total thickness of the system. From the IS data in Figs. 2(a) and (b) a number of differences are observed. Although the volume ratio of the two materials is maintained, the grain-boundary response in the Z* plot is reduced from 5 to 2.25 TΩ. The M″ spectrum shows a drop in the peak height of the grain-boundary response and an increase in the peak height of the grain-core response, indicating a volume ratio more in the order of Vgc:gb = 5 than the true fraction. There is also an associated increase in the FWHM of the M″ Debye peak for the grain-core response, to 1.23 decades, signifying an apparent electrical heterogeneity in the sample. We extend this study to increased volume ratios, where for simplicity we focus on two other ratios, Vgc:gb = 10 and Vgc:gb = 100, with dimensions shown in Table I. As the volume fraction is increased to Vgc:gb = 10, the Z* arc associated with the grain-boundary response remains less than half that of the layered structure, Fig. 2(c). The M″ Debye peak associated with the grain-core response, Fig. 2(d), however, decreases in FWHM to 1.18 decades, indicating a more homogeneous grain-core electrical response than for Vgc:gb = 1, while the M″ peak height becomes comparable with that obtained from the layered structure.
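The 1.14-decade FWHM benchmark used for these comparisons can be checked numerically. The short sketch below evaluates the ideal Debye peak, M″ ∝ ωRC/(1 + (ωRC)²), on a logarithmic frequency axis and measures its width at half maximum; no values from the simulations are needed because the width is independent of R and C.

```python
import numpy as np

# Ideal Debye peak for a parallel RC element: M'' ∝ x/(1 + x^2) with x = ω·RC.
x = np.logspace(-4, 4, 200001)
m_imag = x / (1.0 + x**2)
half = 0.5 * m_imag.max()
above = np.where(m_imag >= half)[0]
fwhm_decades = np.log10(x[above[-1]] / x[above[0]])
print(f"ideal Debye FWHM ≈ {fwhm_decades:.3f} decades")   # ≈ 1.144
```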
The trend continues for Vgc:gb = 100 where, although the Z* arc associated with the grain-boundary response remains lower for the encased model, Fig. 2(e), the grain-core M″ spectroscopic response shown in Fig. 2(f) is comparable for both models and is that of a near-ideal Debye-like response. To explain this apparent electrical heterogeneity at low volume ratios we use the power of the FEM model to plot the current density through the various models, as shown in Fig. 3. Figures 3(a)-(c) show the current density through the layered structures for Vgc:gb = 1, 10, and 100, respectively. The current density is homogeneous through each layer, indicating a linear flow of current through each phase and thus resulting in the Debye-like responses in the simulated IS data. This does not occur for the encased structure. Figure 3(d) highlights that the current does not have to pass through the highly resistive grain boundaries whose normals are orthogonal to the current flow in order to reach the lower contact. These areas can be avoided, resulting in a near-zero current density within these regions. This leads not only to an increase in the current density through the grain-core regions (by over a factor of 3) but also to electrically heterogeneous behavior of the current flow. The IS data of the grain core thus show an apparent electrically heterogeneous response, with a FWHM in the M″ spectrum that exceeds 1.14 decades, even though the grain-core properties are homogeneous. This is also true at larger volume ratios, Figs. 3(e) and (f). These effects, however, are reduced as the grain boundary contributes less to the system as a whole, and thus smaller departures of the grain-core response from an ideal Debye-like response are observed. To observe the effect of this change in microstructure on the extraction of the material properties, we use the simulated IS data and the known lateral dimensions of the grain-core and -boundary regions to correct for the effects of sample geometry and obtain the intrinsic material properties. The parameters extracted from the layered structures for conductivity and relative permittivity (shown in Fig. 4 for a range of volume ratios) are all within 1% of the input parameters for both grain core and boundary. When the encased structure is analyzed in a similar way there is a strong dependency on the volume ratio. The resistance and capacitance of the core and boundary regions at large volume ratios agree with the true values; however, as the volume of the boundary region is increased, not only is the grain-boundary resistance underestimated by up to ~50% but the grain-core resistance is also underestimated by up to ~40%, Figs. 4(a) and (b). The permittivity for both the grain core and boundary is underestimated by 10%, Figs. 4(c) and (d). These results highlight how a small change in microstructure can significantly influence IS data and give rise to potential issues with extracting material properties using the BLM.

(3) Response of Complex Microstructures

Although complex phenomena are observed in these simple structures, the idealized brick-shaped grains do not represent the true complexity of ceramic microstructures. In the BLM, the connectivity between neighboring brick-shaped grains is exactly 6 facing and 12 edge paths between adjacent grains. Realistic grains do not obey this rule precisely, due to their complex structural geometry, and so the number of grain boundaries that are traversed changes, which therefore affects the impedance spectra.
We now expand our analysis from a simple two-component system to systems that incorporate 512 individually addressable grains and associated grain boundaries arranged in an 8 × 8 × 8 configuration. To highlight the significance of microstructure we considered three designs. The first system uses a simple layered structure of two materials to form a specific volume ratio. The second is based on the encased structure, featuring a regular brick-mortar system as shown in Fig. 5(a). The third builds upon the complexity of this using irregularly shaped grains created by a Voronoi tessellation of random seed points. We maintain the average volume of each region at 1 µm3, but introduce a distribution with a standard deviation of 0.5 µm3. The typical volume spread in the Voronoi model is shown in Fig. 5(c). These individual volumes are then shrunk down around their individual centers to form the desired volume ratio, where the remaining volume is discretized to account for the grain boundaries. Due to the relatively low number of grains in each model compared with experimental samples, the results are averaged over 10 simulations. Simulated IS data of the layered, brick-mortar, and Voronoi models for volume ratios of Vgc:gb = 1, 10, and 100, respectively, are shown in Fig. 6. As with the simple study, the layered structure shows a much larger grain-boundary response in the Z* plot due to the current path being forced through the whole layer. A reduction in the magnitude of the grain-boundary Z* arc is again observed when the grain-boundary material surrounds the grain core. In all cases shown here, the reduction is of the order of a third, corresponding to the fraction of grain-boundary regions that are avoided. The Z* response of the grain boundary for the complex microstructure of the Voronoi model decreases by a further 5%, whereas the core remains relatively unaffected. Figure 6(b) highlights the effect on the M″ spectrum, where the peak height of the grain-core response increases with the complexity of the microstructure and therefore of the current pathways. As the volume ratio increases from Vgc:gb = 1 to 100, the M″ peak associated with the grain core for the brick-mortar and Voronoi models begins to converge to that of the layered structure, indicating, as before, a more ideal Debye-like response. As shown in Fig. 7, at high volume ratios the grain-core response in the brick-mortar system shows a near-ideal Debye response where the FWHM is 1.15 decades. This is in contrast to the Voronoi model structure, which still indicates significant nonideality with a FWHM of 1.20 decades. As the grain-boundary region becomes larger, the M″ Debye peak FWHM increases to 1.25 and 1.32 decades for the brick-mortar and Voronoi models, respectively. Current density plots at low frequency are shown in Figs. 8(a), (b), and (c) for Vgc:gb = 1, 10, and 100, respectively. It is clear from these that a combination of irregularly shaped grains with higher resistance grain boundaries influences the associated current flow around these areas, resulting in a large nonlinear response of the current density in the grain cores. At each volume ratio, an inset image of a typical grain exhibiting significant nonlinearity of the current is shown. At equal fractions of grain core and boundary, the current density through a single electrically homogeneous grain can be as large as 90% of the total.
A similar but reduced effect (60% and 45% of the total current) is also observed at larger volume fractions (Vgc:gb = 10 and 100, respectively). We can use this FEM to predict the effect of microstructure on IS data and therefore comment on the confidence of using the BLM for such systems. We follow the standard experimental procedure to analyze the results of the model. First, we create a cross section of the model, similar to the images in Fig. 8, and use a line-scan method to estimate the percentage thickness of the grain-boundary and grain-core regions for the system. Various slices through the model were used, and on average these values agreed well with Table I for the various volume fractions. Using these values we correct for the geometry and extract the conductivity and permittivity of the grain-core and -boundary components. As shown in Fig. 9, the microstructure affects both the grain-core and -boundary response. At high grain-boundary volumes (where the boundary accounts for 20% of the thickness) a deviation of over 60% from the expected (input) values is obtained for the conductivity and over 10% for the relative permittivity. As before, this converges to the expected grain-core material properties as the volume fraction of grain core is increased. At values of Vgc:gb > 20, the extracted values are all within 10% of the expected values. Thus, the BLM method is shown to predict the correct values to within 10% for homogeneous granular structures when the grain boundary contributes 1% of the total thickness of the sample. Care should be taken with the grain-boundary material properties, however, as even at this ratio the extracted properties are overestimated by ~12% for the conductivity and ~20% for the permittivity.

IV. Conclusions

A fast and efficient FEM framework has been developed and used to allow a comprehensive study of IS data for three-dimensional heterogeneous ceramics. In the model presented here we incorporate contacts, grain boundaries, and grain cores to replicate the microstructure of realistic ceramics; however, the flexibility of this code allows us to simulate virtually any microstructure, such as porous, nano-sized, and multiple-phase electroceramics. We show that an electrically homogeneous grain core can give rise to an apparently heterogeneous IS response due to highly resistive grain-boundary regions. Using the BLM (based on input material parameters where the grain-boundary resistivity is four orders of magnitude larger than the grain-core resistivity but the permittivity of the two phases is the same) with its associated equivalent circuit to extract material properties (conductivity and permittivity) from IS data can lead to potential discrepancies of up to 60% of their true values based only on changes in microstructure.
Local interactions and homophily effects in actor collaboration networks for urban resilience governance

Introduction

Collaboration among diverse actors is critical for effective resilience planning and management of interdependent infrastructure systems (IISs) (Li et al. 2019, 2021). In the context of this study, resilience is defined as "the capacity of human and infrastructure systems to prepare and plan for, absorb, recover from, or more successfully adapt to actual or potential adverse events (National Research Council 2012)." This definition highlights the importance of human systems affecting urban resilience that involve actors from diverse urban sectors (e.g., transportation, emergency response, environmental conservation, and flood control) with diverse priorities, resources, and responsibilities. For example, actors from transportation sectors would focus on the improvement of roadway networks, while actors from flood control and environmental conservation may focus on flood mitigation and natural resource preservation. Urban resilience improvement is a collective action problem and therefore needs to account for complex interactions and collaboration among diverse actors (Norris et al. 2008). Existing studies highlight the importance of actor collaboration for planning (Godschalk 2003; Woodruff 2018), emergency response (Chen and Ji 2021; Eisenberg et al. 2020; Kapucu 2005; Li and Ji 2021), and recovery (Aldrich 2011; Berke et al. 1993; Gajewski et al. 2011; Rajput et al. 2020) before, during, and after urban disruptions. In the context of resilience planning and management of IISs, inadequate collaboration and coordination among diverse actors in the planning process exacerbates a lack of institutional connectedness and would lead to contradictions and inconsistencies among networks of plans (e.g., land use, hazard mitigation, and environmental conservation) and increase social and physical vulnerabilities to urban disruptions (Berke et al. 2015; Malecha et al. 2018). For example, inconsistencies between land use approaches and hazard mitigation plans would allow urban growth in hazard-prone areas (Godschalk 2003). Existing studies related to disaster management and environmental governance have explored factors that form collaboration and social ties among diverse actors (Kapucu and Van Wart 2006; Nohrstedt and Bodin 2019). There is empirical evidence that actors with cognitive, organizational, and geographical proximity tend to form collaborations and social ties in inter-organizational networks (Balland 2012; Broekel and Hartog 2013). Matinheikki et al. (2016, 2017) found that actors with shared values tend to establish collaborations in a construction project. Hamilton et al. (2018) found that actors tend to engage in within-level (e.g., regional, local, and state) linkages in environmental governance compared with cross-level linkages. Studies regarding social network analysis demonstrated the homophily phenomenon, which implies that actors with similar attributes tend to establish ties with each other (Gerber et al. 2013; Kossinets and Watts 2009; Shalizi and Thomas 2011). On the other hand, the heterophily phenomenon also exists; studies have shown that actors with dissimilar attributes tend to form social ties (Barranco et al. 2019; Kimura and Hayakawa 2008; Lozares et al. 2014; Rivera et al. 2010).
The theory of structural holes in social networks suggests that actors seeking to advance their positions and to broaden their influence tend to form ties with those with different resources and skills (Burt 2004; Lazega and Burt 1995). McAllister et al. (2015) also argued that the links in networks related to urban governance are shaped by the choices that actors make either to increase bonding capital, reinforcing shared norms and trust, or to increase bridging capital, linking with exotic resources. Asikainen et al. (2020) found that triadic closure (i.e., a structural property representing ties among three actors) and choice homophily are two important mechanisms for the evolution of social networks (e.g., communication networks), and that these two mechanisms are dependent upon each other. Although multiple existing studies explored the mechanisms that form collaboration and social ties in different fields, such as organizational teams, very few studies investigated the drivers for collaboration in actor collaboration networks for resilience planning and management of IISs. Also, most collective action studies in the context of disaster management and environmental governance focus primarily on the structural properties of actors' social networks and have paid limited attention to local interactions (based on examining motifs as topological signatures) and homophily effects (based on assessment of actor node attributes). The examination of these two mechanisms is essential for understanding and improving essential coordination in actors' networks for resilience planning and management of IISs. In this study, therefore, our goal is to examine two important mechanisms for actor collaborations: local interactions and homophily effects in resilience planning and management of IISs. In this paper, we define local interactions as stakeholder interactions on a small scale, which can be examined through subgraphs or motifs in complex networks (Asikainen et al. 2020; Robins and Alexander 2004; Vázquez et al. 2004). We mapped actor collaboration networks for hazard mitigation before Hurricane Harvey based on a stakeholder survey administered in Harris County, Texas. The stakeholder survey captured collaboration among actors in various urban sectors (e.g., transportation, emergency response, flood control, environmental conservation, and community development) involved in hazard mitigation efforts. Also, the survey examined the preferences of actors towards different types of flood risk reduction policies (e.g., land use approaches, monetary policies, and engineering policies). Based on the mapped collaboration networks, we adopted network motif analysis and exponential random graph models (ERGMs) to examine the drivers for actor collaboration formation. We elaborate on the network motif analysis and ERGMs in the following sections.

Study context and data collection

During Hurricane Harvey, a Category 4 hurricane that made landfall on the Texas Gulf Coast in 2017, flooding due to the release of water from the Addicks and Barker reservoirs inflicted property and infrastructure damage in Harris County totaling $125 billion, particularly in the Houston area. The release of water was necessary to avoid the even more severe damage that would have occurred if the impounded water had breached the dams (NOAA & NHC 2018). Houston is a flood-prone city: Hurricane Harvey is only one event in the long history of hurricanes in the Houston area. From 1935 to 2017, ten major flooding events occurred in the Houston area.
Just before Hurricane Harvey, the Memorial Day Floods in 2015 and the Tax Day Floods in 2016 wreaked havoc in Houston and caused 16 casualties and more than $1 billion in losses (Berke 2019). After Hurricane Harvey, we administered a stakeholder survey that focused on the Harris County area in Texas. The intent of the survey was to collect, among other things, essential data regarding actor collaboration for hazard mitigation and resilience planning of IISs, as well as actor preferences for different flood risk reduction policies. To map the actor collaboration network, we identified 95 influential actors involved in resilience planning from different urban sectors, including community development (CD), flood control (FC), transportation (TT), environmental conservation (EC), and emergency response (ER). These actors were listed in the survey roster as the actors that the survey respondents may have collaborated with. The survey question used to collect the collaboration data is included in the supplementary material. Furthermore, we developed flood risk reduction policy actions to investigate the preferences of actors from different urban sectors. The developed risk reduction policy actions included land use policies, engineering policies, and monetary policies. We identified these policies based on the strategies for urban flood resilience improvement discussed in the existing literature (Berke and Smith 2009; Brody et al. 2009, 2013; Burby 1998; Burby et al. 1999; Godschalk 2003). Table 1 lists the policy actions in the survey. Please see the supplementary material for the survey questions used to identify respondents' preferences for the developed policy actions. On January 31, 2018, we conducted a pilot test of the stakeholder survey to collect feedback on the first-round survey instrument. For the pilot test, we randomly selected a group of 15 individuals from an initial list of selected organizations. We identified an initial list of organizations from different urban sectors, such as the Harris County Flood Control District, the City of Houston Floodplain Management Office, the Texas Department of Transportation, the Urban Land Institute, and The Nature Conservancy. We then used a snowball sampling method to expand the initial list by asking respondents to recommend relevant individuals and organizations to participate in the survey. Four respondents completed the pilot test, which concluded on February 12, 2018. We refined the survey instrument based on the feedback received in the pilot test. The stakeholder survey was officially launched on February 15, 2018 and closed on April 10, 2018. We sent out a total of 795 invitations in 25 waves. We selected organizations involved in resilience planning from different urban sectors, both within and outside government, and at different scales (e.g., local, county, regional, and state). We selected respondents within organizations who were in positions of management and planning and thus were informed about planning and influential in their organizations. Finally, 198 individual respondents, representing 160 different departments of 109 organizations (approximately a 30% response rate), completed the survey.

Network models

We mapped the collaboration among diverse actors involved in hazard mitigation and resilience planning of IISs based on the survey results. We also mapped actor collaboration networks at different collaboration frequency levels, such as daily and weekly collaboration networks.
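As an illustration of how such survey data can be turned into frequency-specific bipartite collaboration networks, the sketch below uses networkx. The respondent identifiers, roster actor names, sectors, and policy preferences are hypothetical placeholders, and treating each frequency level as a separate exact-match network is a simplifying assumption rather than the study's exact procedure.

```python
import networkx as nx

# Hypothetical survey records: (respondent, roster actor, collaboration frequency)
records = [
    ("resp_01", "Actor_FC_1", "monthly"),
    ("resp_01", "Actor_TT_2", "weekly"),
    ("resp_02", "Actor_FC_1", "monthly"),
]
# Hypothetical respondent attributes: urban sector and preference for policy P1
attrs = {
    "resp_01": {"sector": "CD", "P1": "Support"},
    "resp_02": {"sector": "FC", "P1": "Neutral"},
}

def build_network(records, attrs, frequency="monthly"):
    """Bipartite collaboration network at a given collaboration frequency."""
    g = nx.Graph()
    for resp, actor, freq in records:
        if freq != frequency:          # simplification: exact frequency match
            continue
        g.add_node(resp, bipartite="respondent", **attrs.get(resp, {}))
        g.add_node(actor, bipartite="roster")
        g.add_edge(resp, actor)
    return g

g_monthly = build_network(records, attrs)
print(g_monthly.number_of_nodes(), "nodes,", g_monthly.number_of_edges(), "edges")
```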
The mapped networks are bipartite networks with two node sets: one comprises the actors in the survey roster; the other, the survey respondents. The edges in the mapped network represent collaborations among the actors for hazard mitigation and resilience planning of IISs. Figure 1 illustrates the way the actor collaboration network is mapped. Considering that monthly collaboration was the most representative answer, our analysis focused on the monthly collaboration network. We assigned the actor preferences for flood risk reduction policy actions as attributes to the nodes of the mapped actor collaboration network. Each node could have one of three preference states for each policy action: Oppose, Neutral, and Support. During data processing, we grouped the survey results "Strongly oppose" with "Oppose" and "Strongly support" with "Support." Furthermore, we divided survey respondents into five urban sectors based on the organizations and departments they represented: community development (CD), flood control (FC), transportation (TT), environmental conservation (EC), and emergency response (ER) (Farahmand et al. 2020; Li et al. 2019, 2020c). Table 2 illustrates examples of the classified urban sectors. The urban sectors of the actors were also assigned to each node as one of the node attributes in the mapped collaboration network to examine the homophily effect. Table 3 summarizes the node attributes that we accounted for in the mapped collaboration network.

Methodology

The examination of the local interactions and homophily effects that form social ties and contribute to the evolution of social networks is usually regarded as a bottom-up process (Boyd and Jonas 2001). As such, network motif analysis and ERGMs are suitable approaches for revealing the network configurations that encode important information related to tie and collaboration formation. Hence, we adopted network motif analysis for the examination of local interactions and ERGMs for the assessment of homophily effects in the actor network in the context of resilience planning and management of IISs in Harris County.

Network motif analysis

Network motifs are defined as the network structural elements in complex networks that have significantly larger counts compared with random networks (Milo et al. 2002). Compared with global network measures, network motifs reveal the patterns of local interactions, thus playing an important role in understanding the hidden mechanisms behind complex networks. Network motifs have been widely studied in social, neurobiological, biochemical, financial, and engineering networks. To name a few studies, Dey et al. (2019) showed that distributions of network motifs (i.e., the patterns of local interactions) are strongly connected with the robustness of systems (e.g., power-grid networks, transportation networks). Saracco et al. (2016) detected early-warning signs of the financial crisis by analyzing the motifs of bipartite world trade networks. Schneider et al. (2013) studied the motifs of human mobility networks and unraveled the underlying mobility patterns. Gorochowski et al. (2018) studied the organization of 12 basic motif clusters in natural and engineered networks; the results showed that the organization of motif clusters differed between networks of various domains. Robins and Alexander (2004) examined seven bipartite network configurations to study small-world effects and distance in corporate interlocking networks.
These examples highlight the growing use and capability of network motif analysis for studying the local interactions and hidden mechanisms that contribute to the robustness, organization, and functionality of complex networks. In this study, we focused on seven basic network configurations of bipartite networks without network projections, because studies have shown that network projections may lose important information about bipartite networks (Robins and Alexander 2004; Zhou et al. 2007). Figure 2 illustrates the seven network configurations of bipartite networks, in which blue squares and red circles represent the two node sets (R, the roster actors, and P, the survey participants, respectively, in this study). Table 4 shows the relevant statistics and interpretations of the network configurations. As illustrated in Table 4, Robins and Alexander (2004) introduced two new configurations, three-trails and cycles, to study the local structures of bipartite networks. It is worth noting that these two configurations would lose the information about local interactions if we conducted network projections (three-trails would become one edge and cycles would become one weighted edge). Therefore, it is essential to include these two network configurations for bipartite networks. Robins and Alexander (2004) argued that three-trails reflect the global connectivity of a bipartite network while cycles represent local closures in the bipartite network. For bipartite networks with similar sizes and densities, more three-trails and fewer cycles increase the level of connectivity and shorten the average path of the network, while more cycles and fewer three-trails indicate stronger localized closeness. The bipartite clustering coefficient, 4 × C4/L3, quantifies the length of the average path and the strength of local interactions in the bipartite network. Network motif analysis also involves comparing the counts of network configurations in the examined network with those in random networks. In this research, we generated random bipartite networks with the same degree distributions and compared them with the examined network (Saracco et al. 2015). The configuration model, which generates random graphs with a fixed node degree distribution, is regarded as one of the most insightful null models for monopartite networks (Chung and Lu 2002); we extended it to bipartite networks (Saracco et al. 2015). In this analysis, we used sequential importance sampling to simulate bipartite networks with fixed degree distributions (Admiraal and Handcock 2008; Blitzstein and Diaconis 2011). Although network motif analysis is a powerful method for investigating local interactions and revealing the hidden mechanisms behind complex collaboration networks, it does not fully account for node attributes. Therefore, we adopted ERGMs to investigate the extent to which node attributes affect the ties in the actor collaboration network; a small numerical sketch of the motif statistics is given below.
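As an illustration only, the sketch below counts the two higher-order bipartite statistics discussed above, three-paths (L3) and four-cycles (C4), computes the bipartite clustering coefficient 4 × C4/L3, and compares the observed value against degree-preserving random bipartite graphs. The toy network, the number of null replicates, and the use of networkx's bipartite configuration model (with parallel edges collapsed) are assumptions for illustration; the study itself used sequential importance sampling via the Networksis package in R.

```python
# Sketch of bipartite motif statistics and a degree-preserving null model (toy data).
from itertools import combinations
import random
import networkx as nx
from networkx.algorithms import bipartite

def three_paths(G):
    # In a bipartite graph (no triangles), each 3-edge path is counted once per middle edge.
    return sum((G.degree(u) - 1) * (G.degree(v) - 1) for u, v in G.edges())

def four_cycles(G, mode_nodes):
    # Each 4-cycle is determined by a pair of same-mode nodes sharing >= 2 neighbours.
    c4 = 0
    for x, y in combinations(mode_nodes, 2):
        shared = len(set(G[x]) & set(G[y]))
        c4 += shared * (shared - 1) // 2
    return c4

# Toy bipartite network: respondents P0..P9 reporting collaboration with roster actors R0..R5.
rng = random.Random(1)
respondents = [f"P{i}" for i in range(10)]
actors = [f"R{j}" for j in range(6)]
G = nx.Graph()
G.add_nodes_from(respondents, bipartite=0)
G.add_nodes_from(actors, bipartite=1)
G.add_edges_from((p, a) for p in respondents for a in actors if rng.random() < 0.35)

obs_l3, obs_c4 = three_paths(G), four_cycles(G, respondents)
obs_cc = 4 * obs_c4 / obs_l3 if obs_l3 else 0.0

# Null model: random bipartite graphs with (approximately) the same degree sequences.
p_deg = [G.degree(p) for p in respondents]
a_deg = [G.degree(a) for a in actors]
null_cc = []
for seed in range(200):
    R = bipartite.configuration_model(p_deg, a_deg, seed=seed)
    R = nx.Graph(R)  # collapse parallel edges (an approximation to a simple graph)
    top = [n for n, d in R.nodes(data=True) if d["bipartite"] == 0]
    l3, c4 = three_paths(R), four_cycles(R, top)
    null_cc.append(4 * c4 / l3 if l3 else 0.0)

mean = sum(null_cc) / len(null_cc)
sd = (sum((x - mean) ** 2 for x in null_cc) / (len(null_cc) - 1)) ** 0.5
z = (obs_cc - mean) / max(sd, 1e-9)
print(f"observed 4*C4/L3 = {obs_cc:.3f}, null mean = {mean:.3f}, Z = {z:.2f}")
```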
Exponential random graph models (ERGMs)

ERGMs are a family of statistical models that fit local structures or network configurations to model network formation using maximum likelihood estimation (Wang et al. 2009). In a defined network space Y that includes all possible networks with n nodes, consider a random network Y ∈ Y, where Y_ij = 0 or 1 depending on whether the pair of nodes (i, j) is connected or not; the probability of Y can then be determined from the counts of a set of network configurations. The general form of ERGMs can be written as

P(Y = y) = (1/k(θ)) exp( Σ_i θ_i S_i(y) )    (1)

where S_i(y) represents any user-defined network statistic measured on the network y, θ_i is the associated parameter to be estimated, and k(θ) is the normalizing constant that ensures the legitimacy of the defined probability distribution. Here, we provide an illustrative model inspired by Bomiriha (2014) for general readers. Consider an undirected friendship network in which edges represent mutual friendships, with tie probability p1 between students living in the same dormitory and probability p2 between students living in different dormitories. The ERGM for investigating p1 and p2 can be written as

P(Y = y) ∝ exp( θ1 Σ_{i<j} y_ij + θ2 Σ_{i<j} y_ij I{i and j live in the same dormitory} )    (2)

The first statistic in Eq. (2) is the number of edges; the second is the number of edges connecting nodes living in the same dormitory. Based on this model, we can easily derive that p1 equals e^(θ1+θ2)/(1 + e^(θ1+θ2)) and p2 equals e^(θ1)/(1 + e^(θ1)). Furthermore, the coefficient θ2 indicates a homophily effect (θ2 > 0) or a heterophily effect (θ2 < 0) in the studied friendship network. More in-depth discussion of the theory of ERGMs can be found in Robins et al. (1999, 2007) and, specifically for bipartite networks, in Wang et al. (2009). ERGMs provide a powerful tool for generating quantitative evidence about the tie formation process related to network configurations and node attributes. The existing literature has adopted ERGMs to study the dynamics and mechanisms of social tie formation in different kinds of networks, such as collaborative networks (Nohrstedt and Bodin 2019), partnership networks for urban development (McAllister et al. 2015), inter-organizational knowledge sharing networks (Broekel and Hartog 2013), Facebook friendship networks (Traud et al. 2011, 2012; Wimmer and Lewis 2010), and hospital networks of patient transfers (Lomi and Pallotti 2012). In this paper, we focus on the examination of the homophily effect in the actor collaboration network in resilience planning and management of interdependent infrastructure systems. Homophily in a bipartite network is represented by two neighbors with the same attributes connected to the same node (illustrated in Fig. 3), because they cannot directly connect with each other (Bomiriha 2014).

Fig. 3 Homophily and heterophily effects in bipartite networks: squares represent the node set of actors in the survey roster; circles represent the node set of survey participants; node colors represent different node attributes.

We adopted the network statistics developed by Bomiriha (2014) to model homophily in bipartite networks. Equation 3 lists the included network statistics:

θ1 · edges + θ2 · nodematch(urban sector CD) + ⋯ + θ6 · nodematch(TT) + θ7 · nodematch(P1) + ⋯ + θ22 · nodematch(P16)    (3)

In Eq. 3, edges is the network statistic counting edges in the mapped bipartite network. Nodematch (urban sector CD) is the network statistic counting pairs of survey respondents in the same urban sector, community development (CD), who collaborate with the same actor in the survey roster. Likewise, nodematch (P1) counts pairs of survey respondents who both support policy action P1 and collaborate with the same actor in the survey roster. Detailed calculations of these network statistics (i.e., nodematch) are implemented in the R package ergm (Hunter et al. 2008). The parameters in Eq. 3 were estimated by Monte Carlo maximum likelihood estimation; the parameters θ2–θ22 therefore indicate a homophily effect when positive and a heterophily effect when negative.
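The following small calculation illustrates, with hypothetical coefficients (not estimates from this study), how the dormitory-example coefficients θ1 and θ2 translate into the tie probabilities p1 and p2 derived above.

```python
# Numerical illustration of the dormitory ERGM example (hypothetical coefficients).
import math

def tie_probability(*thetas):
    # logistic transform of the summed coefficients of the change statistics
    s = sum(thetas)
    return math.exp(s) / (1 + math.exp(s))

theta1 = -2.0   # baseline edge coefficient (assumed)
theta2 = 1.2    # same-dormitory (homophily) coefficient (assumed)

p2 = tie_probability(theta1)            # students in different dormitories
p1 = tie_probability(theta1, theta2)    # students in the same dormitory
print(f"p1 = {p1:.3f}, p2 = {p2:.3f}")  # theta2 > 0 implies p1 > p2, i.e. homophily
```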
Results

The network motif analysis shows that the actor collaboration network has strong local interactions. Figure 4 illustrates the network configurations in the observed network and those in the 1000 simulated random models. Table 5 shows the detailed statistics of the network configurations in the observed network as well as their mean values and standard deviations in the random models. From Fig. 4 and Table 5, we can see that the observed actor collaboration network has significantly fewer three-trails (Z-score: −15.4) and more cycles (Z-score: +6.51) than the simulated random models. Also, the local clustering coefficient of the observed actor collaboration network is significantly higher (Z-score: +15.83) than in the simulated random models. Note that the algorithm we applied, the Networksis package in R, fixes the numbers of edges, two-stars, and three-stars when generating random models with the same degree distributions (Admiraal and Handcock 2008). The results of the motif analysis indicate that: (1) there are hidden mechanisms and additional social processes shaping the collaborations among actors, given the significantly different counts of three-trails and cycles compared with the random models; and (2) the observed actor collaboration network has a long average path length and strong local interactions, given its fewer three-trails, more cycles, and higher clustering coefficient compared with the random models. The results imply that the formation of actor collaborations is driven by strong local interactions, such as collaborations within the same urban sectors or among actors with the same policy preferences. Also, collaborations outside the local clusters are limited, reflected in the long average network path length. The ERGMs help in further investigating the factors affecting actor collaboration. The ERGMs demonstrate both significant homophily effects and significant heterophily effects for actor collaboration in resilience planning and management of IISs. The results show significant homophily effects within the transportation sector, significant heterophily effects within the emergency response sector, and varied homophily and heterophily effects across the different flood risk reduction policy actions. This finding implies that: (1) actors in the transportation sector are less likely to build collaboration ties with actors from other urban sectors; and (2) emergency response actors are likely to form collaboration ties with actors from other sectors. Table 6 shows the estimated coefficients of the variables in the ERGMs. We include the Markov chain Monte Carlo (MCMC) diagnostic plots in the supplementary information; the plots were obtained from networks randomly generated from the fitted models. The MCMC diagnostic plots showed evidence of random variation and approximately normal-shaped distributions centered at zero, which is consistent with good performance in model fitting (Bomiriha 2014).
We can observe from Table 6 that the probability of an edge, excluding all the homophily effects in the table, is e^(−2.8619) = 0.057, which is lower than the density of the observed network (0.0756). This result implies that the structure of the observed network is shaped by homophily effects, consistent with the network motif analysis showing a strong local interaction effect (actors of the same sector are more likely to collaborate with each other). Also, we found that actors from the emergency response sector (ER) showed significant heterophily effects. When an actor from ER collaborates with an actor in the survey roster, another actor from the emergency response sector has a reduced probability (e^(−2.8619−0.9879) = 0.021) of collaborating with the same actor in the survey roster. This result is consistent with the real situation in which actors from the emergency response sector usually collaborate with actors from other sectors (e.g., the flood control and transportation sectors) for hazard mitigation during disasters. Furthermore, actors from the transportation sector (TT) showed a significant homophily effect. When an actor from the transportation sector collaborates with an actor in the survey roster, another actor from the transportation sector has an increased probability (e^(−2.8619+1.2971) = 0.209) of connecting with the same actor in the survey roster. This result shows strong local interactions in the transportation sector. These results are also consistent with our former studies of actor collaboration within and across different urban sectors for hazard mitigation and resilience planning of IISs (Li et al. 2019): actors from the transportation sector showed the highest within-sector collaboration, while actors from the emergency response sector had the highest across-sector collaboration. However, we did not find significant homophily effects in the other urban sectors, such as the community development (CD), environmental conservation (EC) and flood control (FC) sectors. This result may imply that the formation of collaboration is not purely due to organizational proximity. We also found significant heterophily effects for some flood risk reduction policy actions, including P1 (Limit new development), P3 (Strengthen infrastructure), P7 (Build levees), P10 (Improve stormwater system), P12 (Temporarily prohibit development after disasters), and P14 (Limit development of public facilities). Actors with preferences for these policy actions had a significantly reduced probability of collaborating with the same actors in the survey roster. Based on structural hole theory, this heterophily effect may suggest that collaboration among these actors was sought to increase bridging capital, to seek exotic resources and skills to advance their positions, and to broaden their influence in the network (Burt 2004; Lazega and Burt 1995; McAllister et al. 2015). We also found significant homophily effects for some flood risk reduction policy actions, including P2 (Elevate buildings), P8 (Build reservoirs/retention ponds), P9 (Protect wetlands/open space), P15 (Limit rebuilding in frequent flooding areas), and P16 (Buy out or acquire property). Actors indicating preferences for these policy actions had a significantly increased probability of collaborating with the same actors in the survey roster. The intent of collaboration among these actors was to increase bonding capital and to reinforce shared norms and trust (McAllister et al. 2015). The short calculation below reproduces the conditional probabilities quoted above from the reported coefficients.
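For readers who want to retrace the numbers, the snippet below reproduces the back-of-envelope calculation in the preceding paragraphs from the reported coefficients (edges = −2.8619, nodematch(ER) = −0.9879, nodematch(TT) = +1.2971). The coefficients are taken from the text; the e^θ form of the calculation simply mirrors the one used there.

```python
# Reproducing the conditional collaboration probabilities quoted in the text
# from the reported ERGM coefficients.
import math

edges = -2.8619
nodematch = {"ER": -0.9879, "TT": 1.2971}

print(f"baseline edge:  {math.exp(edges):.3f}")                     # ~0.057
print(f"second ER tie:  {math.exp(edges + nodematch['ER']):.3f}")   # ~0.021
print(f"second TT tie:  {math.exp(edges + nodematch['TT']):.3f}")   # ~0.209
```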
Discussion

The results did not indicate that the urban sector of an actor was the sole driver of collaboration formation among actors. Actors from the flood control, environmental conservation, and community development sectors did not show significant homophily effects in the formation of ties. The results indicated that actors from the emergency response sector had significant collaboration with actors from other urban sectors. Previous studies showed that emergency response actors, such as the Houston Fire Department, the Harris County Office of Emergency Management, and the Texas Department of Public Safety, collaborated with actors from other sectors, including the environmental conservation, community development, and transportation sectors, for first response and recovery during and after disasters (Li et al. 2019). Existing studies also highlighted the importance of collaboration among actors from diverse sectors for effective emergency response and disaster recovery (Aldrich 2012; Campanella 2006; Gajewski et al. 2011). The results also showed strong within-sector collaboration among actors from the transportation sector. The transportation sector in Texas has great and wide-ranging authority and is a leading voice in infrastructure development driven by real estate development. Transportation planning in Texas, however, lacks long-run resilience metrics. Furthermore, the transportation sector has its own planning and environmental affairs divisions, which may contribute to its limited collaboration with other urban sectors. The results of the network motif analysis showed that the collaboration network has a long average path length and strong local closeness, which also implies that actors from the transportation sector have strong local interactions but limited collaboration with actors from other sectors. A lack of collaboration with actors from the flood control sector, however, may lead to urban growth without compatible investment in flood control infrastructure. Also, insufficient collaboration between the flood control and transportation sectors may lead to infrastructure development in hazard-prone areas. The results of the network motif analysis and the homophily effects of actors from urban sectors in the ERGMs are consistent with the planning background in the Houston area. Houston repeatedly suffers extensive damage from major flood events (Boburg and Reinhard 2017; Patterson 2017). One major reason is rapid urban growth without holistic planning for flood risks. On one hand, Houston plans growth primarily by developing major institutional projects, building expansive infrastructure networks, and encouraging neighborhood-level planning through super neighborhood organizations (Neuman and Smith 2010). Houston also adds density bonuses to encourage development in the urban core (Fulton 2020). Although these policies support population growth (Masterson et al. 2014; Qian 2010), they also exacerbate flooding vulnerability (Zhang et al. 2018). On the other hand, Houston mitigates flood risk with projects such as the Bayou Greenways Initiative to protect and enhance the network of connected open spaces along bayous (Blackburn 2020), the development of structural surge infrastructure and coastal ecosystem enhancement along Galveston Bay (Blackburn 2017), the construction and restoration of detention ponds, support for home buyouts (Harris County Flood Control District 2017), and the retrofitting of critical flood control infrastructure through the Hazard Mitigation Plan (Harris County Flood Control District 2017).
Planning in Houston, however, is driven largely by real estate development serving the desire for economic growth. Houston lacks a compatible planning crosswalk between urban growth and investment in flood control infrastructure, which requires the involvement and collaboration of diverse stakeholders across urban sectors and scales. The findings of this study show the need for greater cross-sector collaboration to expand local interactions, as well as the important roles certain actors could play in spanning boundaries and bridging ties among actors of various sectors with similar and dissimilar preferences for flood risk reduction policy actions. Furthermore, we found both significant homophily and significant heterophily effects in actor preferences for flood risk reduction policy actions in the ERGMs. The results indicate mixed mechanisms for collaboration among actors. The heterophily effect indicates that part of the actor collaboration aimed to increase bridging capital, to seek exotic resources and skills to advance actors' positions, and to broaden their influence in the network. The actors involved usually play a brokerage role in the collaboration network, helping to connect actors from diverse urban sectors. Based on network measures such as betweenness centrality, we can identify these actors in the collaboration network (Li et al. 2020c). The homophily effect indicates that part of the collaboration aimed to increase bonding capital, reinforcing shared norms and trust. The actors involved are usually in the core of the network or of local clusters. We can identify these actors in the collaboration network through core-periphery analysis and community detection (Li et al. 2020a). The ERGMs provide insights into the mechanisms of collaboration among diverse actors, helping to develop strategies to increase network cohesion and to improve collaboration among actors from diverse urban sectors. The results of the study highlight some resilience characteristics embedded in human systems for urban resilience governance. The first is multi-scale governance (Paterson et al. 2017; Wagenaar and Wilkinson 2015). Urban resilience requires multi-level collaborations across complex boundaries in the social, physical, and ecological dimensions (Boyd and Juhola 2015; Li et al. 2020b). Also, resilience planning is the outcome of interdependent plans at different scales (e.g., city, regional, state, and federal). In a study of resilience practitioners in 20 cities, Fastiggi et al. (2021) pointed out that external collaborations, such as multidisciplinary consultants, advisory committees, resilience consortiums, and peer networks, would be of great help in improving multi-scale governance for urban resilience. Another resilience characteristic is knowledge co-production and trust (van der Jagt et al. 2017). The existing literature stresses the importance of diverse stakeholder engagement for improving knowledge co-production and trust in urban resilience governance (Graversgaard et al. 2017; Nutters and Pinto da Silva 2012; Watson et al. 2018; Wiesmeth 2018). The inclusion of diverse stakeholders across various urban sectors would improve the collective understanding of complex systems, resolve conflicts, and enhance shared values. Furthermore, given that existing studies usually examined these resilience characteristics separately, Dong et al.
(2020) proposed institutional connectedness for effective urban resilience governance, accounting for three synergistic areas embedded in human systems: actor collaboration in actor networks, plan integration across networks of plans, and shared norms and values. Our study provides a new way to examine actor networks and actor attributes simultaneously. The level of local interactions can shed light on the need for external collaborations, and the ERGMs provide insights into policies and norms for actor collaboration. Furthermore, institutional connectedness stresses shared norms among actors to increase network cohesion and actor collaboration for resilience governance. In our study, we found that the heterophily effect is also an important factor in tie formation in actor collaboration networks. This result is consistent with existing studies that highlighted the heterophily effect in tie formation in different types of social networks (Barranco et al. 2019; Kimura and Hayakawa 2008; Lozares et al. 2014).

Concluding remarks

In this paper, we examined two important mechanisms, local interactions and homophily effects, for actor collaboration in resilience planning and management of IISs. We conducted a stakeholder survey to collect data regarding actor collaboration for resilience planning of IISs and actor preferences for a list of flood risk reduction policy actions. We mapped the bipartite network and adopted network motif analysis and ERGMs to investigate the network configurations and related node attributes, which encode important information about collaboration among actors. The paper makes both theoretical and practical contributions: (1) we combined network motif analysis and ERGMs, which both focus on network configurations and a bottom-up process in the formation of social networks; the results of the network motif analysis and the ERGMs have different focuses and are complementary to each other. (2) The study provides empirical evidence regarding the drivers of collaboration among diverse actors in resilience planning and management of IISs. These results could help develop strategies to foster collaboration among actors from diverse urban sectors involved in the process of resilience planning and management of IISs. This study and its findings complement the existing literature on actor collaboration network analysis in collective action problems in disaster management and environmental governance through the examination of two mechanisms contributing to network formation and evolution: local interactions and the homophily effect. Many existing studies primarily focused on topological properties of actor networks but did not fully account for actor node attributes. The combined analysis of network structure and node attributes (i.e., sectors and policy preferences of actors) provides deeper insights into the institutional connectedness of the human systems that influence urban resilience. In addition, this study contributes to the field of urban resilience planning and management of IISs by advancing the empirical understanding of actor network properties and the underlying mechanisms that govern the creation of ties/links in actor collaboration networks. The study has some limitations. First, we did not consider dynamic network evolution in this paper due to the lack of longitudinal data.
Future studies could collect actor collaboration data after Hurricane Harvey to investigate the extent to which local interactions and homophily effects shape the evolution of the collaboration network after such a disaster. Second, we found significant homophily and heterophily effects for preferences for different risk reduction policy actions; however, we did not explore why particular policy actions led to homophily or heterophily effects. Future studies could explore the reasons drawing on domain knowledge of public policy. Third, we applied an algorithm to generate random networks with fixed degree distributions. The algorithm fixed the counts of edges, two-stars, and three-stars, which removes some information from the network motif analysis. Although Saracco et al. (2015) noted that higher-order network motifs (e.g., three-trails and cycles) encode much more network information than lower-order network motifs, future studies could test and apply different algorithms to examine the significance of network motifs.
v3-fos-license
2018-04-03T04:56:39.744Z
2009-04-01T00:00:00.000
25673853
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://doi.org/10.1099/vir.0.008169-0", "pdf_hash": "7ee9fa97a1ffcc6241504c4bdb1dd2700f67b985", "pdf_src": "ScienceParseMerged", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42202", "s2fieldsofstudy": [ "Biology" ], "sha1": "461ff18a032df950517d59e7f98ec5cb94d96071", "year": 2009 }
pes2o/s2orc
Smallpox virus plaque phenotypes: genetic, geographical and case fatality relationships

Received 22 October 2008; Accepted 16 December 2008

Smallpox (infection with Orthopoxvirus variola) remains a feared illness more than 25 years after its eradication. Historically, case-fatality rates (CFRs) varied between outbreaks (<1 to ~40 %), the reasons for which are incompletely understood. The extracellular enveloped virus (EEV) form of orthopoxvirus progeny is hypothesized to disseminate infection. Investigations with the closely related Orthopoxvirus vaccinia have associated increased comet formation (EEV production) with increased mouse mortality (pathogenicity). Other vaccinia virus genetic manipulations which affect EEV production inconsistently support this association. However, antisera against the vaccinia virus envelope protect mice from lethal challenge, further supporting a critical role for EEV in pathogenicity. Here, we show that the increased comet formation phenotypes of a diverse collection of variola viruses associate with strain phylogeny and geographical origin, but not with increased outbreak-related CFRs; within clades, there may be an association of plaque size with CFR. The mechanisms of variola virus pathogenicity probably involve multiple host and pathogen factors.

Associations between extracellular enveloped virus (EEV) production and pathogenicity have long been hypothesized for orthopoxviruses. Several forms of enveloped orthopoxvirus progeny are antigenically distinct (Appleyard et al., 1971; Smith et al., 2002). Membrane wrapping of intracellular mature virus (IMV) creates intracellular enveloped mature virus (IEV), which is then transported to the cellular membrane via microtubules to form cell-associated enveloped virus (CEV). Release from the cellular membrane yields EEV (Smith et al., 2002). Envelope structure and incorporation of host complement control proteins into the EEV/CEV envelope [potentially facilitating evasion of host complement neutralization (Appleyard et al., 1971; Vanderplasschen et al., 1998)] may facilitate virus dissemination within the host.
In a subset of vaccinia strains, increased EEV production in vitro, as monitored by comet formation, is associated with increased mortality within the mouse intranasal infection model (Payne, 1980). Within this model, passive transfer of antisera against inactivated EEV protects 90 % of mice against lethal vaccinia virus infection, while antisera against inactivated IMV does not provide protection (Payne, 1980). These observations support the hypothesis that increased EEV/CEV production results in dissemination of infection within the host and greater pathogenicity. Vaccinia strain Western Reserve (WR), a notable exception, is highly pathogenic yet forms few comets (Payne, 1980). Sequence comparison of four virulence genes demonstrates that WR is phylogenetically distinct from other vaccinia strains (Trindade et al., 2007). Currently, most studies evaluating EEV/CEV production and associated virulence use vaccinia virus WR with genetic modifications in individual envelope proteins. Deletion/alteration of several genes (F12, A33, A36 and B5) causes decreased EEV/CEV production and reduced virulence in mouse infection models (Engelstad & Smith, 1993; Gurt et al., 2006; Parkinson & Smith, 1994; Wolffe et al., 1993; Zhang et al., 2000). However, other mutations within A33, A34 or B5 enhance vaccinia virus WR EEV/CEV production/release but decrease virulence (Gurt et al., 2006; Katz et al., 2002, 2003; McIntosh & Smith, 1996). Envelope proteins are highly conserved in orthopoxviruses (Aldaz-Carroll et al., 2005; Engelstad et al., 1992; Engelstad & Smith, 1993) and highly recognized by the host immune system. B5 is an immuno-dominant antigen in variola virus-infected humans (Davies et al., 2007) and α-B5 antibodies are primarily responsible for vaccinia virus EEV/CEV neutralization by vaccinia immune globulin (Bell et al., 2004; Putz et al., 2006), the recommended treatment for post-vaccination complications (Rotz et al., 2001). The role that variola virus EEV plays in disease pathogenesis is complex and may be better understood holistically within the whole-virus context. Here, we evaluated variola virus comet and plaque phenotypes in tissue culture, quantified EEV and cell-associated virus (CAV) accumulation, and assessed their association with virus phylogeny and outbreak-associated case-fatality rates (CFRs) as a measure of pathogenicity. The comet-forming ability of orthopoxviruses, such as vaccinia virus, relates to EEV production (Payne, 1980; Wolffe et al., 1993). Variola isolates (n=25) chosen from diverse geographical regions and years of isolation were evaluated for in vitro comet formation within a liquid overlay and for plaque size within a semi-solid overlay. All variola virus infections were conducted under Biosafety Level 4 conditions, where BSC-40 cell monolayers were infected with each variola strain [isolation and propagation carried out as described previously (Esposito et al., 2006; Li et al., 2007)]. Each virus was diluted in RPMI 1640 medium + 2 % fetal bovine serum to achieve ~20-50 p.f.u.
per well. After 1 h incubation (35.5 °C, 6 % CO2), the inoculum was removed and monolayers were washed twice with medium. Medium or medium + 1× carboxymethylcellulose was added to liquid-overlay and semi-solid-overlay wells, respectively. After incubation for 4 days, overlays were removed and cells were fixed with 10 % formalin in PBS for 20 min. The fix solution was removed and plates were treated with 4.4 × 10^4 Gy gamma irradiation. Comets and plaques were visualized by immunohistochemical staining with polyclonal rabbit α-variola antibody as previously described (Yang et al., 2005). Visual classification was based upon subjective interpretation of the magnitude of comets formed by each variola strain, from most (Visual Group 1) to least (Visual Group 3) prominent comets (Fig. 1). As a more quantitative measure of comet-forming ability, the percentage of plaques forming comets was calculated for each variola strain by counting the total plaque number (primary plaques of similar size) and the number of plaques forming comets (defined as a directional flow of more than four satellite plaques emanating from a primary plaque) per well within the photograph (×3.9 magnification). The mean count reflected three different individuals' counts of duplicate wells; all plaques within 1 cm of the well boundary on the photographs were excluded due to potential interference in comet formation. Variola isolates demonstrated diverse comet-formation phenotypes, which were reproducibly seen for each isolate in duplicate wells. Although the variation in comet-forming plaques could be due to microheterogeneity within each isolate, there were no majority 'subpopulations' identified during sequencing (Esposito et al., 2006). Variola isolates with nearly identical nucleotide sequences (Esposito et al., 2006) displayed similar comet phenotypes [Figs 1 and 2, e.g. ETH72_16 (Group 2, 93 %) and ETH72_17 (Group 2, 89 %)]. Reproducibility of comet quantification was within a 20 % coefficient of variation (based on the mean and SD of counts) for any given isolate except UNK46_harv and SOM77_ali, which had 67 and 21 % of plaques forming comets, respectively, grouping them with the lowest comet-producing strains.
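As a purely illustrative aid, the snippet below works through the comet quantification just described: the percentage of comet-forming plaques averaged over counters and duplicate wells, with the coefficient of variation used as the reproducibility criterion. The plaque counts are invented, not data from this study.

```python
# Illustration of comet quantification with hypothetical plaque counts.
from statistics import mean, stdev

# one entry per (counter, well): (total plaques, comet-forming plaques) -- invented values
counts = [(42, 38), (40, 37), (44, 39), (41, 36), (43, 40), (39, 35)]

percentages = [100 * comets / total for total, comets in counts]
pct_mean = mean(percentages)
cv = 100 * stdev(percentages) / pct_mean   # coefficient of variation (%)
print(f"comet-forming plaques: {pct_mean:.1f}% (CV = {cv:.1f}%)")
```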
In vitro comet phenotype was compared to geographical distribution, phylogeny (Li et al., 2007) and reported CFRs (Esposito et al., 2006).

Fig. 2. Maximum-likelihood phylogram of 47 variola virus strains (Li et al., 2007), including geographical location of isolation (number of isolates given in parentheses), reported CFRs (Esposito et al., 2006) and plaque/comet phenotype.

Subjective visual classifications and quantification of comet-forming ability segregated variola virus strains in a manner correlating with phylogeny and geographical origin (Fig. 2). Non-parametric analysis (Mann-Whitney exact two-tailed test) of the mean proportion of comet-forming plaques between variola virus primary clades demonstrated a significant difference (P=0.0002, SAS Version 9.1) between clade I (n=18) and clade II (n=7) [80.19±17.95 % (SD) and 96.67±2.93 % (SD), respectively], supporting the hypothesis that there is a phenotypic trait that complements genomic differences (Figs 1 and 2). Increased comet production was seen with primary clade II isolates [from West Africa and Brazil (Alastrim minor)]. Of the seven primary clade II strains, almost all were categorized by both methods as the highest comet producers (Fig. 2); six had >95 % of plaques producing comets. The one exception, BRZ66_gar (92 %), was a laboratory-derived strain with an unknown number of passages through tissue culture cells; laboratory manipulations could possibly have caused adaptations that might explain some phenotypic differences compared with other primary clade II strains. Within primary clade I, only two strains [IND64_vel4 (Group 1, 96 %) and BSH74_sol (Group 2, 95 %)] of 18 displayed ≥95 % comet production and only four strains displayed between 90 and 95 % production. The independent and reproducible categorization of the vast majority of primary clade II strains as the highest comet producers, and the lack of prominent comet phenotypes within primary clade I, increases confidence in the relationship of higher EEV production/release with variola virus primary clade II. The Alastrim isolates within this clade, historically considered to be the least pathogenic strains as demonstrated by clinical outcome, have reported CFRs <1 %. Highly pathogenic Asian and Oriental variola isolates (e.g. NEP73_175, KOR47_lee, IND64_vel4 and BSH74_sol) did not uniformly express EEV, as measured by comet production (Figs 1 and 2). Thus, increased variola virus EEV production alone does not relate to increased mortality (CFR). For plaque size analysis, three of the largest primary plaques (of similar size with uniform circular shape) for each isolate were measured from random locations within each of the photographs of duplicate wells (×5.8 magnification). The measured plaques were >1 cm from the well boundary on the photographs to ensure there was no interference in plaque formation. The mean plaque size (in mm; ×5.8 magnification) for each isolate was determined from six plaque measurements. Consistent with observations from vaccinia strains, where increased EEV production/release does not relate to increased plaque size (Blasco & Moss, 1992; Katz et al., 2002; Payne, 1980), clade II variola virus strains with robust comets displayed a maximal plaque size equivalent to that of clade I isolates (6.7-9.2 mm and 5.3-9.7 mm, respectively, at ×5.8 magnification) (Figs 1 and 2). However, within clades, mean plaque size was significantly larger for clade I isolates from Asia, Bangladesh, the Orient or the Middle East (n=8) (mean±SD, 8.46±0.92 mm) versus the lower-CFR non-West-African isolates (n=8) (mean±SD, 7.56±1.07 mm) (P-value ≤0.0001, two-tailed t-test). Similarly, clade II isolates from West Africa (n=4) (mean±SD, 8.54±0.59 mm) demonstrated significantly larger plaque size and higher CFRs than Brazil/Alastrim isolates (n=3) (mean±SD, 7.06±0.64 mm) (P-value ≤0.0001, two-tailed t-test). Therefore, an increase in the mean plaque size of variola virus was not associated with increased EEV (comet) production but, within primary clades, did relate to increased CFRs. The prominent comet phenotype for primary clade II isolates implied that these 'less-virulent' strains produce/release more EEV. To test this hypothesis, growth kinetic assays were performed to quantify CAV and EEV production by variola strains from the three comet formation groups that were isolated from diverse geographical locations and in a range of years. Cell monolayers (BSC-40) were infected with each variola strain as described above, at ~5 p.f.u.
per cell. After adsorption for 1 h (35.5 °C, 6 % CO2), the inoculum was removed and monolayers were washed twice before addition of growth media. Supernatants containing released EEV were collected at each time point [2, 6, 12, 18, 24, 36, 48 and 72 h post-infection (p.i.)] and titres on E-6 cells were determined as previously described (Yang et al., 2005), in the presence of antibody (J2D5, 1:1000 dilution) to neutralize any contaminating IMV. Media was added to each of the duplicate wells to harvest CAV by scraping monolayers into the media and freezing the lysate at −70 °C. Titres for all CAV were determined at the same time after freeze-thawing the lysate and sonicating on ice for 1.5 min. To titrate, inoculum was serially diluted in media and plated onto confluent E-6 cell monolayers. After 1 h incubation (35.5 °C, 6 % CO2), inoculum was removed and medium was added to each well. Incubation continued for 4 days until the cells were stained with 2× crystal violet. Accumulation of CAV (Fig. 3a) was higher in strains NEP73_175, SOM77_ali and SUD47_jub (~2-3 log increase) than in strains BSH74_sol, BRZ66_39 and SLN68_258 (~1-1.5 log increase). The number of released EEV was maximal for all strains at ~18-24 h p.i. (Fig. 3b). The highest levels of EEV were produced by BRZ66_39 and SLN68_258, despite these strains producing <2 log increase in CAV (Fig. 3a and b). The highest EEV production correlated with clade II variola virus strains; differences in EEV production between intermediate and low comet producers were more difficult to discern during analysis of growth kinetics. Evaluation of the ratio of EEV to CAV at maximal EEV release (24 h p.i.) confirmed that EEV production is directly related to the qualitative comet formation ability of variola virus (Fig. 3c). Higher EEV production/release and comet formation are associated with the less pathogenic variola virus primary clade II. Relatively small alterations in EEV/CEV envelope proteins influence vaccinia virus pathogenicity with variable effects on immunogenicity within animal models (Gurt et al., 2006; Katz et al., 2003). A single amino acid (aa) substitution within B5 or a 35 aa truncation of A33 dramatically increases EEV production in vitro but reduces virulence upon mouse intranasal infection (Katz et al., 2003). We compared variola virus sequences of each homologue to vaccinia virus-characterized EEV and IEV predicted proteins. Analysis of 47 variola virus sequences (Esposito et al., 2006) demonstrated that both B5 and A33 aa sequences were conserved between primary clades, suggesting that EEV production differences do not result from genetic alterations in these EEV/CEV proteins. Further analysis of homologues to IEV and EEV envelope proteins (A34, A36, A56, F12 and F13) showed clade-specific differences only within A56 (haemagglutinin) and F12 (see Supplementary Fig.
S1, available in JGV Online). No conserved aa changes relating to the plaque size differences seen between subclades within variola virus clade I were found within these homologues, but four conserved aa changes within F12 differentiated variola virus clade II subclades, perhaps relating to the increased plaque size of the West African isolates. The F12 protein is associated with IEV and is required for egress to the cell surface (Herrero-Martínez et al., 2005; van Eijl et al., 2002). In the absence of F12, the integrity of vaccinia virus IMV and IEV is preserved, but the virus produces small plaques, decreased CEV (>99 %) and EEV (sevenfold less) and diminished virulence (Herrero-Martínez et al., 2005; Zhang et al., 2000). Although haemagglutinin is not necessary for normal EEV production (Sanderson et al., 1998), mutations in the C-terminal region affect the kinetics of IMV trafficking through the envelope formation process (Shida, 1986). Potentially, the few clade-specific and subclade-specific aa alterations within F12, or the insertion and point mutation in the near-transmembrane region of haemagglutinin, may be associated with increased EEV production/release by primary clade II variola virus isolates and the increased plaque size of West African isolates (Supplementary Fig. S1). Previous research has focused upon the effect that single gene mutations have upon vaccinia virus EEV production. Our study assessed differences in variola virus comet phenotypes, EEV production and plaque morphology, and their relation to phylogeny and outbreak CFRs as a measure of pathogenicity. Clade II isolates, historically considered 'less virulent', were classified by two independent methods as the highest EEV- and comet-producing strains. EEV production alone does not correlate with pathogenicity; other viral and host factors, such as the host immune and inflammatory response (Stanford et al., 2007), are likely to be related to variola virus pathogenicity and subsequently CFRs. A virus-related, biologically beneficial consequence of increased EEV production/release by clade II variola virus isolates, which have decreased virulence, may be to promote virus dissemination between human hosts. A role of EEV in virus transmission was suggested through observations of vaccinia virus strain IHDJ intranasal mouse infection; EEV and CEV accumulate in nasal epithelia 5 days p.i. (Payne, 1980; Payne & Kristensson, 1985). It may be that increased EEV production provided for efficient respiratory spread between hosts but also diminished variola virus pathogenicity due to EEV/CEV-specific host immune recognition and clearance. The lack of a temporal phylogenetic relationship and the presence of a geographical phylogenetic relationship within this collection of variola isolates argue against a pandemic sweep of disease (Li et al., 2007). The strong relationship between variola strain phylogeny, geographical origin and in vitro phenotype (comet-formation ability, EEV production/release and plaque size) could represent regional geographical evolution of variola virus with its human host. The biological properties associated with smallpox human pathogenesis are not completely known; further characterization of the in vitro phenotypes of variola virus clades may lead to identification of the factors involved in pathogenicity and subsequently to novel targets for antiviral therapies. Finally, the relationship between comet formation (EEV production) and immunogenic protection or potential involvement in transmission should be explored further.
Fig. 1. Comet phenotype (upper wells) and plaque size (bottom wells) of 25 variola strains. Strains were classified qualitatively from most (Visual Group 1) to least (Visual Group 3) robust/prominent comets, and the percentage of total plaques forming comets is shown on the upper wells. The mean maximal plaque sizes (×5.8 magnification) are shown in the corners of the lower wells.

Fig. 3. The CAV (a) and EEV (b) titres of six variola strains calculated at different times p.i. Data shown are means ± SD from duplicate wells. (c) EEV/CAV ratio at the time of maximal EEV release (24 h p.i.).
v3-fos-license
2018-12-11T20:15:19.442Z
2015-05-27T00:00:00.000
55423595
{ "extfieldsofstudy": [ "Chemistry" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://academicjournals.org/journal/AJB/article-full-text-pdf/2222F7553297.pdf", "pdf_hash": "bcd26b0bb5cf49841215d02c2e9ca39748655635", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42203", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "sha1": "bcd26b0bb5cf49841215d02c2e9ca39748655635", "year": 2015 }
pes2o/s2orc
Evaluation of antioxidant and antimutagenic potential of Justicia adhatoda leaves extract

In this study, the ethanolic extract of Justicia adhatoda (Acanthaceae) leaves was prepared by a successive extraction procedure in order of increasing polarity. Moreover, no reports evaluating its antimutagenic potential were found. In the present study, our aim was to determine the antioxidant and antimutagenic potential of different fractions of the ethanolic extract of J. adhatoda. Ultra high performance liquid chromatography (UHPLC) analysis revealed the presence of polyphenolic compounds and flavonoids, which might be responsible for the bioprotective activity. Among the five fractions (hexane, chloroform, ethyl acetate, n-butanol and aqueous), n-butanol and ethyl acetate exhibited significant antioxidant activity with minimum IC50 values (< 105.33 μg/ml), whereas the hexane, chloroform and aqueous fractions exhibited excellent antimutagenic potential against 2-aminofluorene for S. typhimurium TA98 and TA100 strains in the presence of S9 mix. These results indicate that these fractions need further research into their potential chemopreventive effects.

INTRODUCTION

Reactive oxygen species (ROS), including superoxide anion radical, hydrogen peroxide and hydroxyl radicals, and reactive nitrogen species (RNS) cause damage to DNA by oxidation, methylation and deamination (Wiseman and Halliwell, 1996). This may lead to the occurrence of various dreadful diseases like cancer. Moreover, the accumulation of ROS has been postulated to be implicated in the aging process (Beckman and Ames, 1998). An abundance of data indicates that the human diet plays an important role in the cause and prevention of various types of cancers and cardiovascular diseases (Doll and Peto, 1981; Ames et al., 1995; Willett, 1995; Surh, 2003). Natural plant products have been a rich source of conventional medicine for the treatment of many forms of cancer (Cragg and Newman, 2005). Thus, many investigators have mapped out a variety of naturally occurring phytochemicals with antioxidant properties (Ramos et al., 2003; Gonzalez-Avila et al., 2003; Jayaprakasha et al., 2007; Singh et al., 2009). Some of these antioxidants have been identified as anticarcinogens (Ames, 1983). Antioxidants are substances that have the ability to scavenge reactive oxygen species (ROS) and
There are very few reports in literature which indicate the role of this plant in radical scavenging capacity for the DPPH radical (Srinivasan et al., 2013).Moreover, this plant has also not explained the phenolic composition and antimutagenic potential of the fractions/extract.Therefore, the present study was undertaken to evaluate the antioxidant and antimutagenic potential by using different assays. Sample preparation Leaves of J. adhatoda were collected from Bibi Kaulan Botanical garden, Guru Nanak Dev University, Amritsar.The plant was identified and submitted in herbarium where a voucher of specimen (Accession no.7034 dated 27 th April 2014) in Department of Botanical and Environmental Sciences, GNDU, Amritsar.The fresh leaves of J. adhatoda were washed with tap water and then dried at room temperature.Dried leaves were crushed and extracted as per the procedure given in Figure 1. Phytochemical analysis All the fractions/extract were analysed for the presence of phenolic content, and estimated by the method of Yu et al. (2002).In this method, the phenolic compounds in the extract undergo reaction with phosphomolybdic acid in the presence of Folin-Ciocalteu reagent and give a blue coloured complex in alkaline medium.The total phenolic content of the extracts is measured in terms of gallic acid equivalents (GAE) which was expressed in terms of content as mg GAE/g dried weight of extract.Stock solution of extract that is, 1000 µg/ml was prepared.500 µl of Folin-Ciocalteu was added to 100 µl of extract, followed by the addition of 1.5 ml of 20% of sodium carbonate.The final volume was made to 5 ml with distilled water after 2 h of incubation at room temperature.The absorbance was measured at 765 nm using spectrophotometer (Systronics PC based double beam 2202).Similarly, the total flavonoid content was determined using method as per the procedure given by Kim et al. (2003).An aliquot (1 ml) of extract solution was mixed with 4 ml of water and 0.3 ml of NaNO2 (5%).After the incubation of 5 min, 0.3 ml of AlCl3 was added and it was again incubated for the next 6 min.The incubation was followed by addition of 2 ml of NaOH.The final volume was made to 10 ml by addition of water.Absorbance was recorded at 510 nm.Total flavonoid content (TFC) was expressed as mg Rutin equivalent/g dried weight of extract/fractions. Ultra performance liquid chromatography (UPLC) All the six extracts/fractions of J. adhatoda were subjected to UPLC in order to identify the presence of various polyphenolic compounds like gallic acid, catechin, chlorogenic acid, umbelliferone and so on.For UPLC analysis, the dried extracts/fractions were dissolved in HPLC grade methanol (1.0 mg/ml), filtered through sterile 0.22 µm Millipore filter and subjected to qualitative and quantitative analysis by using Nexera UHPLC (Shimadzu) system. Preparation of standard phenol solution The standard phenolic stock solutions was prepared by dissolving 1 mg of each standard compounds like gallic acid, catechin, chlorogenic acid, epicatechin, caffeic acid, umbelliferone, coumaric acid, rutin, ellagic acid, tert-butyl hydroquinone, quercetin and kaempferol in 1 ml of methanol. 
Apparatus and chromatographic conditions

The UPLC analysis was performed on a Nexera UHPLC (Shimadzu) system. The system was equipped with an LC-30 AD quaternary gradient pump, an SPD-M20 A diode array detector (DAD), a CBM-20 A communication bus module, a CTO-10 AS VP column oven, a DGU-10 A5 Prominence degasser, and an SIL-30 AC Nexera autosampler. The detection wavelength was 280 nm. The column used was an Enable C-18 column (150 × 4.6 mm, 5 µm particle size) equipped with a 0.2 µm filter. The flow rate for all samples was 1 ml/min, the column oven temperature was 27°C and the full-loop injection volume was 10 µl.

The hydrogen-donating or radical-scavenging ability of the six fractions of J. adhatoda was measured spectrophotometrically by the reduction of stable DPPH radicals as described by Dudonne et al. (2009), with minor modifications. Briefly, a 0.1 mM DPPH solution in methanol was prepared and 2 ml of this solution was added to 300 µl of extract solution at different concentrations (0-1000 µg/ml). The absorbance was measured at 517 nm after 30 min of incubation; the colour change from purple to yellow with increasing extract concentration was followed at this wavelength. Rutin was used as the reference compound. Radical scavenging activity was expressed as the percentage of free radical scavenging by the sample, calculated as: Scavenging activity (%) = [(AControl − ASample)/AControl] × 100, where AControl is the absorbance of the control and ASample is the absorbance in the presence of the sample (see the numerical sketch below).

The reduction potential of the fractions was measured by their ability to reduce ferricyanide ion, [Fe(CN)6]3−, to ferrocyanide ion, [Fe(CN)6]4−, using the protocol of Kannan et al. (2010) with some modifications. The extract (0.75 ml) at various concentrations (0, 20, 40, 60, 80, 100, 200, 400, 600, 800, 1000 µg/ml) was mixed with 0.75 ml of phosphate buffer (0.2 M, pH 6.6) and 0.75 ml of potassium hexacyanoferrate (K3Fe(CN)6) (1%, w/v), followed by incubation at 50°C in a water bath for 20 min. The reaction was stopped by adding 0.75 ml of trichloroacetic acid (TCA) solution (10%) and the mixture was then centrifuged at 800 g for 10 min. A 1.5 ml portion of the supernatant was mixed with 1.5 ml of distilled water, 0.1 ml of ferric chloride (FeCl3) solution (0.1%, w/v) was added, and the mixture was kept at room temperature for 10 min. The absorbance was read at 700 nm. A higher absorbance of the reaction mixture indicates greater reducing power (AControl = absorbance of the control; ASample = absorbance in the presence of the sample).

The cupric ion (Cu2+) reducing ability of the J. adhatoda extracts was measured by the method of Apak et al. (2007), with slight modification as described by Gulcin (2010). In this assay, the Cu(II)-neocuproine (Nc) complex is reduced to the highly coloured Cu(I)-Nc chelate through oxidation of the antioxidant. Copper(II) chloride, neocuproine and ammonium acetate (NH4Ac) buffer solutions (1 ml each) were mixed. The extract solution (x ml) and H2O ((1.1 − x) ml) were added to the initial mixture to give a final volume of 4.1 ml. The tubes were stoppered and, after 30 min, the absorbance at 450 nm was recorded against a reagent blank. Rutin was used as the standard, acting as a positive control. Increased absorbance of the reaction mixture indicates increased reducing ability.
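The sketch below applies the percent-scavenging formula used for the DPPH assay (and, below, the superoxide and ABTS assays) and estimates an IC50 by linear interpolation between the two concentrations bracketing 50 % inhibition. The absorbance readings are invented for illustration.

```python
# Percent scavenging and IC50 estimation from hypothetical DPPH absorbance readings.
a_control = 0.92
readings = [(25, 0.80), (50, 0.66), (100, 0.47), (200, 0.28), (400, 0.14)]  # (µg/ml, A517)

curve = [(c, 100 * (a_control - a) / a_control) for c, a in readings]

ic50 = None
for (c1, i1), (c2, i2) in zip(curve, curve[1:]):
    if i1 < 50 <= i2:
        # linear interpolation between the two bracketing concentrations
        ic50 = c1 + (50 - i1) * (c2 - c1) / (i2 - i1)
        break

for c, i in curve:
    print(f"{c:>4} µg/ml: {i:5.1f} % inhibition")
print(f"estimated IC50 ≈ {ic50:.0f} µg/ml")
```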
The reducing activity on superoxide anion (O2−•) was measured using a modified PMS-NADH system as described by Li et al. (2014). The superoxide anions were generated non-enzymatically in a phenazine methosulphate-NADH system and assayed by the development of blue formazan dye upon reduction of nitro blue tetrazolium. NBT solution (1 ml, 144 µM in 100 mM phosphate buffer, pH 7.4), 1 ml of reduced NADH (677 µM in 100 mM phosphate buffer, pH 7.4) and 1 ml of sample extract were mixed, and the reaction was started by adding 1 ml of PMS solution (60 µM PMS in 100 mM phosphate buffer, pH 7.4). The reaction mixture was incubated at 25°C for 5 min and the absorbance of the coloured complex was measured at 560 nm. The inhibition percentage was calculated as: Inhibition (%) = [(Acontrol − Asample)/Acontrol] × 100, where Acontrol is the absorbance of the control and Asample is the absorbance in the presence of the sample.

The method of Re et al. (1999) was adopted for the ABTS radical cation assay, with slight modifications. The stock solutions included 7 mM ABTS solution and 140 mM potassium persulfate solution. The two solutions were mixed in proportions giving a final concentration of 2.45 mM ABTS•+ and allowed to react for 12-16 h at 30°C in the dark. The solution was then diluted with methanol to obtain an absorbance of 0.706 ± 0.001 units at 734 nm using the spectrophotometer. Plant extracts (0.1 ml) were allowed to react with 1 ml of the ABTS solution and the absorbance was read at 734 nm. The ABTS•+ scavenging capacity of the extract was calculated as: Scavenging (%) = [(Acontrol − Asample)/Acontrol] × 100, where Acontrol is the absorbance of the control and Asample is the absorbance in the presence of the sample.

In vitro antimutagenic assay (Ames assay)

The Salmonella histidine point mutation assay of Maron and Ames (1983) was used to test the antimutagenic activity of the extracts/fractions, with some modifications as described by Aqil et al. (2008). The Ames tests and the S9 mix protocol (Maron and Ames, 1983) were performed on both bacterial strains (TA98 and TA100) to determine the effect of J. adhatoda extracts on 2-aminofluorene (2-AF), sodium azide (NaN3) and 4-nitro-o-phenylenediamine (NPD) induced mutagenicity. The samples were dissolved in DMSO to give concentrations of 100, 250, 500, 1000 and 2500 µg/0.1 ml. In brief, 0.5 ml of S9 mixture or phosphate buffer was distributed into sterilized capped tubes in an ice bath, then 0.1 ml of mutagen, 0.1 ml of plant extract and 0.1 ml of bacterial culture were added. After gentle mixing, 2 ml of top agar (0.6% agar, 0.5% NaCl, 0.5 mM L-histidine and 0.5 mM D-biotin) was added to each tube and poured immediately onto minimal agar plates. The pre-incubation procedure was similar to co-incubation, except that the bacterial strain, extract, mutagen and S9 mix were incubated together for 30 min prior to the addition of top agar. The plates were incubated at 37°C for 48 h and the revertant colonies were counted on a ProtoCOL colony counter. Among the three mutagens, NaN3 and NPD are direct-acting mutagens that damage the genetic material directly, whereas 2-AF acts on DNA indirectly. The inhibition rate of mutagenicity (%) was calculated using the equation of Ong et al. (1986), where x is the number of revertants induced by the mutagen alone (positive control), y is the number of revertants induced by the mutagen in the presence of extract (co-incubation or pre-incubation) and z is the number of revertants in the presence of extract alone (negative control).
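A minimal sketch of the antimutagenicity calculation, assuming the commonly used form inhibition (%) = [(x − y)/(x − z)] × 100 with x, y and z as defined above; the exact expression from Ong et al. (1986) is an assumption here rather than a quotation, and the revertant counts are hypothetical.

```python
# Antimutagenicity calculation, assuming inhibition (%) = (x - y) / (x - z) * 100.
def inhibition_percent(x, y, z):
    return 100.0 * (x - y) / (x - z)

# hypothetical revertant counts for TA98 with 2-AF + S9 (pre-incubation)
x = 820   # mutagen alone (positive control)
y = 150   # mutagen + extract
z = 35    # extract alone (negative control)
print(f"inhibition of mutagenicity: {inhibition_percent(x, y, z):.1f} %")
```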
Statistical analysis The results were expressed as mean ± standard error (SE) of three independent experiments. One-way ANOVA was used to analyze the results, and P < 0.05 was considered significant.

RESULTS AND DISCUSSION Phenolic compounds, known to possess high antioxidant activity, are commonly found in fruits, vegetables and herbs (Mustafa et al., 2010). The results of the study show that the plant extracts are rich in phenolic compounds, which varied from 27.267 to 182.6 mg GAE/g (Table 1). The total phenolic contents of the ethyl acetate and n-butanol fractions of J. adhatoda were the highest, at 182.6 and 105.26 mg GAE/g (calibration curve y = 0.001x + 0.037), respectively, while the total flavonoid contents of the ethyl acetate and n-butanol fractions were 299.6 and 95.6 mg RE/g (y = 0.001x + 0.045; R² = 0.993), respectively. Many studies have revealed that the antioxidant activity of phenolic compounds is due to their redox properties, which allow them to act as reducing agents, singlet-oxygen quenchers, hydrogen donors and chelators of metal ions (Rice-Evans et al., 1995; Mustafa et al., 2010). These phytochemical compounds are known to underlie the bioactive properties of the plant and are thus responsible for the antioxidant properties of J. adhatoda. A significant relationship between antioxidant potential and total phenolic content was found, indicating that phenolic compounds might be the major contributors to the antioxidant potential.

All the extracts/fractions were further examined for their specific phenolic composition by UHPLC to evaluate the presence of phenolic acids and flavonoids (Figure 2). The quantities of phenolic compounds ranged from 1.92 to 37.63 µg/g for gallic acid, 0.6 to 17.4 µg/g for catechin, 0.4 to 133.1 µg/g for ellagic acid, 0.765 to 160.1 µg/g for further compounds, and so on (Figure 3).

The DPPH scavenging activities of the different fractions/extract of J. adhatoda leaves are shown in Figure 4a. The ethyl acetate and n-butanol fractions showed the highest DPPH radical scavenging activities, 91.86 and 90.44%, respectively, whereas the other fractions/extract exhibited comparatively less inhibition. This indicates that both fractions have the highest free-radical scavenging activity, probably due to their high polyphenolic content. A similar study conducted by Rao et al. (2013) demonstrated significant DPPH scavenging activity of a methanolic extract of J. adhatoda, with an IC50 value of 105.33 µg/ml.

Figure 4b depicts the reducing power of five extracts/fractions of J. adhatoda leaves in comparison with the standard compound (rutin). The extracts/fractions showed a tendency to reduce Fe(III) to Fe(II). Among the different extract and fractions, the ethyl acetate and n-butanol fractions of the leaves exhibited the maximum reducing power, 84.36 and 50.99%, respectively.

In the CUPRAC assay, Cu2+ is reduced by antioxidants to the Cu+ chelate, which shows maximum absorbance at 450 nm; a higher absorbance therefore indicates higher antioxidant activity. Figure 4c shows maximum absorbance values of 1.083 and 1.076 for the ethyl acetate and n-butanol fractions at a concentration of 200 µg/ml, respectively, whereas the standard (gallic acid) showed a lower absorbance at the same concentration.

The potential to scavenge the O2•− radical generated in the PMS-NADH coupling system by the fractions of J.
adhatoda was evaluated by the superoxide anion radical scavenging assay. The decrease in absorbance at 560 nm with increasing fraction concentration indicated the consumption of superoxide anion in the reaction mixture. The results shown in Figure 4d indicate that, among the six fractions/extracts, the n-butanol and ethyl acetate fractions were the most effective in scavenging superoxide radicals, with 77.51 and 85.82% inhibition, respectively. At the same concentration, the standard (rutin) showed 64.84% scavenging ability. These free radicals are very hazardous to health. To overcome their harmful effects, synthetic antioxidants such as butylated hydroxyanisole (BHA) and butylated hydroxytoluene (BHT) are used as daily supplements; however, recent studies have reported that these synthetic antioxidants provoke tumors in the stomach of rodents (Grice, 1988).

Our results clearly indicate that the n-butanol and ethyl acetate fractions were efficiently active against free radicals in a concentration-dependent manner. Figure 4e presents the ABTS•+ scavenging ability of the different extracts/fractions of J. adhatoda leaves in comparison with the standard (gallic acid). The ethyl acetate and n-butanol fractions exhibited maximum inhibition percentages of 91.29 and 92.48%, respectively, whereas gallic acid scavenged 93.86% of the radicals at the same concentration. The ABTS radical cation (ABTS•+) is formed by the reaction between potassium persulfate and 2,2'-azinobis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) and exhibited maximum absorbance at 700 nm; the hydrogen-donating property of phenolic compounds converts these colored species into colorless entities by reducing them.

Antimutagenic assay The antimutagenic activity of the different fractions of J. adhatoda was determined using S. typhimurium strains TA98 and TA100 in the absence or presence of metabolic activation. Among the six fractions (hexane, chloroform, ethyl acetate, n-butanol, aqueous and ethanolic extract), the ethyl acetate and n-butanol fractions were able to prevent frameshift and base-pair substitution mutations in the presence of metabolic activation in the TA98 and TA100 strains, respectively (Słoczyńska et al., 2014).

On the other hand, the fractions had a weak inhibitory effect on the direct-acting mutagens (NPD and NaN3) in the absence of metabolic activation (Figures 5 and 7). The results indicate that the fractions were more effective and exhibited significant percent inhibition in the pre-incubation mode with metabolic activation compared with co-incubation without metabolic activation. In the TA98 strain, among the six fractions tested, the aqueous, chloroform and hexane fractions showed 96.42, 96.03 and 96.42% inhibition, respectively, at the highest concentration (2500 µg/0.1 ml per plate) in the pre-incubation mode with metabolic activation (Figure 6), whereas the same fractions showed 41.62, 62.7 and 31.56% inhibition of sodium azide mutagenicity under the same experimental conditions without metabolic activation.
However, the extent of inhibition in the TA100 strain was slightly lower than in the TA98 strain. In TA100, all six fractions showed a decrease in the number of revertant colonies against the indirect-acting mutagen 2-AF, with percent inhibition ranging from 90.89% to 99.29% at 2500 µg/0.1 ml in the pre-incubation mode (with metabolic activation), as shown in Figure 8. All six fractions inhibited the mutagenicity produced by 2-AF (with metabolic activation), but the results were not significant with the direct-acting mutagens.

From the results obtained in the antioxidant and antimutagenic assays, it was seen that the fractions/extract showing significant antioxidant activity did not show antimutagenic potential, and vice versa. Słoczyńska et al. (2014) reported that some antimutagenic compounds do not possess antioxidant properties of their own but can be converted into derivatives that show high antioxidant activity. Such a phenomenon was well demonstrated by Parvathy et al. (2010), in which much higher antimutagenic activity was observed for amino acid conjugates of curcumin than for curcumin itself.

Conclusion The fractions obtained from J. adhatoda possess excellent antioxidant and antimutagenic activities. The fractions were strong and effective scavengers of free radicals, superoxide radicals and hydrogen peroxide radicals. Furthermore, most of the fractions showed significant antimutagenic activity with metabolic activation. The observed bioactivity may be due to the group of phenolic compounds present in the different fractions.

Figure 1. Schematic diagram for the preparation of the crude extract of Justicia adhatoda by the alcoholic extraction method.

Figure 2. UHPLC analysis showing the concentration of polyphenolic compounds in the different fractions/extract of Justicia adhatoda.

Figure 4. Inhibitory effect of five extracts/fractions of Justicia adhatoda at 200 µg/ml concentration by (a) DPPH assay; (b) reducing power assay; (c) CUPRAC assay; (d) superoxide anion scavenging assay and (e) ABTS•+ radical cation decolorization assay.

Figure 5. Antimutagenic potential of: (a) ethanolic extract; (b) hexane fraction; (c) chloroform fraction; (d) ethyl acetate fraction; (e) n-butanol fraction and (f) aqueous fraction on Salmonella typhimurium strain TA98 without S9 against sodium azide. Data shown are mean ± SE of an experiment performed in triplicate. Means followed by the same letters are not significantly different (HSD multiple comparison test). The results were considered statistically significant at p ≤ 0.05.
Figure 6. Antimutagenic potential of: (a) ethanolic extract; (b) hexane fraction; (c) chloroform fraction; (d) ethyl acetate fraction; (e) n-butanol fraction and (f) aqueous fraction on Salmonella typhimurium strain TA98 with S9 against 2-aminofluorene (2-AF). Data shown are mean ± SE of an experiment performed in triplicate. Means followed by the same letters are not significantly different (HSD multiple comparison test). The results were considered statistically significant at p ≤ 0.05.

Figure 7. Antimutagenic potential of: (a) ethanolic extract; (b) hexane fraction; (c) chloroform fraction; (d) ethyl acetate fraction; (e) n-butanol fraction and (f) aqueous fraction on Salmonella typhimurium strain TA100 without S9 against 4-nitro-o-phenylenediamine (NPD). Data shown are mean ± SE of an experiment performed in triplicate. Means followed by the same letters are not significantly different (HSD multiple comparison test). The results were considered statistically significant at p ≤ 0.05.

Figure 8. Antimutagenic potential of: (a) ethanolic extract; (b) hexane fraction; (c) chloroform fraction; (d) ethyl acetate fraction; (e) n-butanol fraction and (f) aqueous fraction on Salmonella typhimurium strain TA100 with S9 against 2-aminofluorene (2-AF). Data shown are mean ± SE of an experiment performed in triplicate. Means followed by the same letters are not significantly different (HSD multiple comparison test). The results were considered statistically significant at p ≤ 0.05.

Table 1. Total phenolic and flavonoid content of fractions of Justicia adhatoda.
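As a brief illustration of how totals of the kind reported in Table 1 are typically derived, the sketch below back-calculates gallic acid equivalents from a linear calibration curve. The calibration equation y = 0.001x + 0.037 is the one quoted in the Results above, while the absorbance readings, dilution volume and sample mass are hypothetical.

```python
def gallic_acid_equivalents(absorbance, slope=0.001, intercept=0.037,
                            dilution_volume_ml=10.0, sample_mass_g=0.1):
    """Convert an absorbance reading into mg GAE per g of extract using a
    linear calibration curve y = slope*x + intercept (x in µg/ml)."""
    conc_ug_per_ml = (absorbance - intercept) / slope      # µg GAE per ml of assay solution
    total_ug = conc_ug_per_ml * dilution_volume_ml         # µg GAE in the assayed volume
    return total_ug / 1000.0 / sample_mass_g               # mg GAE per g of extract

# Hypothetical absorbance readings for three fractions (not the study's data)
for name, a in [("ethyl acetate", 0.520), ("n-butanol", 0.310), ("hexane", 0.095)]:
    print(f"{name}: {gallic_acid_equivalents(a):.1f} mg GAE/g")
```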
v3-fos-license
2022-06-04T13:38:22.784Z
2022-06-04T00:00:00.000
249318990
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10143-022-01806-3.pdf", "pdf_hash": "cc04536e19aa2f7bdb4e33fd65bd73166423a22c", "pdf_src": "Springer", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42204", "s2fieldsofstudy": [ "Medicine" ], "sha1": "e91babf3df3a994f9722efd206a92c84b6998fe8", "year": 2022 }
pes2o/s2orc
Radiation therapy for atypical and anaplastic meningiomas: an overview of current results and controversial issues Meningiomas are the most common intracranial tumors. Most meningiomas are WHO grade 1 tumors whereas less than one-quarter of all meningiomas are classified as atypical (WHO grade 2) and anaplastic (WHO grade 3) tumors, based on local invasiveness and cellular features of atypia. Surgical resection remains the cornerstone of meningioma therapy and represents the definitive treatment for the majority of patients; however, grade 2 and grade 3 meningiomas display more aggressive behavior and are difficult to treat. Several retrospective series have shown the efficacy and safety of postoperative adjuvant external beam radiation therapy (RT) for patients with atypical and anaplastic meningiomas. More recently, two phase II prospective trials by the Radiation Therapy Oncology Group (RTOG 0539) and the European Organisation for Research and Treatment of Cancer (EORTC 2042) have confirmed the potential benefits of fractionated RT for patients with intermediate and high-risk meningiomas; however, several issues remain a matter of debate. Controversial topics include the timing of radiation treatment in patients with totally resected atypical meningiomas, the optimal radiation technique, dose and fractionation, and treatment planning/target delineation. Ongoing randomized trials are evaluating the efficacy of early adjuvant RT over observation in patients undergoing gross total resection. Introduction Meningiomas are the most common primary intracranial tumors and account for more than one-third of all brain tumors [1]. Based on local invasiveness and cellular features of atypia, meningiomas are histologically characterized as benign (grade 1), atypical (grade 2), or anaplastic (grade 3) tumors by the latest World Health Organization (WHO) classification scheme [2]; accordingly, the proportion of meningiomas that have been classified as atypical and anaplastic meningiomas is around 20-25% and 1-3%, respectively [3]. For both, surgical resection is the first choice of treatment; however, a significant proportion of tumors display a more aggressive behavior associated with an approximately 6-8-fold increased risk of recurrence and a significantly higher risk of dying of tumor progression compared to benign meningiomas [4,5]. Beyond surgery, external beam radiation therapy (RT) has been usually recommended to increase local control after resection of grade 2 and 3 tumors [6]. The evidence supporting this treatment recommendation largely comes from systematic reviews including retrospective series [7][8][9] and two recent nonrandomized observational prospective trials conducted by the Radiation Therapy Oncology Group (RTOG 0539) [4,10] and the European Organisation for Research and Treatment of Cancer (EORTC 22042) [11]; however, several issues remains a matter of debate, including the timing of the treatment (early versus delayed postoperative RT), the optimal radiation technique, and types of radiation dose and fractionation. One of the most controversial topics is the superiority of early adjuvant RT over observation in reducing the risk of tumor recurrence after gross total surgical resection in patients with atypical meningiomas. In addition, there is concern regarding potential risks of RT-related toxicity, which include but are not limited to neurocognitive impairment, hypopituitarism, and the development of a second tumor. 
Hopefully, these important questions will be answered by two prospective controlled phase III trials in which patients were randomized to receive adjuvant RT or observation after surgical resection of an atypical meningioma: the recently closed ROAM/EORTC 1308 trial [12] and the ongoing NRG-BN003 (ClinicalTrials.gov Identifier: NCT03180268) trial. In this review, we discuss some of the most recent advances in the radiation treatment of patients with atypical and anaplastic meningiomas, as well as the evidence supporting its use in different clinical situations. The safety and efficacy of different radiation approaches and techniques are also examined.

Histopathologic classification The systematic adoption of the histopathologic criteria provided by the 2016 update of the WHO classification of brain tumors has markedly increased the relative proportion of atypical and anaplastic meningiomas [13]. Both tumor grades exhibit a much greater recurrence rate than benign meningiomas, which negatively impacts survival. As confirmed by the latest WHO classification, tumors with a low mitotic rate (less than 4 per 10 high-power fields (HPF)) are generally classified as benign, WHO grade 1 tumors. For grade 2 atypical meningiomas, brain invasion or a mitotic count of 4-19 per 10 HPF is a sufficient criterion for the diagnosis [2]. Atypical meningiomas can also be diagnosed in the presence of three or more of the following features: sheet-like growth, spontaneous necrosis, high cellularity, prominent nucleoli, and small cells with a high nuclear-to-cytoplasmic ratio. Grade 3 anaplastic meningiomas are characterized by elevated mitotic activity (20 or more per 10 HPF) or frank anaplasia. In addition, specific histologic subtypes such as clear cell or chordoid meningiomas are classified as grade 2, and rhabdoid or papillary meningiomas as grade 3 tumors. A new feature of the WHO 2021 classification is the inclusion of several molecular biomarkers associated with the classification and grading of meningiomas, e.g., SMARCE1 in the clear cell subtype, BAP1 in the rhabdoid and papillary subtypes, KLF4/TRAF7 mutations in the secretory subtype, and TERT promoter mutation and/or homozygous deletion of CDKN2A/B in anaplastic meningiomas. When these criteria are applied, up to 25% of all meningiomas are classified as atypical and up to 3% as anaplastic.

Radiation techniques Assuming that RT is of value in improving tumor control, new advanced radiation techniques can provide excellent target dose coverage, precise target localization, and accurate dose delivery [14]. For large postoperative resection cavities and/or residual tumors, sophisticated techniques using intensity-modulated radiotherapy (IMRT) or volumetric modulated arc therapy (VMAT) allow a highly conformal dose distribution and should be preferred over three-dimensional (3D) conformal RT. Stereotactic radiation techniques, given as either radiosurgery (SRS) or hypofractionated stereotactic radiotherapy (SRT), have been employed in patients with residual or recurrent atypical and anaplastic meningiomas [15][16][17][18][19][20][21][22][23][24][25][26][27][28]. The main advantage of stereotactic techniques is their ability to achieve a steep dose fall-off at the edge of the target volume, lowering the radiation dose to surrounding brain structures and thereby limiting the potential toxicity of treatment.
Current stereotactic techniques include Gamma Knife (Elekta Instruments AB, Stockholm, Sweden) and linear accelerator (LINAC)-based SRS systems, such as CyberKnife (Accuray, Sunnyvale, CA, USA) or Novalis (NTx) (BrainLAB AG, Feldkirchen, Germany). Patients receiving Gamma Knife SRS are traditionally placed in a rigid stereotactic frame with a submillimetric target accuracy while those treated with LINAC-based SRS systems are usually immobilized in a high precision frameless stereotactic mask fixation system. A submillimeter accuracy of patient positioning in the treatment room is achieved using modern image-guided radiation therapy (IGRT) technologies, such as orthogonal x-rays (ExacTrac®Xray 6D system) or cone-beam CT (CBCT) [29]. Although dosimetric characteristics of these SRS systems can be different, no comparative studies have demonstrated the clinical superiority of one technique over another in patients with brain tumors in terms of local control and treatment-related toxicity. Protons have been employed for skull base tumors either as fractionated RT or as SRS [14]. A radiobiological advantage of protons over photons is that they deposit most of their energy at the end of their range, with very little exit dose beyond the target volume. This narrow region of energy deposition is known as the Bragg peak and it may allow for a lower integral dose delivered to the surrounding normal tissues with protons as compared with photons. Because of the limited number of published series and their retrospective nature (see chapter below), current clinical data do not allow any definitive conclusion about the superiority of proton-based over photon-based techniques in terms of effectiveness and long-term toxicity. Imaging and tumor delineation For resected tumors, the treatment planning is based on postoperative MRI, although preoperative MRI may provide useful information on the initial extent of disease and persistent postoperative brain infiltration. The gross tumor volume (GTV) delineation is based on the resection cavity plus any residual tumor using pre-and postcontrast T1-weighted postoperative magnetic resonance imaging (MRI) sequences, without the inclusion of the perilesional edema [30]. Additional images that can help to improve target delineation include T2-weighted highresolution gradient and fast spin-echo sequences with and without fat suppression, and fluid-attenuated inversion recovery (FLAIR) sequences which can help to assess the extent of peritumoral edema and dural tail abnormalities [11,31]. In selected cases, PET imaging mainly with DOTATOC-tracers or DOTANOC-tracers has shown to improve target volume definition, e.g. patients with large tumors infiltrating the parapharyngeal soft tissues or for those located in the bony structures which are difficult to be distinguished on MRI and CT [32,33]. The clinical target volume (CTV), defined as the volume of tissue that contains any microscopic disease and potential paths of microscopic spread, comprises the preoperative tumor bed and a geometrical expansion of 10 mm around the GTV, which may be reduced to 5 mm around anatomic barriers, such as non-infiltrated bone or non-infiltrated brain. The CTV can be extended along the dura up to 20 mm to encompass thickened dural tail or clearly involved hyperostotic bone, especially in the area of adjacent reactive dura. 
Depending upon the localization method and reproducibility, an institution-specific margin of 0.3-0.5 cm is usually added to the CTV to generate the planning target volume (PTV). For planning purposes, MRI scans are subsequently fused with thin-slice non-contrast-enhanced CT scans. Of note, CT scans may have a complementary role in the imaging of skull base, specifically showing the pattern of bone involvement, e.g. hyperostosis and osteolysis, as well identifying intratumoral calcification better than MRI [34]. Results of two prospective phase II trials have been recently published by the RTOG and the EORTC [4,11,53]. The first report of The NRG Oncology/RTOG 0539 trial reported the initial outcome for 48 patients with intermediate-risk meningiomas, i.e., recurrent WHO grade 1 or newly diagnosed WHO grade 2 tumors after gross total resection, who were treated with IMRT or 3D conformal RT using doses of 54 Gy given in 30 fractions [4]. The estimated 3-year progression-free survival, overall survival, and local failure rates were 93.8%, 96%, and 4.1%, respectively. Clinical outcomes were similar between patients with recurrent benign meningiomas and atypical meningiomas receiving gross total resection. Adverse events were limited to grade 1 and grade 2 only. In a second report from the same trial, Rogers et al. [10] reported the clinical outcome for 53 patients with a high-risk meningioma, defined by new or recurrent anaplastic or recurrent atypical meningioma of any resection extent, or new atypical tumor after subtotal resection; treatment consisted of IMRT using simultaneous integrated boost, with the higher-dose volume receiving 60 Gy and lower-dose volume receiving 54 Gy in the same 30 fractions. At a median follow-up of four years, 3-year progressionfree survival was 58.8%, local control 68.9%, and overall survival 78.6%. Combined acute and late adverse events occurred in about 40% of patients and were limited to grades 1 to 3, except for a single necrosis-related grade 5 event. In the EORTC 22042-26042 phase II study, fifty-six patients with newly diagnosed WHO grade 2 meningioma who underwent gross total resection received adjuvant fractionated RT with a dose of 60 Gy delivered in 2 Gy per fraction [57]. Five patients did not receive the planned radiation dose: three patients prematurely stopped RT due to grade 3 cerebrospinal fluid leakage (unrelated to RT), vomiting, and epidermitis on scar, and two patients received 70 Gy instead of the planned 60 Gy. The estimated 3-year progressionfree survival, overall survival, and local failure were 88.7%, 98.2%, and 14.3%, respectively, with a late toxicity of grade 3 or more observed in about 14% of patients. The effectiveness of postoperative adjuvant RT in patients with atypical meningiomas has been evaluated in several retrospective series [5, 35, 36, 38-41, 43-46, 48-53, 56, 59, 61] (Table 1). A recent meta-analysis of 17 studies published between January 200 and January 2019 and including 2008 patients who have undergone gross total resection of atypical meningiomas showed a significant improvement in 5-year local control and progression-free survival rates for those receiving adjuvant RT [9]. Local control, progressionfree survival, and overall survival rates were 82.2%, 84.1%, and 79%, respectively, for patients treated with adjuvant RT, and 71%, 71.9%, and 81.5%, respectively, for those not receiving the treatment. Lee et al. 
[22] reported the outcome of 179 patients who underwent surveillance versus 51 patients who received postoperative adjuvant RT with photons (39%) or protons (57%) after resection of an atypical meningioma. In another series of 108 patients with an atypical meningioma who underwent gross total resection at the University of California from 1993 to 2004, Aghi et al. [35] observed actuarial tumor recurrence rates of 41% at 5 years and 48% at 10 years; adjuvant RT was associated with a trend toward decreased local recurrence (p=0.1) in the eight patients who received it after gross total resection. A few other retrospective studies have likewise observed better progression-free survival rates in patients receiving postoperative RT compared with those who did not [35,36,41,50,52,55]. In contrast, some other studies showed no significant advantage in terms of either overall survival or progression-free survival for patients undergoing adjuvant RT [45,59,62,70]. In a series of 158 patients with atypical meningiomas treated at the University of Wisconsin between 2000 and 2010, Yoon et al. [59] did not observe any beneficial impact of adjuvant RT on disease-free survival, irrespective of the extent of resection; survival rates were 89% for patients receiving gross total resection and 83% for those having subtotal resection. In another retrospective series of 133 patients treated between 2001 and 2010 in 3 different UK centres, Jenkinson et al. [45] reported similar outcomes for patients who received surgery with or without postoperative RT. Following gross total resection, 5-year overall survival and progression-free survival rates were 77.0% and 82%, respectively, in patients who received early adjuvant RT, and 75.7% and 79.3%, respectively, in patients who did not receive adjuvant treatment. Stessin et al. [70] published a Surveillance, Epidemiology, and End Results (SEER)-based analysis of 657 patients who were diagnosed with atypical and anaplastic meningiomas in the period 1988-2007. Among the 244 patients who received adjuvant RT, the treatment was not associated with a survival benefit.

Overall, most studies indicate that adjuvant RT improves progression-free survival in patients with atypical meningiomas. The rate of tumor progression following subtotal resection is higher than that seen following gross total resection; however, the superiority of adjuvant RT over observation for totally excised atypical meningiomas in terms of overall survival remains a controversial issue. Although several studies showed a trend toward clinical benefit with adjuvant RT after gross total resection, the small number of patients evaluated, the different WHO criteria for defining atypical meningiomas over the last decades, and the retrospective nature of the published studies preclude any meaningful conclusion on whether adjuvant RT improves outcomes over non-irradiated patients. In this regard, the ongoing phase III randomized NRG-BN003 trial and the recently closed ROAM/EORTC 1308 trial, comparing surgery plus adjuvant RT with surgery alone in grade 2 meningioma following gross total resection, will help answer the important clinical question of the efficacy of early postoperative RT. The primary outcome measure is progression-free survival (i.e., time to MRI evidence of tumor recurrence), and secondary outcome measures include radiation treatment-related toxicity, quality of life, neurocognitive function, time to second-line treatment, and overall survival.
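Progression-free survival endpoints of this kind are time-to-event outcomes and are conventionally summarized with Kaplan-Meier estimates. As a minimal illustration only, using a made-up toy cohort and the open-source lifelines package (neither of which is taken from the trials or series discussed here), a 3-year PFS estimate could be obtained as follows.

```python
from lifelines import KaplanMeierFitter

# Toy follow-up data: months to progression (or last follow-up) and event flags
months = [6, 14, 20, 25, 31, 36, 40, 44, 50, 60]
progressed = [1, 0, 1, 0, 0, 1, 0, 0, 1, 0]   # 1 = progression observed, 0 = censored

kmf = KaplanMeierFitter()
kmf.fit(months, event_observed=progressed, label="toy cohort")

# Estimated probability of remaining progression-free at 36 months (3 years)
pfs_3y = kmf.predict(36)
print(f"3-year PFS estimate: {pfs_3y:.1%}")
```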
Importantly, secondary analyses of these trials will help to identify molecular features that predict which patients benefit most from adjuvant RT. The results of this potentially practice-changing trial will be available in 2025. Champeaux et al. [72] reported a multicenter retrospective study of 178 patients treated between 1989 and 2017 for an anaplastic meningioma at six different international institutions. The median overall survival time and 5-year survival rate were 2.9 years and 27.9%, respectively; age <65 years, gross total resection, and adjuvant RT emerged as independent prognostic factors for survival. Dziuk et al. [69] reported the outcome of 38 patients with an anaplastic meningioma who received (n=19) or did not receive (n=19) adjuvant RT. Adjuvant irradiation following gross total resection increased the 5-year progression-free survival rate from 15% to 80% (p=0.002). In contrast, recurrence rates after incomplete resection were similar between groups (100% vs 80%), with no survivors at 60 months. In another series of 24 patients with anaplastic meningiomas, Yang et al. [58] observed better overall survival and progression-free survival times in the 17 patients who received adjuvant RT compared with the 7 patients who did not; however, the reported 5-year overall survival and progression-free survival rates were dismal in both groups, being 35% and 29%, respectively. In contrast, other series failed to demonstrate a significant improvement in overall survival and progression-free survival times in patients receiving adjuvant RT [71,79]. In a retrospective cohort of patients with atypical meningioma extracted from the National Cancer Database (NCDB) and diagnosed between 2004 and 2015, Alhourani et al. [71] evaluated the outcome of patients with at least 10 years of follow-up after surgery and postoperative RT. The adjuvant treatment was associated with significantly improved local control; however, the median survival time was not significantly different (32.8 months for adjuvant RT vs. 38.5 months for no RT; p = 0.57, log-rank test). In summary, anaplastic meningiomas are highly likely to recur regardless of resection status. In most of the published retrospective studies, adjuvant RT is associated with improved progression-free survival and overall survival; however, no prospective studies have compared surgery plus adjuvant RT versus surgery alone, and definitive conclusions on the superiority of RT over observation cannot be drawn. Regarding radiation techniques, fractionated RT given as adjuvant treatment is the most frequently used type of irradiation, whereas SRS is usually reserved for small-to-moderate recurrent tumors.

Radiosurgery Adjuvant treatment for resected atypical and anaplastic meningiomas is typically delivered as fractionated RT, although SRS has been increasingly used either as adjuvant treatment or, more frequently, as salvage treatment for recurrent tumors [15-28, 81, 82, 84-86]. A summary of selected published series for atypical and anaplastic meningiomas is shown in Table 3. Kowalchuk et al. [22] recently reported the results of a large retrospective multicenter study of 233 atypical meningiomas treated with SRS. For high-risk grade 2 meningiomas, as defined by the RTOG 0539 study, the 3-year progression-free survival was 53.9%, similar to the rate of 58.8% reported in the RTOG study. Hanakita et al. [18] reported 2-year and 5-year recurrence rates of 61% and 84%, respectively, in 22 patients treated with salvage SRS.
Analysis of prognostic factors showed that a tumor volume < 6 ml, a margin dose > 18 Gy, and a Karnofsky Performance Status score of ≥ 90 were associated with a better outcome. Attia et al. [15] reported the clinical outcomes of 24 patients who received Gamma Knife SRS as either primary or salvage treatment for atypical meningiomas, using a median marginal dose of 14 Gy. With a median follow-up time of 42.5 months, local control rates at 2 and 5 years were 51% and 44%, respectively. Eight recurrences were in-field, four were marginal failures, and two were distant failures. In another retrospective series of 44 patients who received Gamma Knife SRS early after surgery or at tumor recurrence, Zhang et al. [28] showed 5-year actuarial local control and overall survival rates of 51% and 87%, respectively, at a median follow-up time of 51 months. Serious neurological complications occurred in 7.5% of patients. Similar results have been reported by others (Table 3). A few studies have evaluated the efficacy of SRS for patients with anaplastic meningiomas [17,21,24,25,27]. In an international, multicenter, retrospective study of 271 patients with atypical (n=233) and anaplastic meningioma (n=38) treated with Gamma Knife SRS with a median dose of about 15 Gy, Shepard et al. [25] reported progression-free and overall survival rates of 33.6% and 77.0%, respectively, at 5 years. For patients with anaplastic meningiomas, increased age and reduced KPS (HR 0.95, p = 0.04) were associated with shorter overall survival. In another small series of 29 patients who received postoperative SRS with a mean margin dose of 14 Gy, Kondziolka et al. [21] reported progression-free survival rates of 17% at 15 months and 9% at 60 months. In contrast, El-Khatib et al. [17] reported higher progression-free survival rates, 57% at 3 years and 43% at 10 years, for 7 patients with anaplastic meningiomas receiving Gamma Knife SRS with a margin dose of 14 Gy. Hypofractionated SRT, typically 24-30 Gy given in 3 to 5 fractions, has also been employed as an alternative to single-fraction SRS for brain tumors, generally for larger or critically located tumors, e.g., those involving the anterior optic apparatus or the sagittal sinus [14]. Presently, hypofractionated SRT data specific to atypical meningioma are limited. The local control reported in a few series has been essentially equivalent to that of single-fraction SRS, possibly with a lower risk of side effects [28,89]. Vernimmen et al. [89] reported the outcome of stereotactic hypofractionated proton beam RT in 18 patients with skull base meningiomas. With a median follow-up of 31 months, 88% of tumors remained under control, even though large tumors of up to 63 ml were treated. Overall, data from the literature suggest that SRS is a feasible and safe treatment for patients with atypical and anaplastic meningiomas, especially for relatively small recurrent tumors less than 3 cm in size. Given the scarcity of published data, its superiority over fractionated RT, as well as its efficacy in patients with anaplastic meningiomas, remains unproven. Hypofractionated SRT may represent an alternative to single-fraction SRS for larger tumors or those in close proximity to critical structures, with the aim of limiting potential treatment-related toxicity.

Proton beam RT Several studies have reported the outcome of proton beam and carbon ion therapy for atypical and anaplastic meningiomas [63,64,67,80,87,90-92]. In a recent systematic review, Coggins et al.
[51] reported the results of ion RT in maintaining local control in atypical and anaplastic meningiomas. With a mean follow-up time ranging from 60 to 145 months, the mean local control rate following proton beam therapy was 59.6% at 5 years, accounting for a total of 82 patients included in 6 studies. Across the studies reporting on carbon ion RT, local control was 54% at 12 months and 33% at 24 months. Studies reporting the clinical outcomes of patients with atypical and anaplastic meningioma following proton and carbon ion RT are summarized in Table 4 [63,64,67,80,87,90-92]. Within the limits of the small number of studies and patients, proton and carbon ion therapy maintain rates of local control comparable to conventional photon therapy. Prospective trials remain necessary to quantify the efficacy of ion beam RT versus conventional photon therapy in terms of local control, overall survival, and treatment-related toxicity. The NCT01166321 phase II open-label trial is currently recruiting patients with atypical meningiomas undergoing partial resection (Simpson grade 4 and 5) to be treated with a carbon ion boost in combination with photon RT. Other clinical trials have recently been activated or are currently recruiting patients to test the efficacy of carbon ion therapy in atypical meningioma (NCT01166321) and proton dose escalation in atypical and anaplastic meningiomas (NCT02978677).

Radiation dose and timing of RT Radiation dose and timing of RT represent important variables for the clinical outcome of atypical meningiomas. Conventionally fractionated RT with total doses of 54-60 Gy given in 1.8-2.0 Gy fractions is used in the majority of published series. A few studies employing doses ≥ 60 Gy showed improved local control [10,35,57,89], whereas doses of 54-57 Gy [42,59] or less than 54 Gy [2,42,52] were apparently associated with little or no benefit. As with atypical meningioma, higher RT doses appear to improve local tumor control in patients with anaplastic histology [69,74]. For patients receiving SRS, single doses of 14-18 Gy are typically used in the majority of radiation centers, with similar local control (Table 3); in contrast, doses ≤ 12 Gy have been associated with inferior local control rates [77]. Kano et al. [19] used SRS as salvage therapy for recurrent tumors after surgical failure and showed a dose-dependent improvement in 5-year progression-free survival for patients with both atypical and anaplastic meningiomas. Survival rates increased from 29.4% to 63.1% when recurrent tumors received a marginal radiation dose exceeding 20 Gy compared with 15 Gy. In contrast to single-fraction SRS, no studies have evaluated the impact of different hypofractionated schedules for grade 2 or 3 meningiomas. In summary, higher doses given in conventional fractionation seem to provide better overall outcomes than lower doses; however, no controlled prospective studies have directly compared different doses, and the survival advantages observed with higher doses remain to be confirmed. Similarly, RT modalities have not been compared in well-designed studies to provide evidence of the superiority of one treatment modality over the others. With regard to the timing of RT for atypical meningiomas, postoperative RT seems more effective when administered adjuvantly rather than at recurrence, and most authors recommend this approach [35-37, 42, 47, 48, 52, 60, 89]. In the study of Lee et al.
[22], adjuvant RT was associated with a longer time to tumor progression compared with salvage RT: after gross total resection, the 5-year and 10-year progression-free survival rates were both 94% in the adjuvant RT group, versus 42% and 36%, respectively, in the salvage RT group. For patients with an unresectable and symptomatic meningioma, or with an imminent risk of symptoms in case of further progression, there is a general consensus that RT should be initiated as soon as possible [6]. Interestingly, Islim et al. [93] developed a prognostic model to guide personalized monitoring of patients with incidental asymptomatic meningiomas. By combining data on patient characteristics (age, performance status, and co-morbidities) and MRI features, including tumor hyperintensity, peritumoral edema, proximity to neurovascular structures, and size, they proposed an individualized monitoring strategy for patients at low, medium, or high risk of tumor progression, and developed a freely available calculator (https://www.impact-meningioma.com). Results of the ROAM/EORTC 1308 trial, expected in 2025, will help to better define the postoperative management of these patients.

Reirradiation Thanks to continuous improvements in radiation science and technology, reirradiation has emerged as a feasible approach for patients with different brain tumors [55]. A few retrospective studies have reported the feasibility of reirradiation for patients with recurrent meningiomas [66,68,88]. In a series of 43 patients receiving a second course of RT, Lin et al. [74] showed local control, progression-free survival, and overall survival rates of 77%, 60%, and 87% at 1 year, and 70%, 43%, and 68% at 2 years, respectively, for grade 2 and grade 3 meningiomas, with no significant differences between fractionated RT and SRS. The treatment was associated with an acceptable toxicity profile, with 15% of patients developing grade 2 to 4 radionecrosis. This is consistent with previous studies on reirradiation of brain gliomas suggesting that the risk of symptomatic brain necrosis is low if the cumulative equivalent dose in 2 Gy fractions (EQD2) is less than 100 Gy [88]. Overall, a few studies support reirradiation as a feasible treatment option for selected patients with atypical and anaplastic meningiomas that recur after previous standard treatment. Prospective studies with appropriate follow-up are needed to validate the favorable impact of reirradiation, delivered either as fractionated SRT or as SRS, for recurrent meningiomas.

Toxicity The reported toxicity after postoperative RT for atypical and anaplastic meningiomas is modest; using typical doses of 54-60 Gy, toxicity ranges from 0 to 17% and includes radiation-induced brain necrosis (0-15%), visual disturbances (2-5%), hypopituitarism (5-30%), and cognitive disturbance (2-17%) (Tables 1, 2, 3, and 4). In the EORTC 22042-26042 observational study, the rate of late adverse effects of Common Terminology Criteria for Adverse Events grade 3 or higher associated with adjuvant RT following gross total resection of an atypical meningioma was 14.3%, with no toxic deaths, using a radiation dose of 60 Gy given in 2 Gy per fraction [57]. In the NRG Oncology/RTOG 0539 trial reporting the clinical outcome of 53 patients who received IMRT with a dose of 60 Gy given in 30 fractions for a high-risk meningioma, Rogers et al.
[10] reported combined acute and late adverse events in about 40% of patients, although these were limited to grades 1 to 3, except for a single necrosis-related grade 5 event, at a median follow-up of 4 years. Of note, only grade 1 and 2 adverse events occurred in patients with intermediate-risk meningiomas who were treated with IMRT or 3D conformal RT using doses of 54 Gy given in 30 fractions in the same trial [4]. A similarly acceptable incidence of radiation-related toxicity has been reported in the majority of published studies of conventionally fractionated RT including either atypical or anaplastic meningiomas (Tables 1 and 2). For patients receiving SRS, neurological toxicity rates of up to 26% have been reported in a few studies [22,24,27,82], although toxicity remains below 10% when limited volumes are treated [15,17,21]. The potential neurocognitive toxicity of adjuvant RT is a major reason why physicians hesitate to offer it to patients with an atypical meningioma after gross total resection. The incidence of neurotoxicity ranges from 3.4 to 16.7% according to the location of the lesion, radiation dose, and radiation modality, although no published studies have evaluated neurocognitive changes after RT using formal neuropsychological testing. In general, the available studies support the safety of radiation treatment given adjuvantly or at recurrence. Advanced RT techniques such as IMRT and VMAT can further improve the safety profile. Conventionally fractionated RT is usually employed as adjuvant treatment for patients with a large resection cavity or large recurrent tumors, while SRS or hypofractionated schedules may represent a feasible treatment option for small-to-moderate tumors, usually less than 3 cm in size and not in close proximity to sensitive brain structures such as the brainstem or optic apparatus.

Conclusions At present, surgery retains a central role in the management of atypical and anaplastic meningiomas. For most patients, gross total resection remains the benchmark, although total surgical excision within the constraints of acceptable morbidity is not always achievable. Postoperative RT is usually recommended after subtotal resection, with several studies indicating improvements in local control of up to 70% at 5 years. Similar rates have been shown after SRS; however, the latter is usually offered to patients with smaller-to-moderate recurrent tumors. Controversy exists regarding the role and the efficacy of postoperative adjuvant RT in patients receiving gross total resection. The relatively divergent results in the literature are most likely explained by the retrospective nature of the series and the relatively small number of patients evaluated. Keeping this in mind, the EORTC 22042-26042 and RTOG 0539 prospective trials have already confirmed an excellent patient outcome, with approximately 90% progression-free survival rates at 3 years for WHO grade 2 meningiomas undergoing complete resection and adjuvant high-dose RT, depending on patient-, tumor- and treatment-related factors. Additional studies should better elucidate the timing, the optimal dose/fractionation, and the radiation technique for these tumors. The development of a molecularly based classification of meningiomas will provide a better understanding of tumor biology and could help predict which patients will benefit from adjuvant therapy.

Data availability All data supporting the results of this review are published in the cited references.
Code availability Not applicable. Author contribution GM designed and drafted the manuscript and performed the literature research and data extraction. LV, SA, MG, IR, VC, and SP contributed to the development, preparation, and shaping of the manuscript. PT contributed to the preparation of the manuscript and edited the final version. All authors read and approved the final manuscript. Funding Open access funding provided by Università degli Studi di Siena within the CRUI-CARE Agreement. Declarations Ethics approval Not applicable (literature review). Conflict of interest The authors declare no competing interests. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
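As a brief aside on the EQD2 threshold cited in the reirradiation section of this review, cumulative biologically weighted doses are conventionally computed with the linear-quadratic model. The sketch below assumes an α/β of 2 Gy for normal brain and hypothetical treatment schedules, so it illustrates the arithmetic only and is not a recommendation taken from the review.

```python
def eqd2(total_dose_gy, dose_per_fraction_gy, alpha_beta=2.0):
    """Equivalent dose in 2 Gy fractions from the linear-quadratic model:
    EQD2 = D * (d + alpha/beta) / (2 + alpha/beta)."""
    return total_dose_gy * (dose_per_fraction_gy + alpha_beta) / (2.0 + alpha_beta)

# Hypothetical example: a 54 Gy / 30-fraction first course plus a 30 Gy / 5-fraction re-treatment
first_course = eqd2(54.0, 1.8)   # ≈ 51.3 Gy
retreatment = eqd2(30.0, 6.0)    # 60.0 Gy for alpha/beta = 2
cumulative = first_course + retreatment
print(f"Cumulative EQD2 ≈ {cumulative:.1f} Gy")  # ≈ 111.3 Gy, i.e. above the 100 Gy threshold
```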
v3-fos-license
2021-09-11T06:17:04.261Z
2021-09-01T00:00:00.000
237469254
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1996-1944/14/17/5048/pdf", "pdf_hash": "2605b8ec93762432b7ba83383bd9c93a4b893956", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42205", "s2fieldsofstudy": [ "Materials Science", "Medicine" ], "sha1": "ac213cf55afdf8e77b6e9a5cf1d63544bae40389", "year": 2021 }
pes2o/s2orc
Adhesive-Ceramic Interface Behavior in Dental Restorations. FEM Study and SEM Investigation The purpose of this study is to identify the stress levels that act in inlay and onlay restorations according to the direction and value of the applied external force. The study was conducted using the Finite Element Method (FEM) for three types of ceramics (pressed lithium disilicate, monolithic lithium disilicate, and zirconia) and three different adhesive systems (self-adhesive, universal, and dual-cure cements). In addition to FEM, the inlay/onlay-dental structure interface was analyzed by means of Scanning Electron Microscopy (SEM). The geometric models were reconstructed from computer tomography images of an undamaged molar, followed by geometrical procedures reproducing the inlay and onlay preparations. The two functional models were then simulated for different orientations of the external force and different material properties, according to the considered adhesives and ceramics. The SEM investigation was conducted on 30 extracted teeth, divided into three groups according to the adhesive cement type. Both the FEM simulations and the SEM investigations reveal very good mechanical behavior of the adhesive-dental structure and adhesive-ceramic interfaces for inlay and onlay reconstructions. All results lead to the conclusion that a physiological mastication force, regardless of its direction, cannot produce a mechanical failure of either the inlay or the onlay reconstruction. The adhesive bond between the restorations and the dental structure stabilizes the ceramic restorations, resulting in a higher strength against the action of external forces.

Introduction Restoring posterior teeth with large losses of tooth structure can be achieved with direct or indirect adhesive techniques, of which inlays and onlays are an option because they ensure safe long-term adaptation with minimally invasive preparation. However, there is controversy about the type of restoration that should be used to restore major defects in order to avoid fractures and improve survival. The design of an indirect restoration must strike a balance between strictly removing only decayed structures and increasing the strength of the restoration. Removal of the marginal ridges and the width and depth of the inlay cavity are the main reasons for low fracture strength [1]. In the posterior area of the dental arches, the replacement of classical restorations with inlays or onlays is a solution that improves all the parameters on which the success of dental treatment depends [2]. For the restoration of posterior teeth, there is a great variety of materials on the market that meet the new aesthetic requirements of patients, such as composite resins, ceramics, and zirconia. Ceramics offer adequate fracture resistance (160-450 MPa), good survival rates, and a high modulus of elasticity, and do not suffer excessive abrasion or wear [3]. Ceramic restorations were introduced in dentistry using traditional feldspathic porcelain, which is biocompatible and resistant to abrasion and compressive forces. Regardless of the manufacturing method, glass-ceramic inlays and onlays have shown a high survival rate, providing evidence that these restorations are a safe treatment [4].
In recent decades dental ceramics have evolved significantly, especially zirconia, which has proven to be a high-strength dental material [5], but still, problems like fracture and debonding remain the main concerns of restoration failure. The use of adhesive techniques for cementing ceramic restorations has led to an increase in their mechanical strength. It should be taken into account that proper adhesive cementation can increase the fracture strength of ceramics by up to 69%, which increases the strength and reduces the propagation of cracks [6]. Cementation of ceramic restorations is usually done with self-etch, self-adhesive resins, or dual-cure adhesive resins (lightcuring and chemical polymerization). For low-strength feldspar porcelains, the use of the total-etch technique (etch and rinsing) is recommended, followed by the use of dual-cure adhesive cements for crowns and inlays [7]. Self-adhesive cements ensure adhesion by chelating calcium ions by acid groups, producing a chemical bond with the hydroxyapatite of the dental structure. This is a superficial interaction and does not promote the formation of a hybrid layer or smear plug in the dentinal tubules achieved by conventional adhesive cements by the initial application of the adhesive complex with the promotion of the formation of a hybrid layer and a better bond to dentin [8]. Each of these cementation methods has an exact application protocol at the dental level, in one or more stages, which must be strictly observed to obtain the expected results. On the other hand, it is necessary to prepare by etching with 5 and 10% hydrofluoric acid the inner surface of the ceramic to ensure a surface with micro-retentions and increased adhesion [9]. The application of silanes on the etched inner surface of the ceramic ensures good adhesion with the adhesive cement, the silane realizing the connection between the mineral particles of the ceramic and the organic component of the adhesive cement. One of the challenges produced by cementation is related to the way in which the adhesion at the interfaces of the tooth/adhesive layer/cement/ceramic monolith is achieved. The tighter these areas are, without gaps or fracture lines, the higher the clinical performance. Aesthetics is not the only criterion that should be discussed in the realization of the restoration treatment plan, but, perhaps more importantly, the chosen materials should be resistant to stress produced by functional forces exerted on the tooth during mastication and also to the action of parafunctional forces that can cause stress on hard dental structures, tissues and periodontal bone. Determining the distribution and analysis of these stresses are of fundamental importance in research and can constantly contribute to reducing the risk of dental restoration failure. The stress aspects in dental restorations are difficult to determine by experiments, but with an acceptable approximation, the stress-strain state can be evaluated using finite element analysis (FEA). The simulation allows a better understanding of the biomechanics of any dental geometry consisting of a multitude of materials with different mechanical properties [10]. The simulation input usually comes from three directions: geometry, material properties, and loading/fixing conditions. The loading conditions, for example, are coming from real measurements. 
These reveal that the maximum occlusal force of a healthy man can reach up to 847 N and a healthy woman up to 597 N, while bruxism can increase the maximum occlusal force to over 900 N [11][12][13][14]. Additionally, the elastic properties of natural tooth and restorative materials have to be considered in simulation. Identification of these represents a constant concern in the field of dentistry and material sciences [15][16][17]. The objectives of this study are to identify the values of equivalent stress and strain as well as the deformations that occur according to the direction of applied load on inlay and onlay restorations made of three types of ceramics: IPS e.max CAD-on, Ivoclar Vivadent; ceramics IPS e.max Press, Ivoclar Vivadent; and Novodent GS Zirconia, cemented with three adhesive systems: self-etching Maxcem Elite, Kerr; universal-RelyX Ultimate Clicker, 3M ESPE; and dual-cure Variolink Esthetic LC, Ivoclar Vivadent. In addition, the interface analysis by scanning electron microscopy was conducted, focusing on local aspects at dental structure-adhesive cement interface and also at the interface between the adhesive and the dental ceramic. Design of Inlay and Onlay Restorations The construction of the geometric model was accomplished by reconstructing a set of computer tomography images of an unaffected natural molar. This reverse engineering technique allowed us to achieve accurate shape and dimensions of the tooth and to reproduce the finest geometric features [18][19][20]. The reconstruction was made in MIM-ICS 10.1 (Materialise Inc., Leeuwen, Belgium), in which the 3D geometry was achieved. The refinement and repairing of the structural mesh were then conducted in Geomagic Studio 9 (3D Systems, Morrisville, NC, USA) in order to obtain a valid virtual solid. The rest of the geometric operations were done on SolidWorks 2019 (Dassault Systèmes SE, Vélizy-Villacoublay, France). The starting image collection and the reconstructed molar obtained in this way are presented in Figure 1. The natural reconstructed molar has been truncated with a plan in order to isolate only the crown and cervical area. The purpose of this operation was to reduce the root area, which is not particularly interesting in the analysis, only multiplying the calculation model without bringing an essential effect on the types of preparations to be studied. healthy man can reach up to 847 N and a healthy woman up to 597 N, while bruxism can increase the maximum occlusal force to over 900 N [11][12][13][14]. Additionally, the elastic properties of natural tooth and restorative materials have to be considered in simulation. Identification of these represents a constant concern in the field of dentistry and material sciences [15][16][17]. The objectives of this study are to identify the values of equivalent stress and strain as well as the deformations that occur according to the direction of applied load on inlay and onlay restorations made of three types of ceramics: IPS e.max CAD-on, Ivoclar Vivadent; ceramics IPS e.max Press, Ivoclar Vivadent; and Novodent GS Zirconia, cemented with three adhesive systems: self-etching Maxcem Elite, Kerr; universal-RelyX Ultimate Clicker, 3M ESPE; and dual-cure Variolink Esthetic LC, Ivoclar Vivadent. In addition, the interface analysis by scanning electron microscopy was conducted, focusing on local aspects at dental structure-adhesive cement interface and also at the interface between the adhesive and the dental ceramic. 
Relying on the basic model, two functional models were designed, hereafter referred to as the inlay and onlay models. These were constructed as two functional assemblies and shaped in such a way that the basic model, the restoration, and the adhesive layer are distinct parts to which different material properties can be applied. The design stages of the inlay model can be observed in Figure 2. The basic model includes the dentin volume and the enamel shell. On this structure, the inner cavity was machined and assembled with two conjugated volumes: one corresponding to the adhesive cement and another corresponding to the restoration geometry. The final model is an assembly containing four distinct geometric elements.

The onlay model involves a reconstruction of the entire upper surface of the dental crown and a transversal cut, in two planes, generating the surfaces necessary for the virtual restoration. The dimensional jump obtained by this profile cut materializes a particularly important area from a practical point of view: a stress concentration zone both at the level of the ceramic and at the level of the cement. The stages of the onlay model design are presented in Figure 3. As with the inlay model, the adhesive cement part was overlaid on the shaped basic model, followed by the ceramic restoration. In this way, the complete onlay model also consists of four distinct geometric elements.
Defining the Materials and Model Association

The simulations were conducted in ANSYS 2019 (ANSYS Inc., Canonsburg, PA, USA), separately for each functional model. For the static structural analysis, the following simulation parameters were set: the materials corresponding to each structure; the contacts between components; the loading and fixing conditions; the discretization of the structure; and the type of results [21-25]. The commercially available dental materials used in the study are presented in Table 1. The elastic properties of the restoration structures (longitudinal modulus of elasticity, bending modulus of elasticity, and Poisson's ratio) were selected from the literature, based on experimental trials of the ceramics and cements conducted either by the manufacturers or by independent researchers, and are presented in Table 2 [26-30].
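To make the role of Table 2 concrete, the elastic input of such a simulation is essentially a Young's modulus and a Poisson's ratio per structure. The values below are rough, literature-style placeholders for illustration only; they are not the values assigned in the study.

# Illustrative placeholders (GPa, dimensionless); see Table 2 for the values
# actually used in the simulations.
elastic_properties = {
    "enamel":             {"E_GPa": 84.0,  "poisson": 0.30},
    "dentine":            {"E_GPa": 18.0,  "poisson": 0.31},
    "zirconia_ceramic":   {"E_GPa": 205.0, "poisson": 0.31},
    "lithium_disilicate": {"E_GPa": 95.0,  "poisson": 0.23},
    "adhesive_cement":    {"E_GPa": 8.0,   "poisson": 0.30},
}

for name, prop in elastic_properties.items():
    print(f"{name}: E = {prop['E_GPa']} GPa, nu = {prop['poisson']}")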
Loading, Fixing Conditions, Contacts and Discretization of the Models

Loading and fixing represent key parameters in the simulation: the closer they are to the natural situation, the more realistic the simulation becomes. Following this principle, the loading surface was selected on the cuspal zone for the onlay model and on the transverse groove for the inlay model. A single loading value F = 170 N was chosen for both models but applied in six directions, one at a time. The directions simulate a pure shear effect (0°) on the models at one end and a pure compression effect (90°) at the other; the intermediate loadings of 30°, 45°, 60°, and 75°, in which shear and compression are combined, are shown in Figure 4. The loading value corresponds to a normal mastication force that may occur at the molar level [11-14]. Both models were fixed at the level of the truncation plane, simulating dental roots fixed under normal conditions. All displacements of the structure were left free so that the model can deform in any direction.
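The six loading cases can be summarized by decomposing the 170 N force into its shear (tangential) and compressive (normal) components with respect to the loaded surface; the short sketch below illustrates this decomposition (an explanatory addition, not part of the original ANSYS setup).

import numpy as np

F = 170.0                                   # applied force (N)
angles_deg = np.array([0, 30, 45, 60, 75, 90])
angles = np.deg2rad(angles_deg)

shear = F * np.cos(angles)        # 0 deg  -> pure shear on the model
compression = F * np.sin(angles)  # 90 deg -> pure compression

for a, s, c in zip(angles_deg, shear, compression):
    print(f"{a:>2} deg: shear = {s:6.1f} N, compression = {c:6.1f} N")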
The bonded contacts were defined manually between the surfaces: dentin-cement, dentin-enamel, inlay/onlay-adhesive, and adhesive-enamel. Ensuring a realistic contact surface in terms of type, dimensions, and position is very important, as it directly influences how stress and strain are transmitted from one element of the model to another. The discretization of the structure was done in automatic mode using 10-node tetrahedral elements. The continuum was divided into discrete elements with the following parameters: the inlay model contains 39,897 nodes and 20,945 elements; the onlay model contains 45,960 nodes and 23,211 elements.

Scanning Electron Microscopy Analysis

The marginal adaptation of the ceramic and zirconia inlays to the dental structures (dentin and enamel) was verified in an in vitro study. The study was performed on 30 extracted teeth on which inlay and onlay preparations were made; the teeth were divided into three groups (Table 3). The preparations were made according to the standard rules of minimally invasive dentistry, and cementation was carried out in strict compliance with the manufacturers' instructions. For the SEM analysis, the samples were embedded in resin, sectioned horizontally under cooling, and fixed on a stub-type aluminum support. The assembly was then placed in a Quorum coater and coated for 60 s with a 9 nm gold layer to ensure the conductivity of the sample in the SEM. The samples were investigated with a QUANTA INSPECT F scanning electron microscope equipped with a field-emission gun (FEG) with a resolution of 1.2 nm and an energy-dispersive X-ray spectrometer (EDS) with an MnKα resolution of 133 eV.

Results

The FEM simulation provided the stress, strain, and displacement values in all nodes of the discrete structure, in the form of colored maps. For a better interpretation of the results, a series of stress values were extracted at specific points of the structure: the stresses in the middle area of the restoration material and the stresses in the middle area of the adhesive cement. The maps showing these stress sampling areas are presented, together with their coordinates in the XoY plane, for the inlay and onlay models in Figure 5. The gray areas represent very low stress values in the structure, while the areas shown in yellow and red are intensely stressed.
The equivalent stresses presented in this paper are the von Mises stresses, computed from the principal stresses using Equation (1):

\sigma_{eq} = \sqrt{\tfrac{1}{2}\left[(\sigma_1-\sigma_2)^2+(\sigma_2-\sigma_3)^2+(\sigma_3-\sigma_1)^2\right]}  (1)

The maximum shear stresses presented in the paper are formulated using the principal stresses, according to Equation (2):

\tau_{max} = \tfrac{\sigma_1-\sigma_3}{2}  (2)

The equivalent elastic strain is formulated based on Poisson's ratio and the component strain values, according to Equation (3):

\varepsilon_{eq} = \tfrac{1}{1+\nu'}\sqrt{\tfrac{1}{2}\left[(\varepsilon_1-\varepsilon_2)^2+(\varepsilon_2-\varepsilon_3)^2+(\varepsilon_3-\varepsilon_1)^2\right]}  (3)

where \sigma_1 \geq \sigma_2 \geq \sigma_3 are the principal stresses, \varepsilon_1, \varepsilon_2, \varepsilon_3 the principal strains, and \nu' the effective Poisson's ratio.
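For readers who wish to reproduce the post-processing, the three quantities above can be evaluated directly from the principal values exported at each node. The short Python sketch below is an illustration added here (it is not part of the original ANSYS workflow) and implements Equations (1)-(3); the example values are hypothetical.

import numpy as np

def von_mises(s1, s2, s3):
    # Equation (1): equivalent (von Mises) stress from the principal stresses
    return np.sqrt(0.5 * ((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2))

def max_shear(s1, s2, s3):
    # Equation (2): maximum shear stress from the extreme principal stresses
    return (np.maximum.reduce([s1, s2, s3]) - np.minimum.reduce([s1, s2, s3])) / 2.0

def equivalent_strain(e1, e2, e3, nu_eff=0.3):
    # Equation (3): equivalent elastic strain with effective Poisson's ratio nu_eff
    return np.sqrt(0.5 * ((e1 - e2)**2 + (e2 - e3)**2 + (e3 - e1)**2)) / (1.0 + nu_eff)

# Example: principal stresses in MPa at one sampling point (hypothetical values)
print(von_mises(12.0, 5.0, -3.0), max_shear(12.0, 5.0, -3.0))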
Inlay Model Simulation

Figures 6 and 7 show the equivalent and shear stresses acting in longitudinal sections of the inlay model. The stress distribution is unbalanced in the case of horizontal loading (0°) and tends to become uniform under pure compression loading (90°). Higher stresses at the cement-ceramic interface are therefore expected for horizontal loading, while more uniform and thus lower stress values are recorded at the same interface under compressive loading. A gradual change in the stress distribution can be observed from the 0° to the 90° loading direction, although the intermediate orientations of 30°, 45°, and 60° return similar distributions.

The elastic strain of the structure is shown in Figures 8 and 9 according to the loading direction. This dimensionless parameter depends on the modulus of elasticity of each component and on the internal loading state, and it is a very good indicator of how the dental structure reacts to external loads. Symmetric strain in the dentine can be observed for symmetric loadings, whereas asymmetric strains appear for oblique loads. Oblique loadings (15°-45°) produce a significant deformation on one side of the adhesive cement, leaving the other side undeformed.

The variation of the stresses according to the sampling point, together with the mechanical strengths of the cements and ceramics used in the study, is shown in Figures 10 and 11. Here, the curves for each loading orientation were compared with the mechanical strengths of the dental materials reported in the literature [28-30]. With all stresses and mechanical strengths expressed in MPa, possible exceedances of the critical values can be readily identified. It can be observed that the mechanical strength values of all three ceramics (cer I, cer II, and cer III) exceed by at least an order of magnitude the equivalent and shear stress values recorded at the interface. This confirms that a loading within the physiological mastication range, in any direction, cannot produce a mechanical failure of the adhesive-ceramic inlay interface. These results are, however, only reliable under ideal bonding contact between the two materials. Regarding the cement-biological tissue interface, the stresses in the three adhesive cements (cem I, cem II, cem III) were analyzed in the same way. The mechanical strength of the adhesives is much closer to the stress values in the structure, but still below the critical point.
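As an illustration of this comparison (not taken from the paper, and using hypothetical strength and stress values in place of those read from Figures 10 and 11 or Table 2), a simple per-material safety-factor check could look as follows.

import numpy as np

# Hypothetical flexural strengths (MPa); the study's values are in Table 2 / Figures 10-11.
strength = {"cer_I": 360.0, "cer_II": 400.0, "cer_III": 900.0,
            "cem_I": 100.0, "cem_II": 110.0, "cem_III": 120.0}

# Hypothetical maximum interface stresses (MPa) over the six loading directions.
max_stress = {"cer_I": 14.0, "cer_II": 14.0, "cer_III": 14.0,
              "cem_I": 55.0, "cem_II": 48.0, "cem_III": 95.0}

for name in strength:
    sf = strength[name] / max_stress[name]
    flag = "OK" if sf > 1.0 else "CRITICAL"
    print(f"{name}: safety factor = {sf:.1f} ({flag})")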
Onlay Model Simulation

In the case of the onlay model, the load application surface was located in the median area (along the longitudinal axis) of the molar. The larger contact surfaces between the restoration and the natural dentine-enamel structure allowed a better distribution of stress from the reconstruction material to the dentine and the dental roots (the fixed surface in our case). The equivalent and shear stresses in a longitudinal section of the model can be observed in Figures 12 and 13. A stress concentration point can be identified in the vicinity of the dimensional jump of the ceramic restoration, and for this reason the sampling points were concentrated in this area. This site combines two weakening factors: it has a smaller cross-sectional area than the rest of the restoration, and it lies very close to the surface where the force is applied. The two factors combine to generate a higher stress state in the model. The stress values at the sampling points are given in Tables 6 and 7, corresponding to the directions of the applied force.
Based on these values, the stress variation plots were obtained and then compared with the mechanical strengths of the dental materials, as for the inlay model. The mean and standard deviation represent the average behavior of the stress over the considered sampling points of the interface.

The elastic and shear strains of the structure (Figures 14 and 15) show its preferential deformation state according to the external loading. The materials with a lower elastic modulus, dentine in our case, deform naturally in order to transmit the internal stresses. In the vicinity of the larger section of the restoration ceramic, the strain value is low (also due to the high rigidity of the material), so the deformations flow in a preferred direction, from the dimensional jump of the structure directly to the dentine and eventually to the fixed plane.

Figures 16 and 17 show the stress curves drawn through the considered sampling points for the six loading directions, together with the constant curves representing the mechanical strengths of the dental materials and cements. Again, the mechanical strength values of the ceramics are not exceeded by the shear or equivalent stresses in the structure, even though these curves indicate a significant increase (100-200% for the equivalent stresses) of the stresses in the dimensional jump area. Regarding the cements used in the simulation, the maximum stress values for the onlay model do not exceed the mechanical limit of the materials, mainly owing to the wider contact surface of the cement compared to the inlay model.

(Figure 17. Mechanical strength of dental materials and shear stresses recorded at the interface: (a) ceramic; (b) adhesive cement.)
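The per-direction summary mentioned above (mean and standard deviation over the sampling points, as reported in Tables 6 and 7) can be obtained with a few lines of Python; the stress array below is a hypothetical placeholder for the sampled values, not data from the study.

import numpy as np

directions = [0, 30, 45, 60, 75, 90]  # loading directions (degrees)
# Rows: loading directions, columns: sampling points (hypothetical MPa values).
sampled_stress = np.array([[18.2, 16.9, 15.4, 14.8],
                           [15.1, 14.2, 13.6, 12.9],
                           [13.8, 13.0, 12.4, 11.7],
                           [12.5, 11.9, 11.2, 10.8],
                           [11.6, 11.0, 10.5, 10.1],
                           [10.9, 10.4,  9.9,  9.6]])

for angle, row in zip(directions, sampled_stress):
    print(f"{angle:>2} deg: mean = {row.mean():.1f} MPa, std = {row.std(ddof=1):.1f} MPa")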
SEM Analysis of Inlay Model

The attachment of the adhesive cement to the dental structure and to the ceramic inlay was investigated by SEM at 200, 500, and 1000x magnification. According to Figure 18A, there is a very good attachment of the cement to the dental structure, without cracks, gaps, and/or fracture lines. Figure 18B shows the adhesive cement-inlay interface, which presents an alternation of areas with very good attachment of the involved structures and areas with gaps. According to Figure 18C, there is good adhesion both at the tooth-adhesive system interface and at the adhesive system-adhesive cement interface, together with a slight detachment of the inlay from the adhesive cement.

According to Figure 19A, detached micro-fragments of the ceramic are observed at the border between the two materials, the adhesion probably having been lost during sectioning of the specimens. There is no clear demarcation line between the materials, which suggests that the cement adheres very well to the ceramic. Figure 19B shows a perfect adhesion between the adhesive system and the adhesive cement, as well as between the latter and the dentin, evidenced by the presence of a thin and uniform hybrid layer. According to Figure 19C, perfect hybridization of the enamel is observed, characterized by the interlocking of the adhesive within the micro-retentions created by the action of phosphoric acid.
SEM Analysis of Onlay Model

The SEM investigation of the onlay model was conducted in the same way as for the inlay model; the results are shown in Figure 20A-C. The adhesive cement layer is homogeneous, well represented, and adheres very well to the ceramic component (image A). Image B shows the perfect adhesion of the cement to the ceramic, without gaps and/or fractures, and the presence of silane that penetrated the micro-retentions obtained by sandblasting and the application of hydrofluoric acid. In image C, a detachment of the onlay from the adhesive cement can be seen, together with good adhesion between the cement and the dentin, with the presence of a dense, narrow, and uniform hybrid layer.

Discussions

This study examined the behavior of two restoration models, inlay and onlay, subjected to a physiological load of 170 N applied in six directions. Through the microscopic analysis, the study also examined how adhesion is achieved at the restoration/adhesive/tooth interfaces, according to the dental material used. The inlay model simulated a frusto-conical ceramic inlay located in the central area of the molar and inserted into the dental structure to a depth of 4.2 mm.
Based on the simulated structure, the equivalent stresses, shear stresses, equivalent strains, and shear strains were extracted from the surface and from the inside of the model, in accordance with the properties of the ceramic and adhesive dental materials. The onlay reconstruction model was designed to simulate a situation as unfavorable as possible from the mechanical point of view. This was achieved by a dimensional jump at the level of the ceramic element, in the vicinity of which a high concentration of stresses is expected.

A very recent study that analyzed the transmission of stresses in ceramic inlays showed that the highest stress values occurred in teeth restored with inlays, the stresses recorded on onlays being lower. The authors found the highest stresses on the zirconium inlay, while the lowest stresses were observed in the dental structures, the cement, and at the interface between the restoration and the dental tissue, in agreement with our study [17]. For onlays, restorations made of lithium disilicate ceramic offered the highest breaking strength, especially when the restorations were cemented with dual-curing resin, but the differences were not statistically significant; the fracture strength of onlays is therefore not significantly influenced by the material used to manufacture them [3,31]. In our study, the shear stress distribution at the outer level of the molar showed a gradual increase with the transition from the 0° to the 90° loading direction. The equivalent stresses, on the other hand, varied significantly only between the two extreme loading directions: horizontal (0°) and vertical (90°). It was also shown that a 170 N load could not produce mechanical fracture of the ceramic inlay manufactured from the considered materials, regardless of its direction. Regarding the effect of cavity design on stress distribution, the onlay design protected the dental structures more effectively than the inlay models, according to other researchers [1,31]. According to another FEA study performed on CAD-CAM ceramic restorations (IPS e.max CAD-on) loaded on the occlusal and occlusal-vestibular surfaces, no significant differences were reported in the cement layer or between different preparation designs [32].
Özkir conducted an FEA study to determine the stress distribution in teeth and onlay-type restorations made of all-ceramic and composite resin, reporting that the highest stress concentration was observed in the ceramic restoration (3.77 GPa) while the lowest stress value was recorded in the tooth (1.69 GPa) [33]. Stress concentration sites indicate the likely onset of tooth failure and fracture. Materials with a high elastic modulus (porcelain, for example) tend to accumulate high stresses but do not transfer those stresses to the tooth structure, and therefore help avoid crown fractures [33,34]. Another recent study that analyzed the strength of occlusal and occlusal-vestibular restorations under a force of 300 N showed that teeth restored with IPS e.max CAD presented the highest stress at the level of the restoration, the lowest stress in the substrate tooth, and the lowest probability of failure for the overall system, without significant differences in the cement layer or between different preparation designs [33,34]. Inlay restorations have a higher fracture strength when made of lithium disilicate ceramic, and onlays made of lithium disilicate ceramic offered the highest breaking strength, especially when cemented with dual-curing resin [3,31].

Cementation is a critical step, and its long-term success depends on adherence to clinical protocols. Proper management of adhesive cementation requires knowledge of adhesive principles and strict observance of the clinical protocol in order to achieve a lasting bond between the tooth structure and the restorative materials. The adhesive bond between the restoration and the dental structure can stabilize ceramic restorations, resulting in higher resistance to external forces. Because the mechanical strength of the adhesives is much lower than that of the ceramics, attention is focused on the behavior of the adhesive interface. The dual-cure cement Variolink Esthetic has a mechanical strength limit very close to the stress values obtained at the critical points of the structure; this could cause a structural failure in inlay restorations, especially when the load is applied obliquely or tangentially. In the onlay model, mainly owing to the wider contact surface of the cement, the maximum elastic limit is not exceeded; however, for an accidental increase of the load above the simulated 170 N, the properties of the self-adhesive cement Maxcem could easily be exceeded. Cementation of the lithium disilicate ceramic restorations used in this study is performed after etching the inner surface with hydrofluoric acid, which increases adhesion through the infiltration of the adhesive cement into the resulting micro-retentions. In addition, the silanization process increases the ceramic-cement adhesion through the bond between the ceramic silica and the organic matrix of the cement [35].

SEM images of the self-adhesive cement-inlay interface reveal areas of very good attachment interrupted by gaps and, in the case of the onlay, a homogeneous, well-represented layer of adhesive cement that adheres very well to the ceramic. When evaluating the universal adhesive cement, detached ceramic micro-fragments can be seen at the inlay-cement interface, together with perfect adhesion to the ceramic and the presence of silane penetrating the micro-retentions in the case of the onlay.
When evaluating the adhesion of the dual-cure cement (Variolink Esthetic), a separation of the interfaces is observed for both the zirconia inlay and the onlay. The SEM analysis of the adhesive cement-dental structure interface shows a very good adhesion of the self-adhesive cement (Maxcem), without cracks, gaps, and/or fracture lines. There is also good adhesion of the dual-cure cement (Variolink Esthetic), highlighted by the presence of a dense, narrow, and uniform hybrid layer. In the case of the universal cement (RelyX Ultimate), perfect hybridization of the enamel and a thin, uniform hybrid layer in the dentin area are observed; a similar aspect was reported by Aguiar et al., who highlighted the formation of a uniform, high-density hybrid layer with long resin smear plugs [36].

Conclusions

The FEA simulation and the SEM investigation of the inlay and onlay restoration structures lead to the following conclusions:
• The mechanical strength values of the ceramics (cer I, cer II, cer III) are superior by at least one order of magnitude to the equivalent and shear stress values in both the inlay and onlay models. This confirms that a real load similar to the simulated one cannot produce a mechanical failure of any of the ceramics.
• The adhesive cements, especially the dual-cure cem III, had a mechanical strength limit very close to the stress values obtained at the critical points of the structures. In practice, this could lead to fracture initiation sites, especially when the applied load is oblique or tangent to the molar.
• The shear stresses show jumps of 100-200% due to the dimensional jump of the onlay reconstruction. This type of structure is, at least geometrically, inferior to the inlay, making it a likely candidate for crack failure if an accidental overload occurs.
• The adhesion between the restorations and the tooth structure can stabilize ceramic restorations, resulting in higher resistance to external forces.
• The adhesive cement/restoration interface appears more difficult to achieve in inlays; self-adhesive and universal cements seem to be more efficient in onlay-type restorations.
• The adhesive cement/dental structure interface is much more reliably achieved, for both types of design and for all cementing techniques.
An articulated handle to improve the ergonomic performance of robotic dextrous instruments for laparoscopic surgery

Hand-held robotic instruments with dextrous end-effectors offer increased accessibility and gesture precision in minimally invasive laparoscopic surgery. They combine advantages of both intuitive but large, complex, and expensive telesurgery systems, and much cheaper but less user-friendly steerable mechanical instruments. However, the ergonomics of such instruments still needs to be improved in order to decrease surgeon discomfort. Based on the results of former experimental studies, a handle connected to the instrument shaft through a lockable ball joint was designed. An experimental assessment of ergonomic and gesture performance was performed on a custom-made virtual reality simulator. Results show that this solution improves ergonomics, demanding less wrist flexion and deviation and less elbow elevation, while providing gesture performance similar to that of a robotic dextrous instrument with a standard pistol-like handle configuration.

Introduction

Several gestures in laparoscopic surgery are rather difficult to perform with conventional non-dextrous instruments. Such a straight and elongated instrument passing through a cannula has only a reduced set of 4 degrees of freedom (DOFs). As a consequence, the orientation of the instrument end-effector is coupled to its position in the abdominal cavity, and by extension to the handle posture above the patient. These kinematic constraints often force surgeons to work in awkward and painful postures, inducing discomfort and even pain after a while (Nguyen et al., 2001).

Adding one or several distal DOFs can help to restore mobility as in open surgery, making complex gestures like suturing easier. The da Vinci surgical system (Intuitive Surgical Inc., Sunnyvale, CA) offers such a functionality, and the benefits in terms of ease of use and ergonomics are well established (Freschi et al., 2013). However, its wide diffusion and use in clinical routine are restricted by the high selling price and maintenance costs of this telesurgery robotic system, and by its size and bulk.

A couple of simpler hand-held dextrous instruments have been available since 2006, such as RealHand HD from Novare Surgical Systems, Cupertino, CA (Danitz, 2006), Radius from Tuebingen Scientific Medical GmbH, Tübingen, Germany (Schwarz et al., 2005), Roticulator from Covidien Inc., Mansfield, MA (Marczyk et al., 2013), or Autonomy Laparo-Angle from Cambridge Endo, Framingham, MA (Lee and Chamorro, 2008). Research prototypes are also under development (e.g. Awtar et al., 2012; Wang et al., 2012). The distal DOFs of these devices are actuated manually either via knobs or joysticks on the handle (fingertip control), or through a jointed handle that is mechanically coupled to the end-effector (wrist control). They are very useful especially for single-port laparoscopic procedures (MacDonald et al., 2009), where the two instruments are inserted into the same cannula and cross each other in the abdominal wall, requiring intra-abdominal bending to reach the same site in a triangulated configuration (Rettenmaier et al., 2009; Frede et al., 2007; Rosenblatt et al., 2012; Endo et al., 2011). Kolwadkar et al. (2011) demonstrated experimentally that a fingertip-controlled instrument with a miniature joystick outperforms a wrist-controlled instrument on a needle driving task in conventional laparoscopy. The same conclusion was reached by Okken et al.
(2012). However, the latter study also showed that, in some cases, conventional straight instruments performed even better than both dextrous prototypes. According to the authors, this can be due to the difference in quality between commercialized standard instruments and their research prototypes, and to the absence of locking mechanisms for the jointed handle and the joystick. However, adding locking mechanisms to a miniature joystick might not be that straightforward, especially if one wishes to lock one DOF and leave the other free. For example, to perform a stitch in a plane that is not orthogonal to the longitudinal axis of the instrument, the optimal strategy is to actuate the terminal roll motion of the grasper under a constant yaw bending (Hassan Zahraee et al., 2010). Furthermore, a direct mechanical transmission that synchronizes joystick and end-effector angular configurations (position-position mapping) requires a smooth and continuous finger motion to actuate the intra-abdominal DOFs, and restricts their range of motion.

One solution to these limitations is to replace the direct mechanical link by actuators whose velocity can be controlled by the joystick (position-velocity mapping). This has several advantages over a mechanical transmission. The distal range of motion is not restricted by the joystick angular range. Using non-backdrivable transmissions between actuators and end-effector induces a self-locking behavior: if the surgeon does not tilt the joystick, the intra-abdominal DOFs remain at rest. In addition, while a high amount of force must be applied by the finger on the joystick to stitch with a mechanical transmission, the required force and torque are now provided by the actuators. Finger and hand muscular fatigue can therefore be reduced. We demonstrated on a custom-made virtual-reality (VR) simulator that the number of successful stitches was significantly higher when controlling the distal DOFs with a thumb-actuated joystick mounted on the instrument handle in a position-velocity mapping, with respect to a jointed handle with position-position mapping and a locking feature (Hassan Zahraee et al., 2010). Hand-held robotic instruments might thus combine the advantages of both solutions: a cost-effective dexterity as in mechanical dextrous instruments, with a user-friendly control of distal DOFs as offered by telesurgery systems. Three hand-held robotic instruments have so far been put on the market, all using the optimal yaw-roll kinematics: JAiMY from EndoControl Medical, La Tronche, France (Paik et al., 2010), Kymerax from Terumo Europe Advanced Surgical, Eschborn, Germany (Hackethal et al., 2012), and Dextérité Hand Held Robot from Dextérité Surgical, Annecy, France (Barrier et al., 2010). Various research prototypes were also introduced by the University of Tokyo, Japan (Yamashita et al., 2004), Toshiba Medical Systems with the Keio University School of Medicine, Tokyo, Japan (Jinno et al., 2002), the University of Darmstadt, Germany (Röse et al., 2009), Scuola Superiore Sant'Anna, Pisa, Italy (Piccigallo et al., 2008), and the Delft University of Technology (Lassooij et al., 2012). These instruments help surgeons with their biggest challenge, which is performing complex gestures in laparoscopy. However, they fail to completely address another very important problem of laparoscopists: the poor ergonomics of laparoscopic surgery.
Decoupling handle and shaft orientation with a free ball joint between them allows exploration of the entire intra-abdominal workspace without excessive wrist flexion or deviation, since the handle is not forced to rotate while swiveling the shaft. We showed on our VR simulator (Herman et al., 2011) that the ergonomic performance of such an articulated handle was significantly higher, but at the cost of a lower gesture performance (i.e. an increase in task duration and path length). This can be attributed mainly to the "floating" behavior of the free handle, which might complicate reaching a precise target by inducing some loss of shaft controllability. To solve this problem of floating behavior, introducing the possibility to lock/unlock the joint between handle and shaft seems to be a promising approach. The Dextérité Hand Held Robot also features a lockable joint between the handle and the shaft. However, it is only a 1-DOF revolute joint that allows a Yaw motion, which might not be sufficient for releasing all strains in the surgeon's wrist. In this paper, we introduce the design and performance assessment of a robotic instrument equipped with two lockable joints between handle and shaft that provide 3 revolute DOFs. Our hypothesis is that it could allow a more comfortable arm and wrist configuration regardless of the shaft orientation, while restoring instrument rigidity for a precise control of fine surgical gestures. The remainder of this paper is organized as follows: Sect. 2 describes the handle mock-up and its locking modes. An experimental performance assessment using the VR simulator is detailed in Sect. 3. Results are reported in Sect. 4 and discussed in Sect. 5.

Design overview

Figure 1 depicts the prototype of a lockable articulated handle. It is composed of a Nunchuk handle (Nintendo) with a thumb-actuated 2-DOF joystick that allows the control of the Yaw and Roll distal DOFs. Note that, for this study, these distal DOFs are implemented virtually in the simulator, whereas the actual robotized prototype (not described here) embeds DC motors to actuate them. The Nunchuk handle is connected to the instrument shaft through two lockable joints: a 2-DOF universal joint and a 1-DOF revolute joint for the shaft self-rotation (Roll). As shown in Fig. 2, the universal joint is made up of a plastic ball joint with a pin partly inserted into the ball, perpendicularly to the instrument shaft. The pin can slide along a straight groove in the joint base to allow Yaw and Pitch motions of the shaft, but prevents any Roll motion (i.e. self-rotation). The universal joint allows the user to move the shaft within a cone of approximately 70° of aperture, while self-rotation is unrestricted. A star-shaped knob at the front of the handle is linked to the instrument shaft via a flexible axle in order to control its Roll motion with the index finger, as on most straight instruments.
Both joints are locked by means of a compression spring and can be released by a thumb-actuated lever that pulls a Bowden cable. The locking torque produced by the spring on the universal joint is 0.5 Nm for Pitch (i.e. left-right motion of the instrument tip) and 0.25 Nm for Yaw (i.e. up-down motion). This is obviously not sufficient to perform a task in the real surgical world, where a force of up to 50 N may be required to insert a surgical needle into muscle tissue. Nevertheless, since the handle is designed to operate in a VR simulator without any interaction force between the instrument tip and the environment, the only forces that the locking mechanism has to withstand are the reaction and friction forces in the trocar (neglecting the shaft's own weight). Furthermore, we determined experimentally that the locking torque is one order of magnitude higher than the friction torque inside the joint. The user can thus easily feel whether the joint is locked or free during a trial run on the VR simulator, which is the purpose of this mock-up.

Locking modes

The handle can be used in one of four modes:
1. Standard fixed handle, with both joints locked in their central position.
2. Fully free handle, with both joints released during the entire task, the shaft self-rotation being controlled by the index finger.
3. 2-DOF joint adjusted and locked, 1-DOF joint free, in which the 2-DOF joint is locked in a convenient configuration during the entire task while the 1-DOF joint can be actuated by finger during the task.
4. Both joints adjusted and locked in a convenient configuration during the entire task.

The first two modes are similar to those tested previously in Herman et al. (2011), although the actuated self-rotation of the shaft through the active trocar was replaced by finger actuation using the star-shaped knob described above. For the last two modes, the handle is pre-locked in a convenient and ergonomic configuration after positioning the end-effector in the task region, before starting the task itself. According to Matern and Waller (1999), this configuration is ideal, with the arm slightly abducted, retroverted, and rotated inwards at shoulder level, the elbow bent at about 90-120°, and the hand in a medial configuration (i.e. without any pronation, wrist flexion, or deviation). The pre-locking procedure is compatible with actual surgery, during which instrument motions remain confined to a small region of the intra-abdominal workspace for a few minutes to perform a specific action (e.g. dissection, cutting, clip placement, stitching).

Performance assessment

The performance of these four modes was assessed through an experiment on the aforementioned VR simulator. This section describes the manipulation task, the two main comparison metrics, the experimental setup and protocol, and the statistical methods used to analyze the recorded data. Since the present study derives directly from the experiment reported in Herman et al. (2011), only important information and differences with the previous study are summarized in this paper. The reader interested in further details will find them in the aforementioned reference.

Task

A single pick-and-place task (Fig. 3) was used to assess the performance of the four handle modes.
A single pick-and-place task (Fig. 3) was used to assess the performance of the four handle modes. It reproduces the gesture complexity of stitching but, contrary to the latter, remains feasible with the same degree of difficulty in the absence of haptic feedback. As explained above, the task was improved compared to the previous experiment so as to reduce the instrument motion required during a task, by placing the ring close to the pin at the beginning of the trial. In addition, the pin can be placed either vertically (normal to the bottom plane) or at 45° from the vertical, so as to require a ring reorientation during the task.

The chronometer is triggered when the subject bursts a balloon placed on top of a pin with the instrument tip. He/she must then grasp a ring placed horizontally on the bottom of the workspace, next to the pin. The grasp must be performed on a specific portion of the ring and under a certain orientation by using the intra-abdominal DOFs. This mimics the fact that a curved needle must be grasped under specific conditions to perform a stitch. Finally, the ring must be placed on the pin after a reorientation of the end effector. The chronometer is stopped when the ring is released on the pin.

Metrics

The performance of the four handle modes was assessed using the same metrics as in Herman et al. (2011): a global gesture performance score available in most commercially available laparoscopy simulators, and an instantaneous ergonomic score. The global performance score (P), initially proposed by Huang et al. (2005) and given in Eq. (1), combines the time to complete the task (TTC), the number of errors (Err) and the motion economy (ME), defined as the ratio between actual and optimal (shortest) path length. The weight for motion economy had been chosen previously in agreement with surgeons for the study reported in Herman et al. (2011). The ergonomic score is based on a real-time index inspired by the Rapid Upper Limb Assessment (RULA) form (McAtamney and Corlett, 1993) that was adapted by Person et al. (2001) to laparoscopic surgery. This instantaneous score is computed from joint angles of the major upper arm (i.e. the one that holds the dextrous instrument). It ranges over an integer scale from 1 (excellent posture) to 9 (very poor ergonomics). All fine details regarding the computation of this instantaneous score are reported and depicted in Herman et al. (2011).

Experimental setup

The experimental setup is shown in Fig. 4. The mock-up of the robotic instrument with an articulated handle and one standard laparoscopic grasper (not required to perform the task but used to make the situation more realistic) are inserted in a pelvitrainer through trocars that form a 10 cm equilateral triangle with a virtual laparoscope. The pelvitrainer and monitor heights are adjusted to respect the standard ergonomic prescriptions for laparoscopy. Positions and orientations of both instruments with respect to the pelvitrainer are measured using a Polaris Optotrak system (Northern Digital Inc., Waterloo, Canada) connected to the VR simulator. To compute the instantaneous ergonomic score described above, major upper limb segments are tracked using a Codamotion system (Charnwood Dynamics Ltd., Leicestershire, UK) consisting of a Hub computer unit and 3 Cx1 units, offering a tracking accuracy better than 0.3 mm.
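To make the metrics introduced above concrete, the following Python sketch shows how the motion economy and a combined performance score could be computed from a recorded trial. Only the path-length ratio follows directly from the definition given above; the combination rule and the motion-economy weight are illustrative assumptions, since Eq. (1) of Huang et al. (2005) and the weight agreed with the surgeons are not reproduced in the text.

```python
import numpy as np

def motion_economy(tip_positions, optimal_length):
    # ME = actual tip path length divided by the optimal (shortest) path length.
    steps = np.diff(np.asarray(tip_positions, dtype=float), axis=0)
    actual_length = np.linalg.norm(steps, axis=1).sum()
    return actual_length / optimal_length

def global_performance(ttc, errors, me, w_me=1.0):
    # Illustrative higher-is-better combination of task time (s), error count and
    # motion economy. The actual score P of Huang et al. (2005) and the weight
    # chosen with the surgeons are not given here, so this is only a placeholder.
    return 1.0 / (ttc * (1.0 + errors) * me ** w_me)

# Example trial: a slightly wandering 3-D tip trajectory (metres).
trajectory = [(0.00, 0.00, 0.00), (0.03, 0.01, 0.00), (0.06, 0.00, 0.01), (0.10, 0.00, 0.00)]
me = motion_economy(trajectory, optimal_length=0.10)
print(me, global_performance(ttc=42.0, errors=1, me=me))
```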
Protocol

Six right-handed subjects with no experience in surgery and variable experience in playing music or video games took part in the experimental campaign. At the beginning of each session with a new participant, the basic principles of laparoscopic surgery were explained. Then, the purpose of the study was detailed, along with the two metrics. The experimental setup and protocol were briefly explained during the placement of the Codamotion tracking markers on the subject. After calibration of the markers, the VR simulator and the handle were introduced via a short demonstration. The participant was then asked to perform the tasks as quickly as possible and using the shortest possible path, avoiding errors (i.e. a collision between ring and pin, or an instrument out of sight). This instruction was repeated regularly during the session.

Before starting the experiment, a learning exercise similar to the task was performed so as to become familiar with the VR simulator and the instrument. The learning curves for the global performance score and the task duration were fitted after each learning trial using Matlab. The exercise was repeated until the subject reached a performance plateau. On average, 10 repetitions were required. The learning exercise was also repeated after each handle mode change before starting the 5 recorded trials, until the subject felt comfortable with the new mode and the global performance score became stable. Three to five repetitions of the learning exercise were required, depending on the subject and the handle mode.

The task was repeated five times with each handle mode, each repetition being placed at one specific position in the virtual abdomen and under a specific angle so as to cover the entire workspace. The same order of pin placement and orientation was repeated for all handle modes and participants. The experiment lasted 75 min on average, including initial introduction, learning and experimental phases, and closing discussion.

Each participant started the experiment with handle mode 1 (fixed in a central configuration), then continued with one of the six permutations of the remaining three modes, randomly selected. Since the subjects had to become accustomed to both the instrument and the VR simulator itself during the initial learning session, we decided to always start with the most basic handle mode so as to facilitate the understanding of the VR simulator. In order to check whether this not fully randomized sequence had any influence on the results, the first two subjects repeated the experiment with the first handle mode (five repetitions of the task) at the end of their session, and their performance did not seem to be different from that at the beginning of the experiment, suggesting no important cross-learning effect between modes.

Statistical analysis

A total of 120 repetitions (6 subjects, 4 handle modes, 5 trials for each mode) were recorded during the experimental session. During 5 trials, the subject let the ring fall and had to redo the entire gesture, increasing task duration and motion economy significantly. These trials were therefore excluded from the performance analysis, although they were taken into account in the ergonomics analysis since the gesture was performed normally. Conversely, recording of the upper limb kinematics did not work during 4 trials and these were not taken into account for the ergonomics analysis, although the performance metrics were recorded properly and included in the statistical analysis.
Valid data were analyzed with JMP 10.0.2 software (SAS Institute Inc.). An ANOVA was performed on the global performance score P, the time to task completion TTC, the motion economy ME, and on the average and maximum values of the RULA-based instantaneous ergonomic score for each trial. Each model contained the following effects: handle mode, subject, and their interaction. The handle mode was defined as a fixed factor, while the subject and the two-factor interaction were defined as random. The linear model was solved using REML (Restricted Maximum Likelihood). The Tukey HSD test of multiple comparisons was used to compare modalities of significant factors.

Results

Results are depicted on Figs. 5-8. Figures 5 and 7 represent the box-and-whisker plots of the average RULA-based score and the global performance score, respectively. These graphs were derived from the valid data (see above), and indicate the degree of dispersion and skewness in the data. Lower and upper limits of the boxes represent the first and third quartiles, respectively, while the intermediate line is the median. Whiskers depict the lowest and highest values, excluding the outliers depicted as dots. These remaining outliers were not excluded from the analysis. Table 1 summarizes the least-squares means and standard errors modeled by the ANOVA for each ergonomic and performance metric.

The ANOVA performed on the average RULA-based ergonomic score turned out to be significant (p = 0.0011). The Tukey HSD test showed that the first handle mode (standard fixed handle) was significantly less ergonomic (higher score) than all other modes (p = 0.001 for mode 4, p = 0.0076 for mode 2, and p = 0.0114 for mode 3), as can be seen on the upper-right plot of Fig. 6. No difference was found between the other modes (p > 0.63). The interaction plot on Fig. 6 suggests that there might be significant differences in terms of average RULA score between two groups of subjects: subjects 2 and 5 have a better (lower) ergonomic score than all the others. However, the plot shows that there is no interaction between factors "handle mode" and "subject": the same handle mode has poorer performance for all subjects. The ANOVA on the maximum ergonomic score is also significant (p < 0.0001) and the Tukey HSD results are similar, with a significantly higher value for the first mode (p ≤ 0.0015), no significant difference between the last three modes, and no interaction.

No significant difference in global performance score was found between handle modes (p = 0.42). The same result was found for the task duration (p = 0.3) and for the motion economy (p = 0.54). Differences between subjects are minor, as can be seen on Fig. 8, and there is no interaction between factors.
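For readers who want to reproduce this kind of analysis outside JMP, the sketch below fits a comparable mixed model in Python with statsmodels (handle mode as a fixed factor, subject as a random factor, REML estimation) and runs a Tukey HSD comparison between modes. The input file, its column names, and the simplified random-effects structure (a random intercept per subject instead of the full mode x subject interaction) are assumptions of this illustration, not part of the original analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Long-format table of valid trials (assumed layout): one row per recorded trial.
df = pd.read_csv("trials.csv")  # columns: subject, mode, P, TTC, ME, rula_avg, rula_max

# Mixed model fitted by REML: handle mode fixed, subject as a random intercept.
# The paper's model also includes a random mode x subject interaction, omitted here.
model = smf.mixedlm("rula_avg ~ C(mode)", data=df, groups=df["subject"])
fit = model.fit(reml=True)
print(fit.summary())

# Tukey HSD multiple comparisons between handle modes.
print(pairwise_tukeyhsd(endog=df["rula_avg"], groups=df["mode"], alpha=0.05).summary())
```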
Interpretation

Figure 6 and the statistical analysis show, as expected, that the ergonomics is improved significantly thanks to the articulated handle. This is in accordance with the previous study (Herman et al., 2011). It confirms the benefits of such a device for laparoscopists, enabling them to operate with a more comfortable and less tiring arm posture than with a handle fixed to the shaft of a conventional or dextrous instrument. In addition, this ergonomic improvement is obtained when the handle is free (mode 2), as in the previous study, and also when it is locked in a convenient configuration (modes 3 and 4, featured by the new design). This confirms our intuition that locking the handle partly or fully does not affect the general arm posture when gestures are concentrated momentarily in a small portion of the intra-abdominal workspace. This increase in ergonomics is obtained with no decrease in average performance, as depicted in Fig. 8. This was expected for mode 4, where the handle is also fully locked during the task. However, it differs from what we found previously with a free handle (mode 2): in Herman et al. (2011), we reported that a free handle was less efficient than a fixed one. This difference between the two experimental results is due mainly to the fact that rolling the shaft is performed manually using a finger in this experiment, whereas it was motorized and controlled by the joystick previously. This might suggest that the DOF decoupling proposed in this paper, with manual control of the Roll shaft motion, is more natural and intuitive than the previous actuated version, although this needs to be confirmed experimentally.

The significant difference found between subjects stems mainly from the fact that the simulator height could only be adjusted between two positions. Depending on his/her own height, each subject chose the most comfortable adjustment. However, arm angles could differ between subjects for the same instrument tip position inside the virtual abdomen, resulting in different average RULA-based scores. In addition, although the same instructions were given repeatedly to the subjects, some tried to maintain the most ergonomic posture possible, while others tried to work as fast as they could and paid less attention to their arm posture.

Conclusions

Several surgical gestures are difficult to perform using standard laparoscopic instruments. Hand-held robotic instruments with additional end-effector DOFs might be an optimal solution, combining the dexterity enhancement offered by tele-surgery robotic systems and the cost-effectiveness of purely mechanical devices. However, despite these technological improvements, surgeons still have to take uncomfortable, or even painful, postures.

This paper introduces a novel articulated handle that releases constraints between upper limb configuration and instrument tip position and orientation inside the abdominal cavity. An experimental study performed on a custom-made VR simulator tends to demonstrate that the articulated handle helps in restoring an ergonomic arm posture, without reducing gesture performance. A fully-functional instrument prototype is currently being developed for future bench-top and in vivo validation. Ongoing work also includes the implementation of force feedback on the VR simulator, so as to assess its influence on the comparison between instruments.
In addition, although the decrease in ergonomic score offered by the articulated handle is significant, it would be interesting to know to what extent it has an effect on the surgeon's comfort over a period of time longer than the duration of the experiment. A complementary study could therefore be performed using either the VR simulator or the instrument prototype under development. During this study, subjects would repeat gestures with the handle in mode 1 or 3 for at least one hour, and several physiological parameters (e.g. cardiac rhythm, EMG in shoulder, arm and forearm muscles) that correlate with physical workload, comfort and ergonomics would be measured.

Finally, although our study focuses only on robotic instruments with intra-abdominal mobility, one can assume that its conclusions could be extended to standard laparoscopic instruments. Since not all surgical gestures require the high dexterity provided by the actuated distal DOFs, standard instruments (e.g. graspers, hooks, scissors) could easily be equipped with such an ergonomic handle and at a low additional cost.

Figure 1. Prototype of articulated handle with lockable joints.
Figure 2. CAD view of the universal joint with locking compression spring.
Figure 3. Screen capture of the VR simulator pick-and-place task.
Figure 4. Overview of the experimental setup.
Figure 5. Box-and-whisker plot of the average RULA-based score for each handle mode. Sample size for each box is reported in Table 1. Outliers are represented as dots.
Figure 7. Box-and-whisker plot of the global performance score P for each handle mode. Sample size for each box is reported in Table 1. Outliers are represented as dots.
Table 1. Least-squares means and standard errors modeled by ANOVA for each handle mode.
v3-fos-license
2016-05-04T20:20:58.661Z
2015-10-06T00:00:00.000
1742562
{ "extfieldsofstudy": [ "Materials Science", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://nanoscalereslett.springeropen.com/track/pdf/10.1186/s11671-015-1098-6", "pdf_hash": "b7409cd3c8fdc3d32aa44718ee0397fc227be7fe", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42207", "s2fieldsofstudy": [ "Materials Science", "Physics" ], "sha1": "b7409cd3c8fdc3d32aa44718ee0397fc227be7fe", "year": 2015 }
pes2o/s2orc
Electromagnetic Enhancement of Graphene Raman Spectroscopy by Ordered and Size-Tunable Au Nanostructures

The size-controllable and ordered Au nanostructures were achieved by applying the self-assembled monolayer of polystyrene microspheres. Few-layer graphene was transferred directly on top of the Au nanostructures, and the coupling between graphene and the localized surface plasmons (LSPs) of Au was investigated. We found that the LSP resonance spectra of ordered Au exhibited a redshift of ~20 nm and a simultaneous broadening in the presence of graphene. Meanwhile, the surface-enhanced Raman spectroscopy (SERS) of graphene was distinctly observed; both the graphene G and 2D peaks were enhanced by the local electric fields of the plasmonic Au nanostructures, and the enhancement factor increased with the particle size, which can be ascribed to the plasmonic coupling between the ordered Au LSPs and graphene.

Background

Graphene is the first two-dimensional carbon atomic crystal, which is constructed by several layers of honeycomb-arrayed carbon atoms. This promising material is highly attractive for the fabrication of high-frequency nanoelectronic and optoelectronic devices due to its exceptional optical and electrical properties, such as extreme mechanical strength, ultrahigh electrical carrier mobility, and very high light transmittance [1-3]. Unfortunately, graphene of only one-atomic-layer thickness exhibits low light absorption (only ~2.3% for a single layer) originating from the weak light-graphene interaction, which is unfavorable for high-performance graphene-based optoelectronic devices. Several approaches have been proposed to enhance the absorption of graphene, including the use of one-dimensional photonic crystals and localized surface plasmons (LSPs) [4,5]. LSPs in conventional systems are the collective oscillations of conduction electrons in metal nanoparticles when illuminated and excited by light of appropriate wavelength, and the resonance excitation of the LSPs induces a large enhancement and confinement of the local electric field in the vicinity of the metal nanostructures. Generation of LSPs stimulates a wide range of applications such as ultratrace biochemical sensing, enhanced absorption in photovoltaic cells, surface plasmon-enhanced fluorescence, and Raman scattering. From a spectroscopic point of view, surface-enhanced Raman spectroscopy (SERS) has become a particularly prominent application of plasmonics, especially for the graphene-LSP hybrid system. On the other hand, the two-dimensional nature of graphene and its well-known Raman spectrum make it a favorable test bed for investigating the mechanisms of SERS, and various nanoparticle geometries have proven to deliver a considerable Raman enhancement in the case of graphene [6-11]. A Raman enhancement of 10³ times had been detected for graphene from the dimer cavity between two closely packed Au nanodisks. However, fabrication and spacing control of the Au nanodisks are very complex and costly [6]. Sun et al. deposited Ag on the surface of a graphene film, and distinct Raman enhancement had been achieved. However, the quality of graphene significantly deteriorated after deposition of metal nanoparticles, which limits the further application of graphene [8]. Besides enhancing the intensity of Raman scattering of graphene by LSPs, graphene had also been adopted to tune the surface plasmon resonance wavelength of metal nanostructures.
For instance, the plasmonic behavior of Au nanoparticles can be tuned by varying the thickness of the Al₂O₃ spacer layer inserted between the graphene and the nanoparticles [12]. Nevertheless, the Au nanoparticles are randomly distributed on the surface of the Al₂O₃ layer, which is unfavorable for the precise controllability and investigation of the inter-coupling of the graphene-metal hybrid system. Obviously, combining the enhanced near-fields of ordered plasmonic nanostructures with the unusual optoelectronic properties of graphene will provide a more promising route towards novel graphene-based optoelectronic devices, and thus research on the plasmonic coupling between graphene and ordered plasmonic nanoparticles is highly desirable.

In this paper, ordered and size-controlled Au nanostructures were fabricated using the inverted self-assembled monolayer template of polystyrene microspheres. A chemical vapor deposition (CVD) graphene was transferred directly on top of the Au nanostructures, and the interaction between graphene and the LSPs of Au has been systematically investigated. We found that the SERS of graphene was apparently observed and that the Raman intensities of both the graphene G and 2D peaks increased with the size of Au, which was induced by the local electric field of the plasmonic Au nanostructures. On the other hand, the absorption spectra of the Au nanostructures exhibited a redshift of ~20 nm and a slight broadening in the presence of graphene, which was due to the inter-coupling between Au LSPs and graphene.

Sample Preparation

Colloidal microspheres of polystyrene (PS) (2.5 wt.%) with a diameter of 500 nm were purchased from Alfa Aesar and self-assembled to form a hexagonal close-packed (hcp) monolayer on the SiO₂/Si or quartz substrates. Prior to the hcp alignment of the PS microspheres, the target substrates were first immersed in piranha solution (98% H₂SO₄ : 37% H₂O₂ = 7:3) for 3 h to achieve a completely hydrophilic surface, which contributed to the adhesion of the PS microspheres to the substrate surface. Two parallel hydrophilic Si wafers with a distance of ~100 μm were mounted on the dip coater, and two or three drops of the PS sphere suspension were dropped into the gap of the Si wafers. With one Si substrate fixed, the other parallel Si was lifted with a constant speed of approximately 500 μm/s. The monolayer of PS microspheres was ultimately formed on the hydrophilic surface of the Si. Then, the PS monolayer was used as a template for the deposition of the Au film. The Au films were deposited on the PS template using an ion beam-assisted deposition (IBAD) system with a Kaufmann ion source. The size and shape of the Au nanostructures were adjusted by varying the nominal thicknesses of the initial Au films, ranging from 15 to 40 nm. After ultrasonic washing in acetone for 30 min, the PS microspheres together with the upper Au on them were completely removed and the ordered and size-tunable bottom Au nanostructures were formed. The few-layer graphene (FLG) films were synthesized on Cu foils by low-pressure CVD in a tubular quartz reactor, using methane as the carbon source under H₂ and Ar atmosphere at 1000°C. Then, they were transferred to cover the Au nanostructures after the Cu foils were removed by wet chemical etching. The schematic illustration of the fabrication processes for the ordered Au nanostructures with graphene coverage is shown in Fig. 1.
Characterization

Surface morphologies of the ordered Au nanostructures with various shapes and sizes were characterized by atomic force microscopy (AFM, NT-MDT Solver P47). The Au nanostructures with graphene coverage were investigated by field-emission scanning electron microscopy (SEM, Hitachi FE-S4800). Raman spectra of graphene were recorded with a Horiba LabRAM HR800 spectrometer using the 514-nm excitation line from an Ar ion laser. For both SEM and Raman measurements, SiO₂/Si substrates were adopted. The ultraviolet (UV)-visible absorption spectra of the ordered Au nanostructures were measured as a function of the incident wavelength using a Varian Cary 5000 spectrophotometer in a double-beam mode. The quartz substrate was adopted for the UV-vis absorption measurements.

Results and Discussion

Figure 2 shows the AFM images of the Au nanostructures with different sizes and shapes after removing the PS microspheres. Obviously, the long-range hexagonal order inherited from the original PS template is conserved. When the initial thickness of the Au film is 15 nm, the shape of the Au nanostructures is sphere-like nanoparticles, with an average width of ~100 ± 4.5 nm. Another point we have to notice from the SEM images is that the shape of the Au nanostructures gradually changed from more sphere-like nanodots to sharp triangles with the increase of the initial thickness of the Au film from 15 to 20 nm. The effect of the shape changes was also quantified by evaluating the circularities of the individual nanostructures, defined as the ratio of the square of the perimeter to 4πA, where A is the area of the particular nanostructure. The circularity should be 1.654 for a regular triangle and should approach 1 for a perfect circle [13]. The resultant values of the Au nanostructures with and without graphene coverage are listed in Table 1. From the table, we can distinctly see that with the deposition time increasing from 10 to 30 min (corresponding to initial thicknesses from 15 to 40 nm), the circularity of the Au nanostructures first increases and then decreases, and the average width changes from 100 ± 4.5 to 140 ± 7.8 nm. As the diameter of the PS spheres is 500 nm, the gap (inscribed circle) among the PS spheres is approximately 77 nm. When the initial thickness of the Au film is only 15 nm, the Au film cannot cover the whole gap surface. Due to the different thermal expansion coefficients between the Au films and the substrate, when the initial thickness of the Au film is ~15 nm, the compressive stress induced by the Ostwald ripening mechanism would cause the Au films to form isolated nanoparticles. With the increase of the initial thickness of the Au film to 20 nm, the whole gap among the PS nanospheres can be approximately filled, leading to a shape transformation of the Au nanostructures from nanodots to triangles. When the initial thickness of Au was further increased to 40 nm, the whole gap of the PS spheres can be fully filled and the shape of the Au nanostructures changes to nanospheres due to the larger thickness of Au. The typical SEM images of 15- and 20-nm Au nanostructures covered with graphene are presented in Fig. 3. It is clear that the continuous graphene film has been successfully transferred onto the surface of the Au nanostructures, and the electron beam can easily penetrate through the atomically thin graphene to display the underlying Au nanostructures. The ridges and cracks formed on the graphene surface during the wet transfer processes can also be distinctly observed.
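As a quick sanity check of the circularity metric quoted above (the square of the perimeter divided by 4πA), the short Python snippet below evaluates it for a perfect circle and for an equilateral triangle and reproduces the reference values of 1 and about 1.654; the particular dimensions used are arbitrary.

```python
import math

def circularity(perimeter, area):
    # Circularity = perimeter^2 / (4 * pi * area); 1 for a circle, larger otherwise.
    return perimeter ** 2 / (4.0 * math.pi * area)

# Perfect circle of radius r: perimeter 2*pi*r, area pi*r^2  ->  1.0
r = 50.0  # nm, arbitrary
print(circularity(2.0 * math.pi * r, math.pi * r ** 2))

# Equilateral triangle of side s: perimeter 3*s, area sqrt(3)/4 * s^2  ->  ~1.654
s = 100.0  # nm, arbitrary
print(circularity(3.0 * s, math.sqrt(3) / 4.0 * s ** 2))
```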
To investigate the effects of graphene on the plasmonic Au nanostructures, the LSP properties of Au nanostructures with and without graphene coating were characterized by a UV-visible absorption spectrophotometer. As shown in Fig. 4, the absorption spectra of the Au nanostructures on quartz substrates exhibit a wide plasmonic resonance peak varying from sample to sample, indicating that the LSP resonance of Au can be tuned by adjusting the size and shape of the Au nanostructures. The absorption intensity after graphene coverage increases slightly, which can be ascribed to the absorption of the graphene film. Meanwhile, an obvious redshift and broadening of the Au LSP peak can also be clearly observed for all samples with graphene coating. Resonance positions of the Au LSP before and after graphene coverage are also summarized in Table 1, and we can clearly see that the Au LSP resonances exhibit an ~20-nm redshift. On the other hand, the full width at half maximum (FWHM) of the Au nanostructures with graphene also shows a distinct broadening compared with their counterparts without graphene. As the sizes of the nanostructures are smaller than the wavelength of the incident light, the quasistatic model can be used to describe the position of the resonance peak. When irradiated by light, the conduction electrons will move and resonate with a specific frequency, which is referred to as the plasmon frequency of the particle dipole. According to the quasistatic analysis, the absorption peak corresponds to the dipole surface plasmon resonance for the ordered Au nanostructures on the quartz substrates [14]. When the particle size increases, the conduction electrons cannot all move in phase anymore, that is to say, quadrupole resonances may occur in addition to the dipole ones. The optical spectra of Au nanoprisms display in-plane dipole and quadrupole resonances, and the interaction of the dipole and quadrupole leads to a reduction of the depolarization field, which is the origin of the redshift of the Au LSP resonances [14].

The above results also indicate that the Au LSPs are strongly affected by the presence of graphene, which can be attributed to the coupling between the graphene film and the localized electromagnetic field of the Au nanostructures. As the incident light is perpendicular to the surface of the Au nanostructures, the incident electric field is parallel to the sample surface and has no vertical component, and only the lateral electron oscillations within the Au nanostructures can be induced. When the LSPs of the Au nanostructures with graphene coating are excited, image dipoles or quadrupoles, which are antiparallel to the dipoles or quadrupoles in Au, will be formed within the graphene sheet [5,12]. The presence of the antiparallel image dipoles and quadrupoles can reduce the internal electric field in the Au nanostructures, which results in the redshift and broadening of the LSP resonance peaks for the Au nanostructures with graphene. A theoretical calculation based on the dipole approximation (results not shown here) has been conducted to understand the redshift of the Au LSPs after graphene coverage. In the calculation, an Au nanosphere is utilized as a representative of the Au nanostructures and is placed above a transparent substrate. The thickness of the substrate is assumed to be semi-infinite, and absorption of the substrate is completely omitted.
The dielectric constant of graphene is based on the assumption that the optical response of every graphene layer is given by its optical sheet conductivity [5,9], and the dielectric constant of Au can be found in the literature [15]. The graphene sheet covers the surface of the Au nanosphere, which is treated as a single dipole. Considering the presence of the antiparallel image dipole in graphene, the polarizability α of the Au nanosphere can be written in the image-dipole-corrected form given in [9,16], in which β is 1 for the lateral electric field and d is the distance between the center of the Au nanosphere and the graphene sheet. ε₁, ε₂, and ε₃ are the dielectric constants of Au, the ambient, and graphene, respectively [16]. The absorption efficiency of Au, Q_abs, can be expressed as Q_abs = [k/(πa²)] Im(α), where k is the wavenumber of the incident light and a is the sphere radius. The calculated results of Q_abs for a single 30-nm Au nanosphere with and without graphene coating show that the absorption peak of the Au nanosphere exhibits a slight redshift (about 20 nm) after graphene coating, which is in good agreement with the experimental data. The discrepancy between the calculated and experimental absorption results can be ascribed to the simplified assumptions made during the calculation, such as the semi-infinite substrate, the dipole approximation, the homogeneous particle size, and the crystal perfection of the Au nanosphere [5,10].

To shed light on the effects of the plasmonic ordered Au nanostructures on the Raman properties of graphene, the Stokes Raman spectra of pristine graphene and of graphene transferred onto differently shaped and sized Au nanostructures were measured, and the corresponding results are shown in Fig. 5. The Raman spectrum of the pristine graphene reveals the well-known D peak (1353 cm⁻¹), G peak (1590 cm⁻¹), and 2D peak (2690 cm⁻¹). The G peak originates from a first-order Raman scattering process, while the 2D peak is due to a double-resonance intervalley Raman scattering process [17,18]. The nearly negligible intensity of the D peak indicates few structural and crystalline defects in the CVD graphene. The Raman peaks of graphene on top of the 100-nm Au nanostructures exhibit almost the same amplitude as their counterparts on the SiO₂/Si substrate. It is apparently revealed that the intensities of the graphene G and 2D peaks are significantly enhanced with increasing size of the ordered Au nanostructures from 100 to 140 nm. With increasing size of the ordered Au nanostructures to 140 nm, the Raman peak intensities of graphene on top of Au display an enhancement factor of ~3-fold for the G peak and ~2.5-fold for the 2D peak. Therefore, the SERS of graphene caused by the ordered Au nanostructures was distinctly observed, and the underlying mechanism will be discussed in detail later. Generally, both the LSP and charge transfer can play a role in the SERS of graphene modes, since the SERS electromagnetic enhancement results from the Raman excitation and its coupling with the LSP of the nanostructures, while the metal-induced charge transfer can lead to the chemical resonance enhancement [19]. In our case, the enhanced Raman intensity of the graphene/Au nanostructures is mainly attributed to the electromagnetic field enhancement induced by the plasmonic resonance of the Au LSP. As illustrated in Fig. 5, the enhancement factor of both the G and 2D Raman peaks increases with the size of the Au nanostructures, which is an indication of the electromagnetic mechanism.
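Returning briefly to the dipole calculation above, the Python sketch below evaluates the bare quasi-static absorption efficiency Q_abs = [k/(πa²)] Im(α) for a small sphere, using the standard quasi-static polarizability. The Drude-type dielectric function is only an illustrative placeholder (not the tabulated Au data of [15]), and the graphene image-dipole correction of [9,16] is not included, so the numbers are purely illustrative.

```python
import numpy as np

def drude_epsilon(wavelength_nm, eps_inf=9.5, omega_p=9.0, gamma=0.07):
    # Rough Drude-type dielectric function with illustrative parameters (energies in eV).
    omega = 1240.0 / wavelength_nm            # photon energy in eV
    return eps_inf - omega_p ** 2 / (omega ** 2 + 1j * gamma * omega)

def q_abs(wavelength_nm, radius_nm, eps_medium=1.0):
    # Quasi-static dipole absorption efficiency: Q_abs = k * Im(alpha) / (pi * a^2),
    # with alpha = 4*pi*a^3 * (eps - eps_m) / (eps + 2*eps_m) and k = 2*pi*sqrt(eps_m)/lambda.
    eps = drude_epsilon(wavelength_nm)
    alpha = 4.0 * np.pi * radius_nm ** 3 * (eps - eps_medium) / (eps + 2.0 * eps_medium)
    k = 2.0 * np.pi * np.sqrt(eps_medium) / wavelength_nm
    return k * np.imag(alpha) / (np.pi * radius_nm ** 2)

# Sweep the visible range for a 30-nm-diameter sphere (radius 15 nm).
for wl in np.linspace(400.0, 700.0, 7):
    print(f"{wl:.0f} nm   Q_abs = {q_abs(wl, radius_nm=15.0):.3f}")
```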
The rather qualitative image presented in Fig. 5 coincides well with a theoretical model that has been proposed for the Raman enhancement in the graphene-nanoparticle hybrid [10,20]. According to this model, the Raman enhancement due to a stand-alone nanoparticle is given by

ΔI_SERS / I_0 = σ Q(ω)^2 Q(ω_s)^2 (α/h)^10,   (2)

where ΔI_SERS is the increase in Raman intensity with respect to its original intensity I_0, σ is the relative cross-sectional area of the nanoparticle, Q(ω) is the plasmonic enhancement from Mie theory with ω_s representing the Stokes Raman frequency, α is the particle radius, and h represents the separation between the particle center and the surface of the graphene sheet. From Eq. (2), we can easily deduce that the Raman enhancement scales with the cross section of the metallic nanostructures, with the fourth power of the Mie enhancement, and inversely with the tenth power of the distance between the graphene and the nanoparticle center. Therefore, the Raman scattering enhancement with increasing particle size (as shown in Fig. 5) can be mainly ascribed to the plasmonic absorption profile Q(ω) of the nanoparticle. The shapes of the particles vary with the different initial thicknesses of the Au film, although the majority of the Au nanostructures can be treated as spheres or triangular prisms. The absorption spectra have shown that the plasmonic resonance positions for the ordered Au nanostructures with sizes ranging from 100 to 140 nm lie around the excitation laser wavelength (514 nm, dotted line in Fig. 4); therefore, the incident laser beam excites the Au LSPs, which form a strong localized electromagnetic field around the nanostructures. As the graphene sheet is in close vicinity of the Au nanostructures, the electric field will penetrate into the graphene sheet, and an enhanced electromagnetic field will be formed on the graphene surface, although the antiparallel image dipole will form and reduce the local electric field around the Au nanostructures [12]. Ultimately, the Q(ω) in Eq. (2) will be significantly improved, and the SERS signal is greatly enhanced. Moreover, the enhancement factor of the G and 2D peaks gradually increases with the size of the Au nanostructures, as shown in Fig. 5, which can be interpreted as follows. The corresponding wavelengths of the G and 2D peaks of graphene are consistent with those of the Au LSP resonances (see Fig. 4); therefore, the enhanced local electromagnetic field induced by the Au LSPs contributes to the improved Raman signal. On the other hand, with increasing average radius of the ordered Au nanostructures, ΔI_SERS will be distinctly increased, which can be simply estimated from Eq. (2).

Now we turn towards the nature of the chemical interaction between the ordered Au nanostructures and graphene, which is another physical mechanism for the SERS, especially for the graphene-nanoparticle hybrid system [3,19]. From the Raman spectrum, we find that the G peak position spectrally shifts from 1350 to 1360 cm⁻¹ for graphene that was transferred on the surface of the ordered Au nanostructures, while the 2D peak position remains almost constant. It has been reported that the G peak of graphene is blueshifted for both electron and hole doping, while the 2D peak is redshifted for electron doping and blueshifted for hole doping [18,21]. In this case, graphene is in direct contact with the ordered Au nanostructures, and the work function of Au (5.0 eV) is nearly the same as that of graphene (4.8 eV).
Considering that the Au nanostructures provide a large reservoir of electrons, electron transfer from Au to graphene will occur, leading to electron doping of graphene; thus, the G peak is blueshifted and the 2D peak is slightly redshifted. On the other hand, we deduce that graphene is under compressive strain when it is directly transferred on top of Au, as some ridges appear on the surface of graphene (clearly visible in Fig. 3); hence, both the G and 2D peaks exhibit a blueshift trend [21]. As a result, both strain and doping effects lead to a slight blueshift of the G peak and a negligible shift of the 2D peak position. However, we consider that charge transfer is not the dominant mechanism for the enhanced Raman intensity, since the Raman intensity increases with the size of the Au nanostructures and the charge transfer effect should be independent of the Au size.

Conclusions

In summary, the coupling between graphene and the LSPs of ordered and size-controllable Au nanostructures has been investigated systematically by directly transferring graphene onto the surface of the Au nanostructures. The absorption spectra of Au exhibit a redshift of ~20 nm after graphene coverage, which can be ascribed to the plasmonic coupling between the Au LSPs and graphene. On the other hand, the graphene SERS is distinctly observed, and the intensities of the G and 2D peaks increase with increasing size of the ordered Au nanostructures. The electromagnetic plasmonic effect, rather than the charge transfer mechanism, is considered to be dominant for the SERS effect of graphene. We believe the results are beneficial not only for further understanding the coupling mechanism between graphene and ordered metallic nanostructures, but also for developing plasmonic graphene-based optoelectronic devices.
v3-fos-license
2016-05-04T20:20:58.661Z
2016-03-01T00:00:00.000
1223417
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CC0", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosntds/article/file?id=10.1371/journal.pntd.0004486&type=printable", "pdf_hash": "8c8db64ab31ded5f45154a81f1d0a1ba09178b7d", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42209", "s2fieldsofstudy": [ "Economics", "Environmental Science", "Medicine" ], "sha1": "8c8db64ab31ded5f45154a81f1d0a1ba09178b7d", "year": 2016 }
pes2o/s2orc
Willingness to Pay for Dog Rabies Vaccine and Registration in Ilocos Norte, Philippines (2012)

Background

The Philippines is one of the developing countries highly affected by rabies. Dog vaccination campaigns implemented through collaborative effort between the government and NGOs have played an important role in successfully reducing the burden of disease within the country. Nevertheless, rabies vaccination of the domestic animal population requires continuous commitment not only from governments and NGOs, but also from local communities that are directly affected by such efforts. To create such long-term sustained programs, the introduction of affordable dog vaccination and registration fees is essential and has been shown to be an important strategy in Bohol, Philippines. The aim of this study, therefore, was to estimate the average amount of money that individuals were willing to pay for dog vaccination and registration in Ilocos Norte, Philippines. This study also investigated some of the determinants of individuals' willingness to pay (WTP).

Methods

A cross-sectional questionnaire was administered to 300 households in 17 municipalities (out of a total of 21) selected through a multi-stage cluster survey technique. At the time of the survey, Ilocos Norte had a population of approximately 568,017 and was predominantly rural. The Contingent Valuation Method was used to elicit WTP for dog rabies vaccination and registration. A 'bidding game' elicitation strategy that aims to find the maximum amount of money individuals were willing to pay was also employed. Data were collected using paper-based questionnaires. Linear regression was used to examine factors influencing participants' WTP for dog rabies vaccination and registration.

Key Results

On average, Ilocos Norte residents were willing to pay 69.65 Philippine Pesos (PHP) (equivalent to 1.67 USD in 2012) for dog vaccination and 29.13 PHP (0.70 USD) for dog registration. Eighty-six per cent of respondents were willing to pay the stated amount to vaccinate each of their dogs, annually. This study also found that WTP was influenced by demographic and knowledge factors. Among these, we found that age, income, participants' willingness to commit to pay each year, municipality of residency, knowledge of the signs of rabies in dogs, and number of dogs owned significantly predicted WTP.

The Philippines is one of the developing countries highly affected by rabies where, annually, an estimated 200-300 human deaths are attributed to rabies [7]. Rabies prevention and control policies and dog vaccination campaigns are currently the cornerstone of the rabies elimination strategy in this country. The "Anti-rabies Act of 2007", with the objective of eliminating rabies throughout the Philippines by 2020, was enacted by the government of the Philippines in 2007 [8]. Dog vaccination campaigns have so far been shown to be an effective rabies prevention and control strategy in reducing the burden of disease in parts of the country [9]. In particular, the Bohol Rabies Prevention and Elimination Program (BRPEP), with the support of the local government and international Non-Government Organizations (NGOs), was able to considerably reduce human rabies in the province of Bohol. This was made possible through effectively utilizing social awareness campaigns, dog population control measures, dog registration and mass dog vaccination campaigns, in addition to improved dog bite management and veterinary quarantine services [10].
Similar models of rabies elimination have also been initiated in other provinces of the Philippines. The Communities Against Rabies Exposure (CARE) Project, with the aim of creating another rabies-free zone using rabies elimination strategies similar to those implemented in Bohol, was launched in the Sorsogon Peninsula and Ilocos Norte in 2012 [11]. In 2014, with further support from the local government and international NGOs, this program had successfully incorporated rabies prevention messages into the elementary school curriculum and vaccinated 35 per cent of the estimated 76,000 dogs in the province of Ilocos Norte [12,13]. Through this program, the province of Ilocos Norte has been rabies-free since 2013 [13]. Nevertheless, long-term rabies elimination from an area requires recurrent implementation of mass dog vaccination campaigns that can help maintain herd immunity in a given population. To achieve this, a multi-year commitment is required not only from governments and NGOs, but also from the local communities that directly benefit from such efforts. The introduction of affordable dog vaccination and registration fees to the public is therefore essential and has been shown to be an important strategy in Bohol [10].

Study site and population

A cross-sectional study was conducted in Ilocos Norte, the northernmost province on the western side of Luzon, Philippines (Fig 1). Ilocos Norte had an estimated population of 568,017 in 2010 and is predominantly rural (2010 census data) [14]. The annual average family income is 204,000 Philippine Pesos (PHP) (4,334 USD) and the annual average family expenditure is 159,000 PHP (3,378 USD) (2012 census data) [15]. The survey was conducted over a period of two weeks in August 2012, during the peak of the rainy season.

To identify participants, a combination of random sampling, cluster sampling, and convenience sampling methods was employed in three sequential steps. Cluster sampling with probability proportionate to size (PPS) was used to identify villages (locally known as barangays). A random sampling method was used to identify roads (locally known as puroks) and the first participating household in the cluster. Convenience sampling was subsequently used to identify the remaining nine households as well as actual participants within households. This methodology was employed because complete randomization of households was not feasible due to the vastness of the study area and population. The identification of the villages was carried out by creating a cumulative list of community populations and selecting a systematic sample from a random start. A list of all the villages in Ilocos Norte and their corresponding population sizes was first obtained from census data and the villages were then arranged in alphabetical order. A sampling interval (SI) was calculated by taking the ratio of the cumulative population of Ilocos Norte to the total number of clusters (thirty). A random start number was then generated to obtain the first participating village; this was the village whose cumulative population was closest to but not greater than the random number. Subsequently, the SI was added to the previously identified random number in order to obtain the second village. The third village was again identified by adding the SI to the preceding random number. This process was repeated until the 30th village (cluster #30) was identified.
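The village-selection step just described is a standard systematic PPS draw (cumulative population list, sampling interval, random start). The Python sketch below illustrates the procedure on a toy list of villages; the names and population figures are made up for the example.

```python
import random

def systematic_pps(villages, n_clusters, seed=None):
    # villages: list of (name, population) tuples, e.g. arranged alphabetically.
    rng = random.Random(seed)
    total = sum(pop for _, pop in villages)
    interval = total / n_clusters                 # sampling interval (SI)
    start = rng.uniform(0, interval)              # random start in [0, SI)
    targets = [start + i * interval for i in range(n_clusters)]

    selected, cumulative, idx = [], 0, 0
    for name, pop in villages:
        cumulative += pop
        # pick this village once for every target falling inside its cumulative band
        while idx < n_clusters and targets[idx] <= cumulative:
            selected.append(name)
            idx += 1
    return selected

# Toy example with hypothetical barangays and populations:
toy_villages = [("Barangay A", 1200), ("Barangay B", 450), ("Barangay C", 3100),
                ("Barangay D", 800), ("Barangay E", 2600)]
print(systematic_pps(toy_villages, n_clusters=3, seed=42))
```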
Through this process, a total of 30 villages in 17 municipalities (out of a total of 21 municipalities) were identified and included. A road in a particular village was randomly selected based on a list obtained from the respective village officials. The same list also included households within the selected roads and served as the sampling frame for randomly selecting the first participating household, around which a cluster of households was further identified. A total of 10 households in each road were included in the survey. Participants were then selected by convenience sampling according to the inclusion criteria (respondents had to be 18 years of age or older). Only the head of the household or the next representative household member was interviewed. These participants were purposively selected to be interviewed because it was believed that they play an important role in the decision-making process of the household.

Sample size determination

To determine the appropriate sample size for this survey, an estimated population proportion of 20%, a ±5% margin of error, and a 95% confidence coefficient were used. In addition, a cluster size of 10 with a rate of homogeneity of 0.2 was employed. Based on a priori evidence, we anticipated a design effect of 1.18 [9].

Human research considerations

Human subjects' clearance for this study was obtained from the Centers for Disease Control and Prevention (CDC) Human Research Protection Office in Atlanta, United States, under CDC protocol #6337, as well as the Mariano Marcos Memorial Hospital and Medical Center Ethics Review Committee in Batac City, Ilocos Norte, under the protocol number 2012-07-014, and was determined exempt from full Institutional Review Board (IRB) review. Written informed consent was obtained from all participants prior to commencement of the study. If the participant was unable to read and write, the consent form was read to the participant and a thumbprint was obtained in place of a signature. The age of consent in the Philippines is 18 years. Therefore, only participants 18 years old and over were allowed to participate in this study.

Survey instruments

A paper-based questionnaire was administered in Ilocano, the local language of the majority of Ilocos Norte's population (2002 census data). Interviews were conducted by representatives from CDC Atlanta, in collaboration with the Provincial Veterinary Office in Ilocos Norte. A total of 23 interviewers participated in conducting the survey. Interviewers were pre-trained on the survey methodology used, and the questionnaire was pre-tested in the field in order to evaluate its workability; appropriate modifications were made. Responses obtained from the interview were subsequently translated into English for analysis.

The questionnaire covered four major categories relevant to this analysis. The first category consisted of questions regarding household dog(s). The second category of questions, the WTP section, included an introductory statement explaining the purpose and importance of dog vaccination and registration campaigns and collected information on WTP for dog rabies vaccination and registration accordingly. In this section, the contingent valuation method (CVM) was used to elicit WTP for dog rabies vaccination and registration. In the context of health care, the CVM is a survey-based, hypothetical and direct approach to eliciting the monetary value attached to improvements in goods and services.
Contingent valuation questions are used to estimate the willingness-to-pay distribution of consumers towards specific goods/services. It is a stated preference model that can measure the value consumers place on certain aspects of health care services [16,17]. In this survey, a particular elicitation method, a bidding game [18] with a series of yes/no questions that aims to find the maximum WTP, was employed. This method was chosen because it was expected to have criterion validity (which here refers to whether the instrument of measurement adequately represents the object of measurement) in the setting of Ilocos Norte, where there was an established culture of price negotiation for most goods [19]. Furthermore, empirical studies have found this method to be very reliable [20,21]. During the bidding game, the participants were first offered an initial bid price. If the respondent accepted the initial price, a series of higher prices were offered until the respondent rejected the price. Alternatively, if the respondent refused the initially offered price, then the prices were repeatedly decreased until the respondent accepted or the bid reached zero (Fig 2). For a more accurate estimation of the maximum WTP, the bid was presented in Philippine Pesos (PHP). During the bidding process, a uniform distribution of 12 bid levels was presented for the WTP for vaccination section, and 10 bid levels were presented for the WTP for registration section. Each increasing and decreasing level had a difference of 20 PHP (~0.50 US dollars) [22]. To minimize potential starting-point bias (when the offered start bid influences the direction of the WTP), the initial bidding value offered was selected randomly using random number generation at the interview site. The interviewers were pre-trained in applying the randomization processes as well as in conducting the bidding game.

The third and fourth categories comprised sets of questions that aimed to explore determinants of WTP. Specifically, these sections looked at the demographic characteristics of survey respondents (gender, age, employment status, household income, and dog ownership status) and explored participants' awareness of rabies transmission, exposure, and outcome.

Data analysis

Paper surveys were digitized and a database was built in Microsoft Access. Accuracy of the data was subsequently checked to minimize data entry errors. Descriptive statistics of the demographics of the study population were calculated. Linear regression was used to examine factors influencing participants' WTP for dog rabies vaccination and registration. Linear regression diagnostics were first performed to check how well the data met the assumptions of linear regression. Normality was assessed graphically using residuals-versus-fitted (predicted) plots. The Cook-Weisberg test was used to detect heteroscedasticity [23,24]. Since the data violated the linear regression assumptions of normality and homoscedasticity, power transformation and robust standard errors, respectively, were used in an attempt to satisfy these assumptions. The appropriate power transformation of the response variable was determined using Tukey's ladder of powers [25]. Accordingly, a power transformation of 0.5 was used to bring the residuals of the response variable closer to normality.
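A minimal Python sketch of this transformation-and-fitting workflow is given below: the WTP response is square-root-transformed (the power of 0.5 from Tukey's ladder), a linear model is fitted with standard errors clustered on the sampling cluster, and the mean of the transformed response is back-transformed to the median scale as described in the following paragraphs. The file name and column names are assumptions of the illustration, and the confidence interval shown ignores the clustering adjustment for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("wtp_survey.csv")   # assumed columns: wtp_vacc, age, income_group, cluster_id
df["sqrt_wtp"] = np.sqrt(df["wtp_vacc"])            # Tukey ladder power of 0.5

# Linear model on the transformed response with cluster-robust standard errors.
model = smf.ols("sqrt_wtp ~ age + C(income_group)", data=df)
fit = model.fit(cov_type="cluster", cov_kwds={"groups": df["cluster_id"]})
print(fit.summary())

# Back-transformation: squaring the mean of sqrt(WTP) (and its CI) estimates the median WTP.
mean_sqrt = df["sqrt_wtp"].mean()
se_sqrt = df["sqrt_wtp"].sem()
ci = (mean_sqrt - 1.96 * se_sqrt, mean_sqrt + 1.96 * se_sqrt)
print("median WTP ~", mean_sqrt ** 2, "95% CI ~", (ci[0] ** 2, ci[1] ** 2))
```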
Kernel density plots were then generated for each continuous predictor variable to assess whether the predictor satisfied the linearity assumption [26]. Predictor variables that violated this assumption were further transformed using the appropriate power transformation. The analysis also accounted for the sampling methodology used (cluster sampling), and standard errors were adjusted during the model building. Accordingly, cluster-robust standard errors that take the clustering effect into consideration were used during the analysis. All predictor variables that remained significantly associated (P ≤ 0.1) in the univariate model were retained (the variables tested are presented in Tables 4 and 5). Multicollinearity among predictor variables was assessed using the Variance Inflation Factor (VIF). As a rule of thumb, a tolerance value (1/VIF) lower than 0.1 (or a VIF greater than 10) was used as a cutoff point to check whether some level of multicollinearity existed. All of the VIF scores were less than 1.5 and therefore no signs of multicollinearity were observed between predictor variables. The predictor variables were then fitted into a full model and further reduced through backward selection at a 5% significance level to obtain the final model. Coefficients and the direction of the linear association between variables were determined from the final model and back-transformed to the original scale. Mean and range values of WTP for vaccination and registration were calculated using the transformed data. The sample mean of the transformed data, together with its 95% confidence interval, was then back-transformed to obtain the population median and its corresponding 95% confidence interval on the original scale [27]. Statistical analyses were performed using STATA version 13 (Statacorp, Texas, USA).

Results

Demographics

A total of 300 respondents were included in the study (Table 1). The majority of the respondents were female (65%) and the mean age was 48 (SD 16). Eighty-six percent of the respondents reported an annual income of less than 120,000 PHP (2,876 USD). In addition, the majority of respondents owned dogs (64%). The average number of dogs owned by a household was 1.3 (SD 1.6) and the dog-to-human ratio was 1:3.5. Therefore, the owned dog population of Ilocos Norte is estimated to be about 162,290 using the predicted dog-to-human ratio. Forty-three percent of the dog owners stated that they had vaccinated at least one of their dogs once within the past two years. Of these, 8% stated that they had previously paid for their dog(s) to be vaccinated.

With regard to rabies awareness, almost all (99%) of the respondents had heard of rabies and the majority (69%) knew how to recognize the clinical signs of rabies in dogs. The signs and outcomes of rabies in humans were adequately identified by 50% and 63% of the respondents, respectively (Table 2). These results varied by gender: females were significantly more aware of the signs and outcome of rabies in humans than males. However, only 46% of respondents knew how rabies was transmitted to humans. There were no significant discrepancies in awareness of rabies between those who owned dog(s) and those who did not (Table 2).

Willingness to pay for dog vaccination and registration

Bidding games were completed by almost all (298 of the 300) respondents to elicit their maximum WTP for dog vaccination and registration (Table 3). Two people withdrew from the interview due to the inconvenience associated with the length of the interview.
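The bidding procedure that produced these maximum-WTP values (described in the Methods and in Fig 2) can be summarised as a small routine; in the Python sketch below the 20 PHP step follows the questionnaire design, while the ceiling of the bid ladder and the respondent model are assumptions of the illustration.

```python
import random

STEP = 20          # PHP difference between adjacent bid levels
MAX_BID = 240      # assumed ceiling of the 12-level vaccination bid ladder (12 x 20 PHP)

def bidding_game(respondent_accepts, rng=random.Random(0)):
    # respondent_accepts(bid) -> bool models the yes/no answer to an offered price.
    bid = rng.randrange(STEP, MAX_BID + STEP, STEP)   # random starting bid
    if respondent_accepts(bid):
        # raise the bid until it is refused or the ladder tops out
        while bid + STEP <= MAX_BID and respondent_accepts(bid + STEP):
            bid += STEP
        return bid
    # otherwise lower the bid until it is accepted, or conclude WTP = 0
    while bid > 0:
        bid -= STEP
        if bid == 0 or respondent_accepts(bid):
            return bid
    return 0

# Example: a respondent whose true maximum WTP is 70 PHP ends at the 60 PHP level.
print(bidding_game(lambda price: price <= 70))
```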
Eighty-eight percent of the WTP for dog vaccination responses and 86% of the WTP for dog registration responses were above zero. The odds of participants stating that they were willing to pay for dog registration were almost 30 times (CI: 13 to 70) higher among those who were willing to pay for dog vaccination compared to those who were not willing to pay for vaccination. The population medians for the WTP for dog vaccination and registration were estimated to be 69.65 PHP (1.67 USD) and 29.13 PHP (0.70 USD), respectively (Table 3). Looking at the distribution of the WTP medians across the selected municipalities in Ilocos Norte, the lowest median WTP for vaccination was observed in the municipality of Pasuquin while the highest was in the municipality of Bacarra. For dog registration, the lowest and the highest medians were observed in the municipalities of Banna and Burgos, respectively (Fig 3A and 3B).

A good majority of those who owned dog(s) were willing to pay the stated amount for dog vaccination and/or registration. Only 10% and 14% of dog owners had a stated maximum WTP of zero PHP for vaccination and for registration, respectively. In general, the majority of respondents (86%) indicated they were willing to pay the stated amount to vaccinate each of their dogs annually, while the remaining proportion were either not willing to accept this commitment (12%) or did not know (2%) if they wanted to commit. Eighty-six percent of dog owners were willing to commit. The same percentage of those who did not own dog(s) were also willing to commit to pay for each of their dogs, annually. Fig 4A and 4B display the proportion of the population willing to pay a given price or more for dog vaccination and registration. The hypothetical demand for dog vaccination falls gradually as price increases, while it falls rapidly for dog registration.

In the univariate analysis, WTP for vaccination was negatively associated with age and positively associated with income. Willingness to pay decreased as age increased, and people in the relatively higher income group (120,001 PHP and above) were, on average, willing to pay significantly more than people in the relatively lower income group (120,000 PHP and below) (Table 4). Also, we observed similar variables and directions of association for WTP for registration as we did for vaccination. In addition to age, gender, income, and municipality of residency, dog ownership status and the number of household dogs predicted WTP for dog registration (Table 4). Some of the knowledge parameters were also found to be independently associated with participants' WTP in the univariate analysis (Table 5).
Adequate recognition of the outcome of rabies in humans was positively associated with participants' WTP for both dog registration and vaccination, while adequate recognition of rabies signs in dogs and in humans was only associated with participants' WTP for registration. Participants' willingness to commit to pay for each of their dogs annually was also found to be an important determinant of WTP for vaccination and registration in this study. Those who were not willing to commit to pay each year were, on average, willing to pay significantly less for dog vaccination and registration (Table 5). (Table 5 notes: categories marked as not specific comprise respondents who stated symptoms, exposures, or outcomes that are not specific to rabies; the remaining categories comprise respondents who stated at least one rabies-related symptom or exposure, or who identified a clinical manifestation of rabies or stated "death" as its outcome in humans. doi:10.1371/journal.pntd.0004486.t005.) In assessing the characteristics of participants willing to commit to pay each year in this survey, participants aged 20 to 39 years were 3.94 (1.11-13.98) times as likely to be willing to commit to pay each year as those who were over the age of 65 years. Moreover, we also found that people who stated they strongly liked dogs were 4.22 (1.52-11.75) times as likely to be willing to commit to pay as people who strongly disliked dogs. This indicates that participants of younger age groups or participants with a more favorable attitude towards dogs may be more willing to commit to pay for each of their dogs annually. Similar directions and magnitudes of association seen in the univariate linear regression were also observed in the multivariable analysis. A number of factors (such as age, income, number of dogs, participants' willingness to commit to pay for each of their dogs annually, and participants' knowledge regarding the signs of rabies in dogs) that were significantly associated with WTP in the univariate analysis remained independently associated with WTP for dog vaccination and/or registration in the multivariable model. Similarly, age was a significant predictor of the amount individuals were willing to pay for vaccination and registration in the multivariable model, as it was in the univariate analysis. We observed that, even after adjusting for the other significant determinants of WTP for vaccination/registration, the higher the age, the lower the amount individuals were willing to pay (Table 6). (Table note: differences in mean WTP are reported in PHP; * denotes the reference group.) No direct relationship was observed between WTP and employment status. However, we found a strong association between age and employment status. Specifically, those who were between the ages of 20-39 years and 40-64 years were 11.20 (CI: 4.74 to 26.47) and 11.37 (CI: 5.25 to 24.61) times as likely to be employed, respectively, as those who were over the age of 65. Further stratification of the data by employment status revealed that there was no statistically significant relationship between age and WTP for vaccination in the employed group, while in the unemployed group the relationship between age and WTP for vaccination remained significant. In the case of dog registration, however, the association between age and WTP persisted regardless of employment status. Therefore employment status may have modified the relationship between age and individuals' WTP for vaccination, but may have had no effect on individuals' WTP for registration.
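As an illustration of the model-building checks described in the Methods, the multicollinearity screen and a cluster-robust linear model could look like the following. This is a hedged sketch in Python rather than the authors' Stata code, and the column names and synthetic data are hypothetical.

```python
# Hypothetical illustration of the VIF screen and cluster-robust regression;
# synthetic data, not the survey dataset or the authors' Stata code.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "age": rng.integers(20, 80, n),
    "income": rng.exponential(60_000, n),          # annual income, PHP
    "n_dogs": rng.poisson(1.3, n),
    "cluster_id": rng.integers(0, 30, n),          # sampling cluster
})
df["wtp"] = 70 - 0.3 * df["age"] + 0.0002 * df["income"] + rng.normal(0, 15, n)

# Multicollinearity check: VIF > 10 (tolerance < 0.1) as the rule of thumb.
X = sm.add_constant(df[["age", "income", "n_dogs"]])
for i, col in enumerate(X.columns):
    print(col, round(variance_inflation_factor(X.values, i), 2))

# Linear model with cluster-robust standard errors for the cluster design.
fit = smf.ols("wtp ~ age + income + n_dogs", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["cluster_id"]})
print(fit.summary())
```

Cluster-robust standard errors widen the usual OLS confidence intervals when responses within the same sampling cluster are correlated, which is the reason they were used in the analysis above.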
Discussion

In the present study, we elicited the maximum amount residents of Ilocos Norte, Philippines were willing to pay for dog vaccination and registration. On average, Ilocos Norte residents were willing to pay 69.65 PHP (approximately 1.67 USD) for dog vaccination and 29.13 PHP (0.70 USD) for dog registration. Eighty-six percent of respondents were willing to pay the stated amount to vaccinate each of their dogs annually. The study findings give policy makers some indication of how much residents may be willing to contribute financially towards dog vaccination and registration in this community. In recent years, actual registration and vaccination fees have successfully been introduced in other parts of the country as well as in other rabies-endemic countries [10,28]. In Bohol, for instance, dog vaccination and registration fees charged to dog owners were, on average, 75.49 PHP (approximately 1.74 USD) and 50 PHP (approximately 1.11 USD in 2009), respectively [9,10]. These fees were slightly higher than the values found in the present study, which implies that Bohol residents may attach a higher value to dog vaccination and registration than Ilocos Norte residents. This may have resulted from the implementation of the Bohol Rabies Prevention and Elimination Program, which may have increased rabies awareness as well as promoted community-level participation and hence enhanced commitment [10]. The observed difference may also have resulted from the hypothetical nature of this survey, which uses individuals' stated preferences in Ilocos Norte as opposed to practical experience in Bohol. The average stated maximum WTP for vaccination, in general, was found to be higher than the overall estimated per-dog vaccination cost for most Asian, African, and Latin American countries (1.55 USD) [3]. In comparison with specific rabies-endemic countries, the WTP value exceeded the estimated per-dog vaccination cost in Thailand (0.52 USD), while it was lower than the cost found in Tanzania (1.73 USD and 5.55 USD per dog vaccinated in two different settings using two different vaccination strategies) and in N'Djamena, Chad (found through both owner-charged (19.40 USD) and free vaccination campaigns (2.90-3.80 USD)) [3,29-32]. The results of our study, however, suggest that the stated maximum WTP for dog vaccination in this population may not cover the entire cost of a vaccination program if 70% vaccination coverage of the estimated dog population is to be attained. The estimated program cost per vaccinated dog in the Philippines varies from one locality to another. For example, the estimated cost per dog vaccinated in Muntinlupa City during 1991 was 0.78 USD (1.11 USD in 2012), while in Bohol Island it was estimated at 1.62 USD, excluding manpower costs [10,33]. These costs were estimated for vaccination coverage greater than 70% attained in these locations. The cost estimated in Bohol, which geographically is more comparable to Ilocos Norte than Muntinlupa City, is higher than the stated maximum WTP for vaccination found among most of the participants in our study. Only 46% of our survey participants had a stated willingness to pay for dog vaccination of 1.55 USD or higher.
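The price-setting argument that follows rests on reading coverage off the stated-WTP distribution: for any candidate fee, the share of respondents whose stated maximum WTP is at least that fee. A minimal sketch with made-up WTP values, not the survey data:

```python
# Share of respondents whose stated maximum WTP meets or exceeds a candidate
# fee; the WTP values here are invented for illustration.
import numpy as np

stated_wtp_php = np.array([0, 10, 25, 30, 45, 50, 60, 70, 100, 150])

def share_willing(price_php: float, wtp: np.ndarray) -> float:
    """Proportion with stated maximum WTP >= the candidate price."""
    return float(np.mean(wtp >= price_php))

for price in (25, 45, 75):
    print(f"{price} PHP: {share_willing(price, stated_wtp_php):.0%}")
```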
Theoretically, if the stated WTP holds true, the price of dog vaccination should be set between 25-45 PHP (0.59-1.08 USD) to ensure that 68%-79% of our study participants would be willing to pay for dog vaccination. Practically, however, stated WTP and observed payment for dog vaccination have been found to vary considerably with the price of vaccination charged to owners [28]. Therefore, we may expect some degree of variation between stated and actual payment. Participants who were willing to pay for dog vaccination were significantly more likely to be willing to pay for dog registration. This may be because dog registration fees are mandated by law in the Philippines, which may have influenced respondents to accept these fees as obligatory. Another explanation for their willingness to pay for registration might be the fact that registration fees act as insurance for receiving subsidized post-exposure treatment from the government if dog owners are bitten by a dog [34]. In this study, we found that dog owners were willing to pay significantly less for both registration and vaccination services than those who did not own dogs (Table 4). This discrepancy may have occurred simply because people who did not own dog(s) may not have considered the practical economic consequences of paying, compared with those who actually utilize these services, resulting in a significant variation in WTP between these groups. The present study also found that WTP for dog vaccination and registration was influenced by some demographic factors and pre-existing knowledge. Among these factors, we found that age, income, participants' willingness to commit to pay each year, and municipality of residency were significantly associated with WTP. Age was an important determinant of WTP and was strongly associated with WTP for both dog vaccination and registration, even after controlling for participants' income (in the case of WTP for dog vaccination), municipality of residency, willingness to pay for each of their dogs annually, and number of dogs owned (in the case of WTP for dog registration). This, in part, may be explained by the relationship observed between age and employment status in this study. Although there was no statistically significant relationship between employment status and WTP in this study, the strong relationship observed between WTP and age, and the great dependency of employment status on age, may have implications not for individuals' WTP but rather for their ability to pay. This, in turn, may give cause for concern about relying on a single strategy for collecting dog rabies vaccination fees, and could be used to argue that strategies to shield the elderly (who may be unemployed) from user fees related to dog vaccination need to be considered in the implementation of such programs. Another explanation for the strong association between WTP and age in this survey may be the relationship observed between age and rabies awareness. Specifically, compared with those younger than 65 years old, participants 65 years old and over were less likely to have heard about rabies and less likely to have correctly identified the outcome of rabies in humans. This gap in rabies awareness among the elderly may also have had an indirect effect on their WTP. This may be addressed by implementing educational campaigns that target individuals in this age category. Raising rabies awareness among the younger generation may also act as a long-term solution.
The positive association between income level and the amount individuals were willing to pay for dog vaccination and registration was consistent with the theoretical construct of positive income elasticity, which states that higher income should be associated with higher WTP. However, this again may imply that people in the lower income category may be less able to pay than people in higher income groups. Therefore, there may be a need to adjust fees for those within the low income category through subsidization in order to balance participation. This study also found that WTP was significantly influenced by individuals' willingness to commit to pay for dog vaccination and registration. Being aged 20-39 years and/or having a favorable attitude towards dogs partially defined the characteristics of those who were willing to commit to pay. This may suggest that programs that enhance favorable attitudes towards dogs, particularly targeting the younger age groups (20 to 39 years), may be a good strategy for attaining the continued financial commitment required to sustain canine rabies elimination programs such as mass dog vaccination and registration campaigns. Participants with better knowledge of the outcome of rabies in humans were willing to pay a significantly higher amount than those who had no such knowledge. This positive association may imply that greater awareness of the outcome of rabies in humans translates into greater perceived value of dog vaccination and registration. However, as rabies cases fall, awareness is likely to fall and may in turn affect WTP. There are some limitations of the present study that need to be considered. First, in the use of the contingent valuation technique, biases that may make hypothetical WTP diverge from actual WTP can arise [17]. The reliability of this technique depends heavily on the information respondents possess about the service being valued. Given that the services being valued in this study were dog rabies vaccination and registration, and that participants demonstrated some degree of awareness about them, the WTP values are likely to be valid and reliable. Second, starting point bias may influence the validity of stated WTP estimates. However, strategies were developed to minimize this bias by increasing the number of bidding levels in addition to randomly assigning the starting bids to participants. Third, this survey utilized a convenience sampling strategy to determine participating households and participants. This may limit the external validity/generalizability of the survey. In an effort to maximize generalizability, the survey attempted to improve representativeness by maximizing the number of clusters within each municipality as well as increasing the number of households selected within each cluster. Fourth and last, the use of paper-based questionnaires may itself carry the risk of introducing bias into the study. To prevent biases related to questionnaire studies from being introduced, this study used an in-person interview strategy, which provided an opportunity for participants to ask questions and request clarification if uncertain about specific items. In addition, enumerators were pre-trained in the survey methods, and questionnaires were translated into the local language in order to maintain consistency and improve comprehension. As a result of these efforts, no strategically introduced bias was observed and missing values appeared to be randomly distributed across respondents and enumerators.
The analysis also accounted for these missing values to eliminate errors that could result from miscalculation.

Conclusion

This study provided evidence on the perceived monetary value of dog vaccination and registration in Ilocos Norte, Philippines by assessing the maximum amount of money individuals are willing to pay. It found that the majority of Ilocos Norte residents stated they were willing to pay, on average, 1.67 USD for dog vaccination and 0.70 USD for dog registration. Socio-economic and demographic factors such as age, income, number of dogs owned, municipality of residency, and participants' willingness to pay for each of their dogs annually were found to influence stated WTP. These factors, therefore, may need to be considered prior to the introduction of such fees to the public. Creating rabies awareness and promoting a favorable attitude towards dogs may also aid in the effective delivery of such programs.
v3-fos-license
2018-04-03T00:00:37.097Z
2017-01-29T00:00:00.000
3333245
{ "extfieldsofstudy": [ "Medicine", "Biology", "Mathematics" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/gepi.22041", "pdf_hash": "7c0e715902fbc7d22247e444c00f60dffcc1ebb9", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42212", "s2fieldsofstudy": [ "Mathematics" ], "sha1": "a39c5bf7b5febad2b0a6a7f931877b3b8f227fd3", "year": 2017 }
pes2o/s2orc
Semiparametric methods for estimation of a nonlinear exposure‐outcome relationship using instrumental variables with application to Mendelian randomization

ABSTRACT

Mendelian randomization, the use of genetic variants as instrumental variables (IV), can test for and estimate the causal effect of an exposure on an outcome. Most IV methods assume that the function relating the exposure to the expected value of the outcome (the exposure‐outcome relationship) is linear. However, in practice, this assumption may not hold. Indeed, often the primary question of interest is to assess the shape of this relationship. We present two novel IV methods for investigating the shape of the exposure‐outcome relationship: a fractional polynomial method and a piecewise linear method. We divide the population into strata using the exposure distribution, and estimate a causal effect, referred to as a localized average causal effect (LACE), in each stratum of the population. The fractional polynomial method performs metaregression on these LACE estimates. The piecewise linear method estimates a continuous piecewise linear function, the gradient of which is the LACE estimate in each stratum. Both methods were demonstrated in a simulation study to estimate the true exposure‐outcome relationship well, particularly when the relationship was a fractional polynomial (for the fractional polynomial method) or was piecewise linear (for the piecewise linear method). The methods were used to investigate the shape of the relationship of body mass index with systolic blood pressure and diastolic blood pressure.

INTRODUCTION

Often the shape of association between an exposure and an outcome is nonlinear. For example, the observed association between body mass index (BMI) and all-cause mortality in a Western context is J-shaped (or U-shaped), as risk of mortality is increased for individuals at both ends of the BMI distribution (Flegal, Kit, Orpana, & Graubard, 2013). However, particularly for underweight individuals, this could reflect either reverse causality or confounding, rather than a true causal effect of low BMI increasing mortality risk. Instrumental variable (IV) methods can be used to distinguish between correlation and causation. However, these methods typically assume that the exposure-outcome relationship is linear when estimating a causal effect (Hernán & Robins, 2006). In many cases, investigating the shape of the exposure-outcome relationship is the primary aim of a study. This can be used to define treatment thresholds for pharmaceutical interventions or health guidelines. A natural way of tackling the nonlinearity problem in IV analysis is to perform a two-stage analysis similar to the well-known two-stage least squares method, except fitting a nonlinear function in the second stage (Horowitz, 2011; Newey & Powell, 2003). However, this approach requires the instrument and any covariates included in the first-stage model to explain a large proportion of the variance in the exposure, as information for assessing the shape of the relationship between the exposure and outcome will only be available for the fitted values of the exposure from the first-stage regression.
If the proportion of variance in the exposure explained by the IV is small, then observing nonlinearity over this limited range of values is unlikely. In Mendelian randomization, the use of genetic variants as IVs, genetic variants typically explain only a small percentage of the variance in the exposure (usually in the region of 1-4%; Ebrahim & Smith, 2008). Two approaches for addressing nonlinearity in the context of Mendelian randomization have recently been proposed (Burgess, Davies, & Thompson, 2014; Silverwood et al., 2014). Burgess et al. (2014) assessed the consequences of performing a linear IV analysis when the exposure-outcome relationship truly was nonlinear, as well as stratifying individuals using the exposure distribution to obtain IV estimates, referred to as localized average causal effects (LACE), in each stratum. Silverwood et al. (2014) performed metaregression of LACE estimates across strata to examine whether a quadratic rather than a linear model was a better fit for relationships between alcohol consumption and a variety of cardiovascular markers. In this paper, we present two novel semiparametric methods, developed for use in Mendelian randomization, for investigating the shape of the exposure-outcome relationship using IV analysis. The first is based on fractional polynomials (Royston & Altman, 1994; Royston, Ambler, & Sauerbrei, 1999), whereas the second fits a piecewise linear function. We also propose a test for nonlinearity based on the fractional polynomial method, and assess the impact of varying the number of strata of the exposure distribution used to test for nonlinearity and to estimate nonlinear relationships. We illustrate the methods using data from UK Biobank (Sudlow et al., 2015), a large UK-based cohort, to investigate the shape of the relationship between BMI and blood pressure using Mendelian randomization.

Stratifying on the IV-free exposure

We define the exposure-outcome relationship as the function relating the exposure to the expected value of the outcome. We initially assume that this function is homogeneous for all individuals in the population, and return to its interpretation in the case of heterogeneity in the discussion. To assess the shape of association between the exposure and outcome using a single instrument, we first stratify the population using the exposure distribution. If we were to stratify on the exposure directly, then an association between the IV and outcome might be induced even if it were not present in the original data, thus invalidating the IV assumptions (Didelez & Sheehan, 2007). This problem can be avoided by instead stratifying on the residual variation in the exposure after conditioning on the IV, assuming that the effect of the IV on the exposure is linear and constant for all individuals across the entire exposure distribution (Burgess et al., 2014). In econometrics, this residual is known as a control function (Arellano, 2003). We calculate this residual by performing linear regression of the exposure on the IV, and then setting the value of the IV to 0. We refer to this as the IV-free exposure. It is the expected value of the exposure that would be observed if the individual had an IV value of 0, and can be interpreted as the nongenetic component of the exposure. In each stratum of the IV-free exposure, we estimate the LACE as a ratio of coefficients: the IV association with the outcome divided by the IV association with the exposure.
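A minimal sketch of this stratification and LACE calculation, using synthetic data and illustrative parameter values rather than the paper's simulation settings or R code:

```python
# Sketch: form the IV-free exposure as the residual from regressing the
# exposure on the IV, stratify on its quantiles, and estimate the LACE in each
# stratum as beta_{Y|Z, stratum} / beta_{X|Z, whole sample}. All data and
# coefficients are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 10_000
df = pd.DataFrame({"z": rng.binomial(2, 0.3, n),      # instrument (allele count)
                   "u": rng.uniform(0, 1, n)})        # unmeasured confounder
df["x"] = 2 + 0.25 * df["z"] + df["u"] + rng.normal(0, 1, n)       # exposure
df["y"] = 0.2 * df["x"] ** 2 + 0.8 * df["u"] + rng.normal(0, 1, n)  # outcome

# IV-free exposure: residual variation in x after conditioning on the IV.
fit_xz = sm.OLS(df["x"], sm.add_constant(df["z"])).fit()
beta_xz = fit_xz.params["z"]                 # IV-exposure association, whole sample
df["iv_free_x"] = df["x"] - beta_xz * df["z"]
df["stratum"] = pd.qcut(df["iv_free_x"], 10, labels=False)

rows = []
for s, grp in df.groupby("stratum"):
    fit_yz = sm.OLS(grp["y"], sm.add_constant(grp["z"])).fit()
    rows.append({"stratum": s,
                 "mean_x": grp["x"].mean(),
                 "lace": fit_yz.params["z"] / beta_xz,   # ratio of coefficients
                 "se": fit_yz.bse["z"] / beta_xz})       # first-order delta method
lace = pd.DataFrame(rows)
print(lace)
```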
The assumption that the effect of the IV on the exposure is constant is a stronger version of the monotonicity assumption (Angrist, Imbens, & Rubin, 1996), and hence the LACE are local average treatment effects (also called complier-averaged causal effects; Yau & Little, 2001) for each stratum (Imbens & Angrist, 1994). We then proceed to estimate the exposure-outcome relationship from these LACE estimates using two approaches: the first based on fractional polynomials, and the second a piecewise linear function.

Fractional polynomial method

The fractional polynomial method consists of metaregression of the LACE estimates against the mean of the exposure in each stratum in a flexible semiparametric framework (Bagnardi, Zambon, Quatto, & Corrao, 2004; Thompson & Sharp, 1999). Fractional polynomials are a family of functions that can be used to fit complex relationships for a single covariate (Royston & Altman, 1994). The standard powers used when modeling with fractional polynomials are $P = \{-2, -1, -0.5, 0, 0.5, 1, 2, 3\}$, where the power of 0 refers to the (natural) log function. These powers are used throughout this paper. Fractional polynomials of degree 1 are defined as

$f(x) = \beta_0 + \beta_1 x^{p_1}$,  (1)

where $p_1 \in P$. Similarly, fractional polynomials of degree 2 are defined as

$f(x) = \beta_0 + \beta_1 x^{p_1} + \beta_2 x^{p_2}$,  (2)

where $p_1, p_2 \in P$. In both cases, $x^0$ is interpreted as $\log(x)$. As fractional polynomials of degree larger than 2 are rarely required in practice, these were not considered in this paper (Royston & Altman, 1994). Because a causal effect is an estimate of the derivative of the exposure-outcome relationship (Small, 2014), we fit the LACE estimates using the derivative of the fractional polynomial function (from either (1) or (2)). The method proceeds as follows. First, we calculate the IV-free exposure, and stratify the population based on quantiles of its distribution. Second, the LACE estimate is calculated in each stratum as a ratio of coefficients (the LACE estimate for stratum $j$ is $\hat{\beta}_{Y|Z,j} / \hat{\beta}_{X|Z}$, where $\hat{\beta}_{Y|Z,j}$ is the estimated association of the IV with the outcome in stratum $j$ and $\hat{\beta}_{X|Z}$ is the estimated association of the IV with the exposure in the whole population), and the standard error of the LACE estimate is computed as $\mathrm{se}(\hat{\beta}_{Y|Z,j}) / \hat{\beta}_{X|Z}$ (the first term of the delta method approximation; Thomas, Lawlor, & Thompson, 2007). Third, these LACE estimates are metaregressed against the mean of the exposure in each stratum, using the derivative of the fractional polynomial function as the model relating the LACE estimates to the exposure values. The original fractional polynomial function then represents the exposure-outcome relationship. As this function is constructed from the LACE estimates, the intercept of the exposure-outcome curve cannot be estimated and must be set arbitrarily. If it is set to 0 at a reference value (for instance, the mean of the exposure distribution), then the value of the function represents the expected difference in the outcome compared with this reference value when the exposure is set to different values. Confidence intervals (CIs) for the exposure-outcome curve can be computed arithmetically under a normal assumption, either using the estimated standard errors from the metaregression or by bootstrapping the second and third steps above (we maintain the strata and the estimate of the IV on the exposure as in the original data, and estimate the associations of the IV with the outcome in bootstrapped samples for each stratum).
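Continuing the sketch above, the degree-1 fractional polynomial metaregression can be illustrated as follows. This is a hedged outline rather than the authors' implementation; `lace` is the per-stratum data frame from the previous sketch, and only degree-1 polynomials are shown for brevity.

```python
# Degree-1 fractional polynomial metaregression of the LACE estimates against
# the stratum mean exposure, weighted by inverse variance. Illustrative only.
import numpy as np
import statsmodels.api as sm

POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]

def deriv_basis(x, p):
    """Derivative of x**p (with x**0 read as log x), evaluated at x."""
    return 1.0 / x if p == 0 else p * x ** (p - 1)

x_bar = lace["mean_x"].to_numpy()
y_hat = lace["lace"].to_numpy()
w = 1.0 / lace["se"].to_numpy() ** 2

fits = {}
for p in POWERS:
    # No intercept: the constant of the fractional polynomial drops out
    # when the function is differentiated.
    fits[p] = sm.WLS(y_hat, deriv_basis(x_bar, p), weights=w).fit()

best_p = max(fits, key=lambda p: fits[p].llf)
beta1 = fits[best_p].params[0]
print(f"best power: {best_p}, beta1 = {beta1:.3f}")

def fp_curve(x, p=best_p, b=beta1, x_ref=float(x_bar.mean())):
    """Reconstructed exposure-outcome curve, set to 0 at the reference value."""
    g = np.log if p == 0 else (lambda t: t ** p)
    return b * (g(x) - g(x_ref))
```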
To explore a range of possible parametric forms, we fit all possible fractional polynomial models of degrees 1 and 2, and select the best-fitting one based on the likelihood. A fractional polynomial of degree 2 is preferred over one of degree 1 if twice the difference in the log-likelihood is greater than the 95th percentile point of a $\chi^2_2$ distribution for the best-fitting fractional polynomial in each class (Royston & Altman, 1994).

Piecewise linear method

Another way of estimating the exposure-outcome relationship is to use a piecewise linear approach. The exposure-outcome relationship is estimated as a piecewise linear function, with each stratum contributing a line segment whose gradient is the LACE estimate for that stratum. The function is constrained to be continuous, so that each line segment begins where the previous segment finished. As in the fractional polynomial method, although the intercept for each line segment is fixed by the previous line segment, the overall intercept of the exposure-outcome curve cannot be estimated and must be set arbitrarily. CIs are estimated by bootstrapping the IV associations with the outcome as in the fractional polynomial approach. For a 95% CI, the piecewise linear method is performed for each bootstrapped dataset, and then the 2.5th and 97.5th percentiles of the function are taken at selected points across the exposure distribution; we chose the mean exposure values in each of the strata.

Tests of nonlinearity

There are already two proposed methods in this framework for testing whether a nonlinear exposure-outcome model fits the data better than a linear model. The first is a heterogeneity test using Cochran's Q statistic to assess whether the LACE estimates differ more than would be expected by chance. The second is a trend test in which the LACE estimates are metaregressed against the mean value of the exposure in each stratum; this is equivalent to fitting a quadratic exposure-outcome model. A more flexible version of this method is to test the best-fitting fractional polynomial of degree 1 against the linear model. This can be achieved by comparing twice the difference in the log-likelihood between the linear model and the best-fitting fractional polynomial of degree 1 with a $\chi^2_1$ distribution.

SIMULATION STUDY

To assess the performance of these methods in realistic scenarios for Mendelian randomization, we performed a simulation study. We simulated data for 10,000 individuals for an IV, a continuous exposure that takes only positive values, a continuous outcome, and a confounder (assumed to be unmeasured). The data-generating model for each individual included a Unif(0,1)-distributed confounder, a N(0,1)-distributed error term, and a function $h(\cdot)$ relating the exposure to the outcome (the exposure-outcome relationship). Exposure values were taken to be positive and away from zero so that the outcome takes sensible values for log and negative power functions. The IV explains 2.6% of the variance in the exposure.

Choice of exposure-outcome model

For the fractional polynomial method, all possible fractional polynomials of degrees 1 and 2 were considered as the functional form of the exposure-outcome relationship. Combinations of effect sizes for the parameters were chosen ranging from 0 to 2. For fractional polynomials of degree 2, we also considered effects in opposing directions for $\beta_1$ and $\beta_2$; these simulations yielded similar results to those discussed here (results not shown).
Fixed-effects metaregression was used in the simulations; however, random-effects metaregression yielded similar results (results not shown). For the piecewise linear method and comparisons between methods, linear, quadratic, square-root, and logarithmic functions were considered as the functional form of the exposure-outcome relationship, as well as a threshold model.

Evaluating the performance of the methods

To evaluate the fractional polynomial method, we first fitted the correct fractional polynomial model (i.e., with the correct degree and powers) and assessed the bias and coverage of the effect parameter estimates. Subsequently, we fitted all fractional polynomials of the same degree and selected the best-fitting polynomial based on the likelihood. We assessed the proportion of simulations where the best-fitting fractional polynomial was the correct fractional polynomial. If the correct fractional polynomial was not the best-fitting fractional polynomial, we tested whether it was in the group of fractional polynomials that fit the data almost as well as the best-fitting polynomial, defined as those fractional polynomials where twice the difference in the log-likelihood (compared with the best-fitting polynomial) was less than the 90th percentile point of a $\chi^2_d$ distribution, where $d = 1$ for comparing fractional polynomials of degree 1 and $d = 2$ for comparing polynomials of degree 2. To evaluate the piecewise linear method, we first compared the outcome estimates at the mean exposure value in each quantile to the values of the true model at the same points. The coverages of the bootstrapped 95% CIs were also evaluated at these points. For comparing the fit of the fractional polynomial and piecewise linear models, we used a heuristic function, referred to as (3), based on the expected value of the outcome evaluated at the mean value of the exposure in each quantile group, with summation across the quantile groups.

Varying the number of strata

In the initial simulations, the population was split into decile groups based on the IV-free exposure. Further simulations were performed varying the number of strata, using 5, 10, 50, and 100 quantile groups. Tests of nonlinearity were performed to assess the impact of the number of strata on the empirical power of each test. The empirical power of each test was reported as the proportion of simulation replicates with P-value less than 0.05. The heuristic function (3) was calculated based on 10 deciles for each number of strata. For each simulation and set of parameters, 500 replications were performed. Bootstrap 95% CIs were generated using 500 bootstrap samples. All analyses were performed using R version 3.0.2.

Additional simulations to assess impact of violations of assumptions

We performed additional simulations in which the underlying assumptions that the effect of the IV on the exposure and the effect of the exposure on the outcome are fixed and independent were relaxed. In these simulations, we assessed both modeling assumptions by allowing the effect of the IV on the exposure to vary (by drawing the effect parameter from a normal distribution N(0.25, 0.1²) for each individual in the population), and by allowing the exposure-outcome relationship to vary (by drawing the causal parameter for each individual from a normal distribution centred on its original value with standard deviation 0.2). We assessed the impact of allowing each of these parameters to vary separately and both to vary together.
In addition, we also allowed variation in both parameters to be correlated, by drawing the parameters from a bivariate normal distribution with correlation 0.2. For fractional polynomials of degree 2, only the causal parameter for the second polynomial term was allowed to vary across individuals. We also performed further simulations using a low-frequency genetic variant having a large effect on the exposure (minor allele frequency = 0.03, linear effect on the exposure = 0.75), and using the original genetic variant but having a superadditive (first allele increases exposure by 0.1 units, second by 0.3 units) and a subadditive (first allele increases exposure by 0.3 units, second by 0.1 units) effect on the exposure in the data-generating model.

Fractional polynomial method

Comparisons of fractional polynomials for all powers are provided in Table S1 (degree 1) and Table S2 (degree 2); a summary of results for the most commonly encountered powers is given in Table 1.

Notes to Table 1. Results for all fractional polynomials of degree 1 (all effect sizes) and degree 2 ($\beta_1 = 1$ and $\beta_2 = 2$) are presented in Tables S1 and S2; the table is a summary of results for the most commonly encountered powers. $p$ are the powers and $\beta$ are the effect parameters. Coverage refers to the number of replications where the true value of $\beta$ was contained within the corresponding 95% CI. The power(s) was correctly chosen (Correct) if the best-fitting fractional polynomial was also the correct fractional polynomial, while the correct model was within the set of powers that fit the data as well as the best-fitting fractional polynomial (Set) if the difference between twice the log-likelihood for the correct model and the best-fitting model was less than the 90th percentile of the relevant $\chi^2$ distribution. SD, standard deviation; SE, standard error; FP, fractional polynomial; CI, confidence interval.

For fractional polynomials of degree 1, when fitting the correct fractional polynomial model, the causal estimate was generally unbiased (Table 1). Coverage estimates were close to the nominal 95% rate, except for fractional polynomials of power 2 (and power 3; Table S1), where causal estimates were slightly biased, and this small bias led to undercoverage. However, under the null, causal estimates were unbiased and correct coverage rates were maintained. For fractional polynomials of degree 2, a similar pattern was observed, except that small biases and resulting undercoverage were more common, although the correct coverage rate under the null was always maintained. When fitting all the fractional polynomial models, the correct fractional polynomial model was fitted more often for a fractional polynomial of degree 1, and when the power of the fractional polynomial differed substantially from 0. Although the power to detect the correct functional form was low for the logarithmic and square-root functions, this is to some extent an artifact of the choice of powers; if a basis with only one concave function (either a logarithmic or a square-root function) were used, then the correct model would be chosen more often. As the causal parameter in the model increased, the correct model was chosen more frequently. However, in all cases, the correct fractional polynomial was in the set of best-fitting fractional polynomials in at least 89% of simulations. The fractional polynomial test for nonlinearity rejects the null exactly when the linear model is not in the set of best-fitting fractional polynomials.
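A short sketch of this likelihood comparison, reusing the dictionary of fitted degree-1 metaregression models (`fits`) from the earlier sketch; illustrative only:

```python
# Twice the log-likelihood gap between the best degree-1 fractional polynomial
# and the linear model (power 1), referred to a chi-squared(1) distribution.
from scipy import stats

lr_stat = 2 * (max(f.llf for f in fits.values()) - fits[1].llf)
p_value = stats.chi2.sf(lr_stat, df=1)
print(f"LR statistic = {lr_stat:.2f}, P = {p_value:.3f}")
```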
With a null causal effect, the probability of fitting the "correct" fractional polynomial was not estimated, as all fractional polynomials with zero coefficients would describe the data equally well. In reality, the true exposure-outcome relationship is unlikely to have an exact functional form, so the ability to estimate the shape of the relationship is more important than the precise identification of the function.

Piecewise linear method

The piecewise linear method performed well when the true model was piecewise linear (such as a linear or a threshold relationship), with the predicted mean values of the outcome similar to their true values at the mean value of the exposure within each decile of the IV-free exposure (Table 2). The bootstrapped CIs also had approximately 95% coverage at these points, except for the quantiles at or on either side of the point of inflection of the threshold model. However, when the true model was not piecewise linear (in particular, for a quadratic relationship), estimates were biased and coverage was below nominal levels. Using the heuristic function (3) to compare the estimates from the best-fitting fractional polynomial and the piecewise linear model, the models performed similarly under a linear model. For a quadratic model, the fractional polynomial method outperformed the piecewise linear method, whereas the opposite was true for a threshold model. This is unsurprising, as the fractional polynomial method performed best when the true model was a polynomial, and likewise for the piecewise linear method when the model was piecewise linear.

Varying the number of strata

The best-fitting fractional polynomial method had a similar or slightly better model fit (judged by the heuristic function) when a greater number of strata were used (Table 3).

Table 3. Varying the number of strata and tests of nonlinearity.

However, the piecewise linear method fitted the data better when fewer strata were used. Although the fractional polynomial method ensures that the estimate of the exposure-outcome relationship is a smooth function regardless of the number of strata, the estimate from the piecewise linear method becomes increasingly jagged as the number of strata increases. The coverage under the null (i.e., a linear model) was not overly inflated for any of the tests. In general, the fractional polynomial and quadratic tests were more powerful than the Cochran Q test across the simulations. The power of the Cochran Q test also decreased as the number of strata increased, whereas the power of the other tests either remained the same or increased. The quadratic test slightly outperformed the fractional polynomial test when the true model was a quadratic or a threshold model; the fractional polynomial test was slightly superior when the true model was a logarithmic or a square-root model.

Additional simulations to assess impact of violations of assumptions

In the simulations where we relaxed the assumptions that the IV-exposure and the exposure-outcome effects are the same for all individuals, we found that the fractional polynomial models of degree 1 and the piecewise linear method both performed well in terms of bias and coverage (Tables S3 and S4). The only concern was that tests of nonlinearity had slightly inflated Type I error rates when the IV-exposure and exposure-outcome effects were varied in a correlated way; Type I error rate inflation was not observed when the effects were varied either separately or independently.
In the simulations with a low-frequency genetic variant having a large effect on the exposure (Tables S5 and S6), there was some bias in estimates. This is likely to be the result of weak instrument bias (Burgess & Thompson, 2011): with a low-frequency variant, the variation in instrument strength between the strata is much larger, and so the chances of weak instrument bias affecting the results in specific strata are increased. However, nominal Type I error rates for tests of nonlinearity were maintained. With a superadditive or subadditive model for the genetic association with the exposure (Tables S5 and S6), estimates of the causal parameter in the fractional polynomial method were unbiased, but the power to detect nonlinearity was somewhat reduced. Estimates in the piecewise linear method for a threshold exposure-outcome relationship were somewhat biased. This finding is consistent with previous work on measurement error in the independent variable: here, the IV-free exposure is estimated with error resulting from misspecification of the genetic association with the exposure (see Section 6).

APPLICATION OF METHODS TO THE RELATIONSHIP BETWEEN BMI AND BLOOD PRESSURE IN UK BIOBANK

We illustrate the methods proposed in this paper in an applied example considering the shape of the relationship between BMI and blood pressure in the UK Biobank study. The observational relationship between BMI and blood pressure has been investigated previously in a variety of contexts: populations of lean individuals (Kaufman et al., 1997), Danish adolescents (Nielsen & Andersen, 2003), and Iranian adolescents (Hosseini et al., 2010). The relationship has been demonstrated to be monotonically increasing, with inconclusive evidence for or against nonlinearity due to limited sample sizes. UK Biobank is a prospective cohort study of 502,682 participants recruited at 22 assessment centers across the United Kingdom between 2006 and 2010 (Sudlow et al., 2015). Participants were aged between 40 and 69 at baseline. Extensive health, lifestyle, biological, and genetic measurements were taken on all participants. At the time of writing this paper, genetic information was only available for 133,687 individuals of European ancestry. For individuals on antihypertensive medication, 15/10 mmHg were added to their SBP/DBP measurements, respectively (where SBP is systolic blood pressure and DBP is diastolic blood pressure). A sensitivity analysis was performed in individuals who had no history of hypertension. To create an allele score (also called a genetic risk score) of variants related to BMI to be used as an IV, we extracted the 97 variants previously associated with BMI at a genome-wide level of significance by the GIANT consortium (Locke et al., 2015). A proxy variant (rs751414; r² = 0.99) was used instead of rs2033529, as this variant was not available in UK Biobank; the linkage disequilibrium information was calculated using the European samples from 1000 Genomes (Abecasis et al., 2012). All of the variants were either directly genotyped or well-imputed (INFO > 0.9). The allele score for each individual was computed by multiplying the number of BMI-increasing alleles for each variant by the effect of the variant on BMI (as estimated in the GIANT consortium) and summing across the 97 variants. Overall, this score explained 1.7% of the variance in BMI. We performed both the fractional polynomial and piecewise linear methods for estimating the relationships of BMI with SBP and DBP.
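The weighted allele score described above is a simple weighted sum; a hedged sketch with stand-in genotype dosages and weights (the real analysis uses the UK Biobank genotypes and the GIANT effect estimates):

```python
# Weighted allele score: BMI-increasing allele counts weighted by each
# variant's published effect and summed. Stand-in data, not UK Biobank.
import numpy as np

rng = np.random.default_rng(2)
n_individuals, n_variants = 1_000, 97
dosages = rng.binomial(2, 0.3, size=(n_individuals, n_variants)).astype(float)
weights = rng.uniform(0.01, 0.08, n_variants)   # per-allele effect on BMI

allele_score = dosages @ weights                # one score per individual
print(allele_score[:5])
```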
The fractional polynomial method was implemented using 100 strata, whereas the piecewise linear method was implemented using 10 strata to avoid the exposure-outcome curve being overly jagged. The reference point was set at 25 kg/m². To account for the multiple centers, we standardized the measure of BMI by stratifying individuals based on their residual value of BMI (the IV-free exposure) after regression of BMI on the allele score, age, sex, and center (as a categorical variable). Adjustment for age, sex, and center was also made in the regressions to obtain the LACE estimates in each quantile group. If additional population stratification were expected, we could additionally adjust for genetic principal components to minimize the effect of population stratification in biasing IV estimates. To assess the assumption that the effect of the IV on BMI is constant over the entire distribution of BMI, we also considered BMI as the outcome and calculated the associations of the IV with BMI in each of the strata. We then conducted tests (trend and Cochran Q tests) to investigate heterogeneity in the IV associations with BMI in different strata.

Results of applied example

The exposure-outcome relationships for BMI with SBP and DBP estimated using the fractional polynomial and piecewise linear methods are presented in Figure 1. There were strong causal effects of BMI on both SBP and DBP (P-value < 1 × 10⁻⁵ for the causal estimates differing from zero in the fractional polynomial methods). For comparison, the standard two-stage least squares linear estimate was 0.527 mmHg per 1 kg/m² increase in BMI (95% CI: 0.363, 0.691) for SBP and 0.433 mmHg (95% CI: 0.338, 0.528) for DBP. There was strong evidence that the association between BMI and SBP was nonlinear, with the quadratic test yielding a P-value of 0.0026 (fractional polynomial test P-value = 0.0164, Cochran Q test P-value = 0.0346). The best-fitting fractional polynomial of degree 1 for the relationship between BMI and SBP had power −0.5, and there was no evidence to suggest that a fractional polynomial of degree 2 fitted the data better (P-value = 0.135). The estimate of the exposure-outcome relationship from the piecewise linear method visually suggested a threshold-type relationship, with a steep slope up to a BMI value of about 32 kg/m², and a slightly negative slope from 32 kg/m² onwards. The relationship between BMI and SBP was similar in individuals with no history of hypertension (Fig. S1). The association between BMI and DBP was also nonlinear (quadratic test P-value = 0.0005, fractional polynomial test P-value = 0.0114, Cochran Q test P-value = 0.0049), and there was strong evidence that the best-fitting fractional polynomial of degree 2 (with $p_1$ and $p_2$ both equal to 3) fitted the data better than the best-fitting fractional polynomial of degree 1 (P-value = 0.0062). There was no evidence of a different relationship between BMI and DBP for underweight individuals, with the exposure-outcome curve increasing almost linearly up to a BMI of around 40 kg/m². But for hyperobese individuals (BMI > 40 kg/m²), DBP seemed to decrease sharply. This was particularly evident in the fractional polynomial method, which used a greater number of strata and hence had more resolution to consider the shape of the exposure-outcome relationship at the extremes of the BMI distribution. One potential reason for this finding is that hyperobese individuals with high DBP are less likely to be enrolled in UK Biobank, perhaps due to differential survival probability.
Another reason could be the difficulties in measuring blood pressure in hyperobese individuals (Leblanc et al., 2013). However, there was no evidence that the relationship between BMI and DBP was nonlinear in individuals with no history of hypertension (P-value > 0.05 for all tests; Fig. S1). There was no evidence that the associations of the IV with BMI varied between the different strata (trend test P-value = 0.135, Cochran Q test P-value = 0.901).

DISCUSSION

In this paper, we have proposed and tested two novel methods for examining the relationship between an exposure and an outcome using IV analysis in the context of Mendelian randomization. Both methods rely on stratifying the population based on the IV-free exposure, that is, the exposure minus the effect of the IV. A causal effect, referred to as a LACE, is estimated in each stratum of the population. The first method performs metaregression on these LACE estimates using fractional polynomials. The second method estimates a continuous piecewise linear function, the gradient of which in each stratum is the LACE estimate for that stratum. Both methods were demonstrated in a simulation study to estimate the true exposure-outcome relationship well when its functional form corresponded to the form of the estimate from each method (i.e., when the exposure-outcome relationship was a fractional polynomial for the fractional polynomial method, and when the relationship was piecewise linear for the piecewise linear method), with causal estimates being close to unbiased and coverage rates generally maintaining nominal levels (in particular, coverage rates were always correct under the null). Additionally, tests of nonlinearity were provided and their performance was assessed. The quadratic and fractional polynomial tests had the best performance in terms of Type I error rate and power.

Comparison of methods

The recommendation as to which method to use depends on the aim of the investigation. The fractional polynomial method will always provide a smooth estimate of the exposure-outcome relationship, and as such has more consistent performance when a large number of strata are chosen (i.e., when the shape of the relationship is considered over a wider and more detailed range of the exposure distribution). Fractional polynomials of degree 1 had better performance than those of degree 2 in terms of bias and coverage of effect estimates. However, fractional polynomials of degree 1 are less flexible and would not be able to model complex exposure-outcome relationships. Additionally, they tend to smooth over discrepancies in the data. For example, if the LACE estimate for individuals in the lowest quantile group for BMI was substantially different from the other LACE estimates, then both this difference and any uncertainty in the LACE estimate would be smoothed over somewhat in the fractional polynomial estimate. Preference between the methods therefore comes down to a question of prior belief: if one truly believes the true exposure-outcome relationship to be smooth, and that estimates in the surrounding quantiles should be used to model the LACE in the target quantile, then the fractional polynomial method should be preferred. However, if one does not want to smooth over estimates, then the piecewise linear method should be preferred; the estimate of the exposure-outcome relationship will, however, be more jagged and variable.
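For readers weighing the two methods, a minimal sketch of the piecewise linear construction follows (slope in each stratum equal to that stratum's LACE estimate, segments chained for continuity). The stratum means, LACE values, and the choice of segment boundaries below are illustrative assumptions, not values from the paper.

```python
# Continuous piecewise linear curve built from per-stratum LACE estimates.
import numpy as np

# Illustrative stratum means (e.g. mean BMI per decile) and LACE estimates.
means = np.array([19.5, 21.8, 23.4, 24.8, 26.1, 27.5, 29.0, 30.9, 33.4, 38.2])
slopes = np.array([0.9, 0.8, 0.7, 0.6, 0.6, 0.5, 0.3, 0.1, -0.1, -0.2])

# Segment boundaries taken here as midpoints of adjacent stratum means
# (one reasonable choice; not necessarily the one used in the paper).
bounds = np.concatenate(([means[0]], (means[:-1] + means[1:]) / 2, [means[-1]]))

def piecewise_curve(x: float) -> float:
    """Piecewise linear curve, anchored at 0 at the lowest stratum mean
    (the overall intercept is arbitrary)."""
    y, left = 0.0, bounds[0]
    for k, slope in enumerate(slopes):
        right = bounds[k + 1]
        if x <= right:
            return y + slope * (x - left)
        y += slope * (right - left)        # carry the segment forward (continuity)
        left = right
    return y + slopes[-1] * (x - left)     # extrapolate beyond the last boundary

print([round(piecewise_curve(v), 2) for v in (20, 25, 30, 35, 40)])
```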
The number of strata and the divisions between strata should ideally be specified before the analysis, according to practical considerations (e.g., previously determined categories of BMI) and the sample size available. If the number of strata is too large, then each stratum will have a small sample size. The LACE estimate in a small stratum will be imprecise and may be susceptible to weak instrument bias (Burgess & Thompson, 2011), particularly if the genetic variant is rare.

Interpretation of the exposure-outcome relationship

If the function relating the exposure to the average value of the outcome is homogeneous across the population, then the methods provided in this paper estimate this function (the exposure-outcome relationship) even if there is unmeasured confounding. If the function is heterogeneous, then the situation is more complicated (Small, 2014). For example, taking BMI as the exposure, if the subject-specific effect curve (as defined by Small, 2014) is linear for all individuals in the population, but the magnitude of effect is greater for overweight individuals, then the exposure-outcome relationship will be quadratic (or at least convex and positive) rather than linear. The exposure-outcome curve at low values of the exposure is only estimated using underweight individuals, and at high values of the exposure only using overweight individuals. However, this is perhaps the most relevant way to express the exposure-outcome relationship, as the causal effect of reducing one's BMI from 20 to 18 kg/m² is not so relevant for someone with a BMI of 40 kg/m². Hence, we do not claim any global interpretation of the exposure-outcome relationship as estimated in this paper, apart from in the unlikely case that the functional relationship is homogeneous in the population. It is better interpreted as a series of local estimates, which are graphically connected in order to compare and contrast trends in these local estimates at different values of the exposure, and to compare the relative benefit of intervening on the exposure for individuals with different values of the exposure, but which does not necessarily reflect the effect of intervening on the exposure to take any value in its distribution for any single individual.

Measurement error in the exposure

As has been noted in other contexts, estimates of nonlinear relationships are sensitive to measurement error in the exposure (Keogh, Strawbridge, & White, 2012). The standard "triple whammy" of measurement error is likely to apply here: measurement error biases parameter estimates, reduces power, and obscures important features in the shape of relationships (Carroll, Ruppert, Stefanski, & Crainiceanu, 2006). For example, with a threshold relationship, measurement error in the exposure would mean that the point of inflexion in the exposure-outcome relationship would be less sharply evident. In the case of BMI, measurement error is not such an issue, as height and weight can be measured precisely, and neither variable experiences substantial diurnal or seasonal variation. However, for other exposures, measurement error may affect results more severely. As noted in the additional simulation analyses, bias due to measurement error can also occur if the model for the genetic association with the exposure is misspecified.
Requirement of concomitant and individual-level data

Many recent advances in Mendelian randomization have enabled investigations to be performed using summarized data on the genetic associations with the exposure and with the outcome only, and/or in a two-sample setting in which genetic associations with the exposure and with the outcome are estimated in separate groups of individuals (Burgess, Butterworth, & Thompson, 2013). However, estimation of the exposure-outcome relationship requires both individual-level data and a one-sample setting (otherwise neither stratification of the population nor the estimation of genetic associations with the outcome in the strata is possible). Large cohorts with concomitant data on genetic variants, exposures, and outcomes are, however, becoming more widely available, particularly in the form of biobanks such as UK Biobank. In conclusion, these two novel methods are useful for investigating nonlinear exposure-outcome relationships. The methods allow easy graphical assessment of the shape of the relationship and, allied with tests of nonlinearity, provide an effective tool for assessing nonlinear exposure-outcome relationships using IV analysis for Mendelian randomization.
v3-fos-license
2023-04-20T15:18:31.023Z
2021-06-01T00:00:00.000
258227741
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "http://njaat.atbu.edu.ng/index.php/jasd/article/download/212/196", "pdf_hash": "0546eb33f6d8db2e60539384847f045e312b0bbd", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42213", "s2fieldsofstudy": [ "Economics" ], "sha1": "439138db7dee371e5920256d7b016bef8f69be69", "year": 2021 }
pes2o/s2orc
ECONOMIC ANALYSIS OF SAWN TIMBER MARKETING IN SAPELE LOCAL GOVERNMENT AREA, DELTA STATE, NIGERIA

INTRODUCTION

Nigeria is blessed with agricultural resources. About 80% of the land is cultivable and about 9% of the land is forested (Nwandu, 2019). Timber is the most economically important product of the forest. The timber industry has the potential to improve economic performance and increase state and household revenues. In 2008, the export of industrial round wood, sawn wood and wood-based panels from developing countries accounted for US$13.1 billion (FAO STAT, 2010). Timber plays a significant role in the nation's socio-economic development, with relevant benefits to human welfare. Its benefits range from its usefulness for interior and exterior decoration in homes and industries to the production of electric poles, plywood, pulp wood, veneers and planks needed by the building and construction industries (Adebara et al., 2014). Timber is broadly classified into hardwood and softwood. Hardwood comes from broad-leaved trees. These trees have flowers and produce seeds such as nuts and fruits. Examples are oak, beech and mahogany. Hardwoods are denser than softwoods and are stronger and more durable. They are used for furniture making. Hardwood is much more expensive than softwood. Softwoods come from cone-bearing trees. Examples are pine, redwood and fir. Softwoods can be used for furniture and doors but are mostly used in construction for roof trusses and stud partitions. Nigeria's timber sector contributes an estimated US$39 billion annually in foreign exchange by supplying wood fuel to meet 80% of the country's total energy needs (Idumah and Awe, 2017). The commercial wood fuel value chain that supplies cities and towns generates over 300,000 full-time jobs (Odetola and Etumnu, 2013). In Nigeria, the export revenue from the timber industry grew at 4.1, 8.0 and 28.8 percent between 1950-60, 1960-70, and 1970-80, respectively (Aribisola, 1993). The Nigerian government's current policy on forest industries is meant to increase the domestic value added in the processing of wood products and has thus placed a ban on the export of logs, rough sawn and clean wood, except processed wood. These measures were put in place to make raw materials locally available for secondary processing mills to achieve the designed value-addition for export (Larinde et al., 2010). Most firms targeted the local market for their products. Incidentally, the majority of them do not keep records of annual sales and volume produced, making it difficult to establish a market flow diagram (Odetola and Etumnu, 2013). The timber marketing enterprise is one of the main economic activities in Sapele Local Government Area. This is obvious from the fact that the timber market holds daily in the area. The greater percentage of the local people depends on the wood industry for their livelihood (some as harvesters, producers, transporters and marketers). Despite the significance of the timber industry, there is little or no study assessing the costs and returns and the level of profitability or otherwise of this venture. This necessitates a comprehensive study on the economics of timber marketing in the study area. In view of this, this study sought to evaluate the economics of timber marketing in Sapele Local Government Area of Delta State.
The specific objectives were to describe the socio-economic characteristics of respondents in the timber marketing business; examine the costs and returns of timber marketing; evaluate the profitability of timber marketing; assess the socio-economic determinants of profitability in timber marketing; and describe the constraints in the timber marketing business in the study area. The null hypothesis stated that the socio-economic characteristics have no significant effect on the profitability of the timber marketing business. MATERIALS AND METHODS The Study Area This study was conducted in Sapele Local Government Area (LGA) in Delta State. It is well known for farming and trading activities as well as civil service jobs. It is located at approximately 5°54′N and 5°40′E. It has a tropical climate, and the annual average temperature is 26.6°C with an annual rainfall of 2,406 mm. It is a very important industrial centre producing agricultural goods such as palm oil, timber and rubber, among others. Sampling Procedure and Size A multi-stage sampling technique was used to select the sample for the study. The first stage was the purposive selection of five (5) communities with timber markets, namely Okirigwe, Gana, Ogorode, Amukpe and Sapele main town. These communities were purposively selected based on the higher concentration of timber marketers in the area. The second stage was the random selection of 25 timber marketers from each of the selected community markets, giving a total of 125 timber marketer respondents used for the study. Data Collection Method Data were collected from primary sources by means of a structured questionnaire personally administered to the timber marketer respondents selected in the study area. Analytical Techniques Data were analyzed using simple descriptive and inferential statistical tools including tables, percentages, means, gross margin and regression analysis. In model specification, the gross margin analysis is: GM = TR - TVC, where GM = gross margin, TR = total revenue and TVC = total variable cost. The profit function used in the study was estimated using multiple regression analysis specified as: Y = β0 + β1X1 + β2X2 + β3X3 + β4X4 + β5X5 + … + β15X15 + e, where Y = profit from timber marketing (Naira), X1 = age, X2 = sex, X3 = household size, X4 = marital status, X5 = schooling years, X6 = timber trade income, X7 = timber marketing experience, X8 = storage cost, X9 = rent, X10 = transportation cost, X11 = marketing cost (expenses), X12 = cost of timber, X13 = price of timber, X14 = channel status of marketer, and e = error term (a worked numerical sketch of the gross margin and regression computations is given at the end of this article). RESULTS AND DISCUSSION Socio-economic Characteristics of Timber Marketers The socio-economic characteristics of sawn timber marketers in the study area are presented in Table 1. The majority (98%) of the respondents were males while 2.0% were females. This could be because the conversion processes of logs are tedious and require physical strength, as confirmed by Oladele et al. (2013). About 51% of the respondents were between the ages of 31-40 years. This meant that the majority of the timber processors and marketers were still in their active age, and this served as an advantage when doing the business of processing, marketing and sourcing timber species from the forest. This was in line with the result from the marketing performance of Irvingia wombulus (Usman et al., 2005). The majority (55.3%) of the respondents were married; being married with children helps them to be more diligent with their business as they have to feed and train their children. The mean household size was 5. 
This implied that there was enough labour from the household to utilize in the timber business. Cools et al. (2018) in their study observed that family size affects parents' labour market outcomes in the long run. The majority of the respondents had attained formal education (94.3%), while 5.7% had no formal education. This implied that the level of education is not a prerequisite for the processing and marketing of timber, though it can promote productivity and aid better management. This study agreed with Cools et al. (2017). Findings of Table 1 further revealed that 46.7% of the respondents had 11-15 years of experience in sawn timber marketing and processing, 27.3% had 6-10 years of experience, while 20.0% and 6.0% had less than 5 years and 16 years and above of experience, respectively. The implication was that years of experience was one of the factors that determined the level of profit made in the sawn timber business. Gollin (2018) found that there was a strong positive relationship between experience and labour productivity. Results also showed that 46.7% of the respondents' businesses had been in existence for between 16-20 years, while 27.3% of the sawmill businesses had been in existence for between 11-16 years. The findings showed that the sawn timber business had been in existence for a long time in the study area. The ownership structure showed that 98% of the sawn timber mills were privately owned without support from government. Similarly, 91.3% were not members of timber associations. This made the price of wood vary in the study area. Effect of Socio-economic Characteristics of Timber Marketers on Marketing Margin Table 2 showed an R2 value of 0.765, which implies that about 77% of the changes in the marketing margin of timber marketers were determined by the socio-economic characteristics and other variables included in the model. Furthermore, four of the variables were significant in their effect on the marketing margin. These variables were: selling price, transport cost, education and age of sawmill. Selling price, with a coefficient of 0.972 and a t-value of 2.802, showed that the selling price of sawn timber is positively related to the marketing margin at the 0.05 level of significance. The implication was that a 1% increase in the selling price will increase the marketing margin of timber by 0.97. Transport cost, with a coefficient of -0.15 and a t-value of -6.818, showed that the transport cost of sawn timber is negatively related to the marketing margin and is statistically significant at the 0.01 level. A 1% increase in the transport cost will decrease the marketing margin by 0.15. Education level, with a coefficient of -51.822 and a t-value of -1.736, showed that educational level was negatively related to the marketing margin at the 0.1 level of significance. It could be deduced that an increase in the education of the marketers will decrease the marketing margin, all things being equal. The age of the sawmill, with a coefficient of 8.564 and a t-value of 2.953, showed that the age of the sawmill was positively related to the marketing margin at the 0.05 level of significance. This implied that a 1% increase in the age of the sawmill will increase the marketing margin. (Note: *** significant at the 1% level; ** at the 5% level; * at the 10% level.) Table 3 showed the costs and returns of sawn timber for timber marketers. Findings revealed that the selling price of 1 x 12 was highly profitable to the timber marketers as against the cost price of the same 1 x 12. 
This was followed by 2 x 6, which had a higher selling price. The implication was that selling 1 x 12 and 2 x 6 was viable on all fronts. Hence, the sawn timber marketers would be more willing to cut timber into 1 x 12 and 2 x 6 dimensions because of their high income. Table 4 showed the marketing margin of sawn timber marketers. Although many factors accounted for the costs incurred in the marketing of sawn timber by marketers, storage cost accounted for the highest (N1,994.00), followed by marketing expenses (N1,103.60), transportation cost (N684.133) and rent (fixed cost) (N534.07). The total variable cost was N575,000, and the total revenue from the sale of 100 units of sawn timber was N1,180,000, with a gross margin of N600,684.20. These results corroborated those of Aiyeloja et al. (2011). The Return on Investment (ROI), which was 1.037, indicated that the marketing of sawn timber was a profitable venture in the study area. This also agreed with the work of Larinde and Olasupo (2011), which showed that the wood trade was very profitable and that an average wood marketer would be able to recoup the investment with good returns within a short period of time. Table 5 showed that the type of species sold was a major constraint encountered by sawn timber marketers, as it scored 27.1%, which was the highest, followed by seasonality (26.4%), transportation cost (22.0%), access to loans (12.9%), difficulty of getting timber from the forest (10.5%) and, lastly, area of production (1.1%). The indication was that the type of species sold, seasonality and transportation were major constraints that influence the marketing of sawn timber, which in turn influences the price. CONCLUSION AND RECOMMENDATIONS The study shows that the marketing of sawn timber in the study area was a profitable enterprise with a large number of buyers and marketers. If resources are efficiently utilized, this could bring about the much-needed boost in the sawn timber enterprise. This will eventually accelerate economic development in the study area. Sawn timber marketing has the prospect of sustaining livelihoods in the study area and even helping in the development of the economy of Nigeria. Recommendations include: 1. Government should construct roads and maintain the already existing ones for easy access to the forests where these timbers are obtained, thus reducing transportation costs in order to boost the revenue of the marketers. 2. Improved marketing efficiency and a sustainable timber supply are a panacea for increased and sustainable profit in sawn wood marketing. 3. The sustainable supply of timber remains an issue begging for attention, so there should be a regulatory framework for the sustainable management of timber, its extraction from the forest, the replanting of felled trees and the planting of new trees. These will help in sustaining the supply of timber. 4. Adequate marketing facilities should be provided to help marketers increase their income. Sawn timber marketers should also have access to loans.
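The gross-margin identity and the profit regression described in the Methods can be illustrated with a short computational sketch. The following Python snippet is illustrative only: part (1) uses the aggregate figures reported above (total revenue of N1,180,000 and the quoted variable-cost subtotal of N575,000), while part (2) fits the stated linear profit function to synthetic placeholder data, since the survey data are not published with the article; the variable names and coefficient values in part (2) are therefore assumptions, not results from the study.

import numpy as np

# --- (1) Gross margin and return on investment --------------------------------
total_revenue = 1_180_000.0        # N, sale of 100 units of sawn timber (Table 4)
total_variable_cost = 575_000.0    # N, variable-cost subtotal quoted in the text

gross_margin = total_revenue - total_variable_cost   # GM = TR - TVC
roi = gross_margin / total_variable_cost             # return per naira of variable cost
print(f"GM = N{gross_margin:,.2f}, ROI = {roi:.3f}")
# The article reports GM = N600,684.20 and ROI = 1.037, so the total variable
# cost behind the published tables evidently differs slightly from the
# N575,000 subtotal quoted in the running text.

# --- (2) Profit function: Y = b0 + b1*X1 + ... + bk*Xk + e ---------------------
rng = np.random.default_rng(0)
n, k = 125, 5                      # 125 respondents, 5 illustrative regressors
X = rng.normal(size=(n, k))        # placeholder socio-economic and cost variables
true_beta = np.array([2.0, 0.9, -0.2, 0.5, -0.1, 0.3])   # arbitrary, for the demo
y = true_beta[0] + X @ true_beta[1:] + rng.normal(scale=0.5, size=n)

X_design = np.column_stack([np.ones(n), X])               # intercept column + regressors
beta_hat, *_ = np.linalg.lstsq(X_design, y, rcond=None)   # ordinary least squares
print("Estimated coefficients:", np.round(beta_hat, 3))

Replacing the placeholder matrix X with the fourteen survey variables listed in the Methods would reproduce the kind of estimation underlying Table 2.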
v3-fos-license
2018-01-20T17:23:29.327Z
2017-06-09T00:00:00.000
21635490
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10730-017-9325-4.pdf", "pdf_hash": "3986633ee71503b81c9cf8a8e80c7ae07f7c8131", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42215", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "sha1": "01cc93570b0d4d72f92252568b9ee40d7685622f", "year": 2017 }
pes2o/s2orc
Clinical Ethics Support for Healthcare Personnel: An Integrative Literature Review This study describes which clinical ethics approaches are available to support healthcare personnel in clinical practice in terms of their construction, functions and goals. Healthcare personnel frequently face ethically difficult situations in the course of their work and these issues cover a wide range of areas from prenatal care to end-of-life care. Although various forms of clinical ethics support have been developed, to our knowledge there is a lack of review studies describing which ethics support approaches are available, how they are constructed and their goals in supporting healthcare personnel in clinical practice. This study engages in an integrative literature review. We searched for peer-reviewed academic articles written in English between 2000 and 2016 using specific Mesh terms and manual keywords in CINAHL, MEDLINE and Psych INFO databases. In total, 54 articles worldwide described clinical ethics support approaches that include clinical ethics consultation, clinical ethics committees, moral case deliberation, ethics rounds, ethics discussion groups, and ethics reflection groups. Clinical ethics consultation and clinical ethics committees have various roles and functions in different countries. They can provide healthcare personnel with advice and recommendations regarding the best course of action. Moral case deliberation, ethics rounds, ethics discussion groups and ethics reflection groups support the idea that group reflection increases insight into ethical issues. Clinical ethics support in the form of a “bottom-up” perspective might give healthcare personnel opportunities to think and reflect more than a “top-down” perspective. A “bottom-up” approach leaves the healthcare personnel with the moral responsibility for their choice of action in clinical practice, while a “top-down” approach risks removing such moral responsibility. Introduction Healthcare personnel frequently face ethically difficult situations in the course of their work and these issues cover a wide range of areas in clinical practice (Å strom et al. 1995;Beauchamp and Childress 2009;Lindseth et al. 1994;Sørlie et al. 2000;Tabitha et al. 1979) and community home healthcare services (Karlsson et al. 2013). In such situations, healthcare personnel can experience unease or uncertainty (Cohen and Erickson 2006) over what is right or good to do, or there may be disagreement about what should be done. Moreover, some ethical issues can be connected to conflicting interests between healthcare workers and patients and their next-of kin (Beauchamp and Childress 2009;Rasoal et al. 2015); for example, situations where patients do not follow the recommendations of healthcare personnel, such as when patients and healthcare personnel have different opinions regarding what to do (Hermsen and van der Donk 2009;Slettebø and Bunch 2004), or issues that are related to ongoing life-sustaining treatment (Cassel 1984;Schaffer 2007;Silén et al. 2008). At times, healthcare personnel experience distress as a result of ethical issues in patient care (Kälvemark et al. 2004;Pauly et al. 2009). One way to support healthcare personnel in dealing with these ethical challenges has been through the development of clinical ethics support (CES). 
CES is defined as the formal or informal provision of advice and support to healthcare personnel on ethical issues arising from clinical practice and patient care within the healthcare setting (Owen 2001;Puntillo et al. 2001;Slowther et al. 2004a). CES is becoming more prevalent with the increased awareness worldwide of the importance of ethical issues in healthcare and with personnel encountering an increasing number of ethical issues in clinical practice (Bartholdson et al. 2015;Doran et al. 2015;Oberle and Hughes 2001;Ulrich et al. 2010). Philosophical papers and empirical research have led to the development of various approaches to CES that have the goal of supporting healthcare institutions, healthcare personnel, and patients as well as next-of-kin (Åström et al. 1995;Reiter-Theil and Hiddeman 2000). There are no universal norms regarding which approaches should be used to support healthcare personnel in clinical practice. CES approaches can roughly be divided into ''top-down'' or ''bottom-up'' perspectives, which can be contrasted in terms of the nature, purpose and goals of the support. Within ''top-down'' perspectives, an ethical consultant or a group of ''experts'' has an influential advisory role or act(s) as the primary ethical decision maker, providing advice or recommendations (Aulisio et al. 1998;Crigger 1995;La Puma and Schiedermayer 1991). Those supporting such an approach claim that the ethical issues in healthcare are too complex to be managed by the healthcare personnel themselves. In this vein, personnel facing ethical issues require specialist expertise in the same way that medical doctors need to consult with each other within different specialties (La Puma and Schiedermayer 1991). In contrast, in ''bottom-up'' approaches to CES, reflection begins with healthcare personnel's everyday experiences of ethical issues in clinical practice (Hansson 2002). The discussion is facilitated by an ethicist or philosopher, a ''facilitator'' who has the goal of fostering greater insight among the personnel into ethical considerations rather than focusing on decision-making in any particular case (Hansson 2002;Stolper et al. 2014). Adherents of ''bottom-up'' approaches claim that ethical issues need to be reflected on critically by the healthcare personnel themselves, since they are the only legitimate decision-makers and are morally responsible for the outcomes (Hansson 2002). The facilitator for such an approach is considered to lack the knowledge needed to give advice and make recommendations for the best course of action. The existence of such contrasting approaches leaves the question open regarding which approach can be ''the golden middle way'' to guide healthcare personnel in clinical practice. However, to our knowledge, there is a lack of integrative reviews regarding available approaches to ethics support and how different approaches help healthcare personnel deal with ethical issues. It is reasonable to believe that practitioners need some kind of CES reflection that relates to their personal experiences of everyday ethical issues. Therefore, in this paper, we aim to describe which clinical ethics support approaches are available to support healthcare personnel in clinical practice in terms of their construction, functions and goals. The systematic database search narrowed the results from the three databases to 231 articles. 
The titles, and when available the abstracts, were scrutinized by the authors in relation to the inclusion criteria, which resulted in the exclusion of 177 articles and the selection of 54 articles for further investigation. From the systematic search of all three databases, 54 articles were selected for further investigation. After checking the reference lists of the articles and citations, eight additional articles were found. Manual Search The first manual search using the search term ''clinical ethics support'' yielded 184 articles. The second manual search using the search term ''ethics support'' found 512 articles. After applying the inclusion criteria of English-language peer-reviewed articles, published in 2000-2016, the number of articles reduced to 247. Checking the reference lists and citations revealed 11 additional articles. Quality Appraisal In total, 320 articles from the systematic and manual searches, as well as additional articles that were identified by checking the reference lists and citations, were included for further investigation. Three of the authors read and appraised the articles by reading the titles, and when available the abstracts. We excluded duplicates, articles that did not match the inclusion criteria, editorials and review articles. After the appraisal of the 320 articles, 54 remained for further analysis. Full text was not available for three of the articles, and they had to be purchased. All the articles were discussed among all of the authors in order to reach agreement regarding the content in relation to the research aim. We used a quality assessment check of the included articles (SBU 2014). The included articles were both theoretical papers and empirical research that reflected on ethically difficult situations in health care and how to support health personnel from diverse cultures and countries worldwide. Data Analysis Empirical, qualitative and quantitative studies as well as theoretical papers with various approaches were included. First, the articles were sorted based on the CES approach. Second, a critical review of each article was performed, with particular attention given to the results and conclusions and their relation to the aim provided in the article. Notes were made regarding their content (Table 1). The analysis process was inspired by manifest content analysis (Graneheim and Lundman 2004).Third, the important parts of each article, such as approach, aim, method, results and conclusion, were written up into a matrix (Garrard 2010). Fourth, the results and conclusions of each qualitative, quantitative and theoretical paper were imported into to a new sheet in a word processor so they could be coded. Fifth, the first author performed descriptive coding of the articles' results and conclusions. Sixth, based on content similarities and differences among the similar approaches, descriptive and manifest categories emerged from the codes. Finally, the results of Clinical ethics consultation Theoretical paper To describe the evolution of an ethics consultation service at a metro medical center in an urban public hospital, its struggle to thrive, and subsequent revitalization Ethics consultation utilized a service that increased fourfold over a three-year period, a usage rate maintained since. A key step was its use of an adaptive small-team approach including an ethics consult-care team meeting. 
These meetings often result in either (1) the dissolution of apparent ethical conflict or uncertainty as lines of communication are opened or (2) clarity on the part of the care team members regarding the next steps they must take in order to address the ethical issues under discussion. The assumption that the consultant does not need specific competencies aside from general knowledge and skills has been rejected by the American Society for Bioethics and Humanities. Ethics consultation is a distinctive services that responds to a specific request for assistance, focuses on addressing uncertainty or conflict regarding value-laden concerns and addresses those value-laden concerns through ''ethics facilitation''. Those designated to perform the role should have the requisite competencies to address the question or concern appropriately in health care consultation. Rasmussen (2011 Rasmussen (2016). Clinical Ethics consultants are not ''ethics'' experts-but they do have expertise. Clinical ethics consultation Theoretical paper To describe clinical ethics consultation and their expertise concerning the right moral answer Clinical ethics consultation is substantive, which requires a kind of training that other professions undergo, but that is not normatively binding. Opponents of CEC and moral expertise may essentially be objecting to the idea of people who profess to have the right answer in moral situations, because: (1) they hold that there is no such objectively verifiable thing, and (2) this society respects and protects autonomous moral decision-making more highly than correct moral decision-making. CECs are the only, or indeed the most desirable model for the provision of ethics support and guidance in clinical practice Provision of clinical ethics support may include consideration of individual cases, or debate on the ethical issues they raise; the education of health professionals on such issues; and ethical input into trust policy and guidance. It is accepted that these functions require the identification and analysis of ethical problems within a legal framework, if criticisms of lack of 'due process' are to be addressed. Since ethical support may be provided by individuals, small groups or committees, the core competencies identified are to be considered as ''collective'' in their application to a particular committee or group. and principlism To analyze two methodologies: the ''dilemmatic'' and the ''problematic'' It is easier to reason than to deliberate. Deliberation is a difficult task and it requires many conditions, such as: lack of external constraints, good will, capacity to give reasons, respect for others when they disagree, an ability to listen, disposition to influence and to be influenced by arguments, and a desire to understand, cooperate and collaborate. This is the framework of a true deliberation process. Deliberation rests not on ''decision'' but on ''commitment.'' Within this framework, almost all existing bioethical methods can be useful to some extent. Molewijk et al. (2008a). Teaching ethics in the clinic. The theory and practice of moral case deliberation. (d) to present the implementation process The results showed that the moral case deliberations, the role of the ethics facilitator, and the train-the-facilitator program were regarded as useful and were evaluated as (very) positive. Healthcare professionals reported that they improved their moral competencies. 
They have developed skills to reflect on their work, and to create an atmosphere of dialogue instead of discussion and debate. Molewijk et al. (2008b). Implementing moral case deliberation in Dutch healthcare-Improving moral competency for professionals and quality of care. with an emotion is finding the right middle ground between being overwhelmed and remaining untouched. Moral case deliberation can provide tools for dealing with emotions in clinical practice. This is not just a matter of rationally determining a balance. One has to be able to act in line with the right middle, and embody the appropriate attitude. Dealing with emotions is a matter of virtue and character. To test the assumption that enhanced ethical competence would help to decrease reported moral distress, a prospective controlled study was set up Ethical competence is a key factor in preventing or reducing moral distress. The results show that generally, there were differences in levels of moral distress between pharmacies and hospital departments. Ethics rounds may be seen as opportunities for ethical discourse, where participants jointly explore their own personal sets of values and seek to balance these with professional value sets. The ethics rounds method was also developed to strengthen the organizations' ethical dimension. Svantesson et al. (2008 relief. Negative experiences were associated with a sense of unconcern and alienation, as well as frustration with the lack of solutions and a sense of resignation that change is not possible. In assisting healthcare professionals to learn a way through ethical problems in patient care, a balance should be found between ethical analyses, conflict resolution and problem solving. the primary healthcare workers feel, how important healthcare workers in primary care think it is to better deal with these challenges and what kind of ethics support they want The majority of primary healthcare workers in this study reported that they experience ethical challenges in their work. These challenges were closely related to professional and organizational circumstances, with the lack of resources, e.g., lack of staff and competence being the most prominent. The findings showed that the healthcare workers' values clash with what they see themselves doing in their practice, such as hiding medication in food, tying patients to the chair or using force to clean the patient. These are the issues that are given less attention than, e.g., ethical challenges related to end of life. Magelssen et al. (2016). Ethics support in community care makes a difference for practice. Norway. Nursing Ethics Ethics support Quantitative, online questionnaires n = 2. Responses in total n = 354 To study outcomes of ethics activities and examine which factors promote or inhibit significance and sustainability of activities The participants of this study found the ethics project to be highly significant for their daily professional practice. Outcomes include better handling of ethical challenges, better employee cooperation, better service quality, and better relations with patients and next of kin. Factors associated with sustainability and/or significance of the activities were sufficient support from stakeholders, sufficient available time, and ethics facilitators having sufficient knowledge and skills in ethics and access to supervision. 
The facilitators who are responsible for the activities must receive sufficient follow-up and training in ethics deliberation methods and relevant topics in health care ethics. how it is interwoven with practice A threefold account of the relationship between theory and practice based on narrative and hermeneutical approaches were discussed. The relationship between theory and practice took the form of a ''hermeneutic circle.'' Using theories to interpret experiences makes theoretical concepts clearer. It indicates our basic attitudes to our daily work by summarizing: (1) that we acknowledge our dependencies and responsibilities within the social sphere, and (2) that we believe that all human identities are constructed by means of narratives as (3) we perceive human beings as story-telling agents. In addition, (4) we emphasized our focus on fostering mutual understanding; (5) we acknowledge that understanding is mediated by language, words and concepts; and (6) we opt for taking personal and professional experiences seriously, making them accessible in dialogues, and learn from each other in changing perspectives. Schildmann et al. (2013) (Sandelowski et al. 2006), such as different approaches to CES. Results The results revealed four CES approaches that are available to support healthcare personnel who are dealing with ethical issues (Table 1). They comprised: clinical ethics consultation, clinical ethics committees, moral case deliberation, and ethics rounds/ethics discussion groups/ethics reflection groups, which we have combined together due to the similarity of their form and content. Although CES can be categorized into four main approaches, it is important to point out that due to a lack of firm definitions, it is difficult to draw distinct lines between them, which results in some overlap of the boundaries. Clinical Ethics Consultation Clinical ethics consultation is defined as a set of services that generally occurs following requests from healthcare personnel, patients or their surrogates (Aulisio et al. 2000). It can also be performed routinely by a permanent body such as a hospital ethics committee (Reiter-Theil 2000; Tomazic et al. 2004). The consultation is provided by an individual or a small team of individuals in response to ethical issues (Adams 2009;Aulisio et al. 2000; Tarzian and ASBH Core Competencies Update Task Force 2013). Those who provide consultations have various professions, such as physicians, nurses, social workers or members of the clergy (McClimans et al. 2016). It is argued that the person(s) who provide the consultations are required to have certain skills and competencies in ethics, in order to support healthcare personnel in dealing with ethical problems (Aulisio et al. 2000). Ethics consultations have been shown to help patients and personnel clarify ethical problems arising in daily health care practices and to improve collaborative decision-making (Fox et al. 2007; Tarzian and ASBH Core Competencies Update Task Force 2013). Ethics consultations may have the goal of improving quality of care for the patient and/or for solving certain aspects of ethical conflicts that occur between healthcare personnel, patients and next-of-kin (Aulisio et al. 2000;Paola and Walker 2006). Beside requests concerning specific patient cases, ethical consultation services can provide educational activities in order to increase awareness concerning ethics in the clinic (Fukuyama et al. 2008) or to help deal with moral distress (McClimans et al. 2016). 
In the US, there has been a movement to certify ethics consultants to assure that they possess key knowledge and skill competencies (Tarzian and ASBH Core Competencies Update Task Force 2013). Ethics consultants should possess a range of knowledge competencies that includes moral reasoning and ethical theory, relevant ethical codes, health law and local policies, and knowledge regarding the clinical context and staff and patient perspectives (Tomazic et al. 2004). In terms of skills, ethics consultants should have the ability and interpersonal skills to assess the nature of the ethical conflict by drawing on relevant ethics knowledge and ''process'' skills required to conduct clinical ethics consultation services effectively. In addition, a code of ethics has been developed by the American Society for Bioethics and Humanities, which identifies a set of professional responsibilities for those engaged in healthcare ethics consultation (Tarzian et al. 2015). Ethics consultation services are multifaceted. There is no agreement regarding their core role worldwide and they vary in role and function depending on the country. For example, in Japan, ethics consultation services may prioritize the review of scientific and clinical research (Fukuyama et al. 2008) before case analysis and patient consultation (Adams 2009;Aulisio et al. 2009; Tarzian and ASBH Core Competencies Update Task Force 2013). Ethical consultation services can be used in specific ways, such as in response to requests for assistance in addressing uncertainty or conflict regarding a value-laden conflict of interest (Aulisio et al. 2000;Paola and Walker 2006). This can be between various stakeholders, such as patients, next-of-kin, healthcare personnel or the health organization (Adams 2009; Tarzian and ASBH Core Competencies Update Task Force 2013). The role of ethical consultation may be less specific, such as when consultations are triggered by the institution in order to educate health personnel in how to deal with moral distress, to improve ethical and moral qualities of decisionmaking and actions (McClimans et al. 2016), or to review research protocols (Fukuyama et al. 2008). Some ethics consultant(s), (depending on the country) even have the authority to make decisions or give advice/recommendations, whether alone or in agreement with next-of-kin or healthcare staff, as to the best course of action. The idea that an ethicist/consultant with specific knowledge can assume the role of ethics expert and make judgments in ethically difficult situations has been supported by some (Aulisio et al. 2000; Tarzian and ASBH Core Competencies Update Task Force 2013). It has been criticized by others, who argue that while there is expertise in ethics, there is no such thing as an ethics expert (Adams 2009;Rasmussen 2011Rasmussen , 2016. Regardless of the contrasting positions described above, the approach of ethics consultation remains authoritarian, because while the consultation process is triggered by health personnel requesting a consultation, it is the consultants who have the authority and power (as a result of their position) to interpret the clinical ethics case (Agich 2001). Clinical Ethics Committees A clinical ethics committee is typically a standing committee which functions as an independent institution or authority to provide a formal mechanism for dealing with ethical issues in clinical settings (Akabayashi et al. 2008;Aulisio and Arnold 2008). 
Generally, the members of clinical ethics committees have various professional backgrounds such as: bioethicists/ethics consultants, clergy, social workers, lawyers, nurses, physicians, psychologists, therapists and community representatives (Akabayashi et al. 2008;Schick and Guo 2001). The goals and responsibilities of the clinical ethics committee are to protect the rights, safety and well-being of the patient in the health care setting or human subjects in research projects (Borovecki et al. 2010;Gaudine et al. 2010;Slowther et al. 2011). In addition, they are to identify and analyze ethical issues in clinical practice (Larcher et al. 2010;Slowther et al. 2001), promote training and education of health personnel, and provide guidance upon request (Caminiti et al. 2011). They commonly respond to requests to address ethical issues related to ongoing as well as retrospective patient cases; identify ethical needs within clinical settings; support healthcare personnel, patients and next-of-kin (Førde and Pedersen 2011), find agreements and make decisions; and review research protocols (Fukuyama et al. 2008;Gaudine et al. 2010). Clinical ethics committees are involved in responding to ethical issues, such as informed consent to treatment (Borovecki et al. 2010;Caminiti et al. 2011). Sometimes they provide decision-making support (Pedersen et al. 2009) in end-of-life situations or the continuation of life support (Slowther et al. 2004b). Clinical ethics committees can also provide education, seminars, workshops and training in ethics for hospital employees (Borovecki et al. 2010;Caminiti et al. 2011;Pedersen et al. 2009). They can review research protocols (except in the United States, where separate committees deal with this process), provide ethical input into hospital policies (Pedersen et al. 2009) and create guidelines (Slowther et al. 2001). Additionally, they can give an ''expert'' opinion regarding issues, such as the provision of treatment against a patient's will and the disclosure of medical information against a patient's wishes when it might be deemed necessary (Wenger et al. 2002). Clinical ethics committees promote an ethical dimension to health care and generate possibilities for improvement in care quality (Caminiti et al. 2011;Czarkowski et al. 2015). There is no formal legal or regulatory governing framework for clinical ethics committees, which is in contrast to research ethics committees worldwide (Larcher et al. 2010;Slowther et al. 2004b;Wenger et al. 2002). Clinical ethics committees vary in function, structure and goals worldwide, but there are some commonalities in terms of the provision of advice (Slowther et al. 2001) and recommendations concerning the best course of action or discussions that lead to a good decisionmaking process (Schick and Guo 2001;Slowther et al. 2004b). Some committees themselves assume the responsibility to develop and improve guidelines and policies regarding prospective ethical challenges in clinical practice, and others are mandated by the health care institutions to do so (Aulisio and Arnold 2008;Slowther et al. 2001). Clinical ethics committees seem to have formal authority and legitimacy (without generalizing) to provide advice and recommendations concerning ethical issues arising in healthcare institutions (Slowther et al. 2004b). Moral Case Deliberation The approach of Moral Case Deliberation (MCD) has been described in several ways. It is said to consist of a collaborative, systematic reflection on real clinical cases (Molewijk et al. 
2008a, b;Weidema et al. 2012); methodological reflection on concrete cases among healthcare professionals; and facilitator-led collective dialogue (Gracia 2001) of healthcare personnel who reflect on a concrete moral question connected to real cases in their practice (Janssens et al. 2014;Molewijk et al. 2008a, b;Weidema et al. 2012). The goal of MCD is to support healthcare personnel to manage ethically difficult situations in their clinical practice (Svantesson et al. 2014), and to enhance ethical reflection among healthcare personnel concerning ethical issues and thus improve the quality of patient care. In other words, MCD can help deal with concrete problems and help train healthcare personnel so that they improve their ethical competencies. MCD is a ''deliberationist'' approach to ethical issues that supports the idea that reflection over ethically difficult situations is vital, and aims to make health personnel aware of ethical issues as well as related theories and how they might be applied in practice . It supports health personnel in managing ethical issues and making independent decisions from the standpoint that health personnel are entitled to make decisions about how to deal with issues in clinical practice (Molewijk et al. 2008a, b). During an MCD session, which can last from 45 minutes to one day and is led by an external facilitator, participants reflect individually and collectively about the moral aspects of a particular patient case (Molewijk et al. 2011a). The collective group discussion is facilitated by an ethicist or someone trained in some kind of conversation method relevant to ethics (Molewijk et al. 2008a, b;Svantesson et al. 2014;Weidema et al. 2012). MCD sessions are led by a facilitator who has no authority to decide on or to provide/recommend suggestions concerning the best course of action (Molewijk et al. 2008a, b;Weidema et al. 2012). The facilitator's main role is only to stimulate an ethical discussion (mutual reflection) and to illuminate the ethical aspects of the case Gracia 2001). In an MCD session, health personnel take the initiative to discuss a patient's case that they have found ethically difficult to manage (Molewijk et al. 2008a, b;Molewijk et al. 2011a). Each person participates in the MCD on equal terms regardless of his or her job title and all voices are to be listened to and respected. During the MCD session different kinds of emotion, e.g., frustration, anger, sadness can be expressed (Molewijk et al. 2011a). MCD is similar in many ways to ethics rounds (described below), but it is distinctive because it uses theoretically based conversation methods (Janssens et al. 2014), such as the Dilemma method or Socratic Dialogue (Svantesson et al. 2014;Weidema et al. 2012). The Dilemma method has been used in the Netherlands with the goal of helping healthcare personnel seek consensus regarding ethical issues (Molewijk et al. 2008a(Molewijk et al. , b, 2011aWeidema et al. 2012). In contrast, Socratic Dialogue aims to help healthcare personnel develop ethical skills and a reflective attitude towards the ethical issues they experience in their everyday clinical practice (Molewijk et al. 2008a, b). Ethics Rounds/Ethics Discussion Groups/Ethics Reflection Groups These three approaches overlap with each other to some extent since they have commonalities in terms of how they are constructed as well as in their functions and goals. 
Ethics rounds are a form of facilitator/ethicist-led reflection, which involves discussion of a particular patient's medical and ethical issues (Grönlund et al. 2016;Silén et al. 2014;Svantesson et al. 2008). During ethics rounds, healthcare personnel from different disciplines reflect over a patient's case that they are finding ethically difficult to resolve (MacRae et al. 2005;Svantesson et al. 2008), or over ethical challenges related to professional and organizational circumstances (Grönlund et al. 2016;Lillemoen and Pedersen 2012). The goal of the ethics round is to stimulate ethical reflection and promote mutual understanding between professional groups (MacRae et al. 2005;Silén et al. 2014), particularly through listening to each other's perspectives (Svantesson et al. 2008). Ethics rounds have been described as supporting healthcare personnel develop ethical competencies and gain insight into ethical issues (Silén et al. 2014;Sporrong et al. 2007). They help healthcare personnel to examine their own views and obtain a better awareness and understanding of their colleagues' ways of thinking and acting (Grönlund et al. 2016;Sporrong et al. 2007). Other kinds of ethics support alongside ethics rounds are ethics discussion groups and ethics reflection groups, which have been used by healthcare personnel in nursing homes to discuss issues regarding end-of-life care, lack of resources and coercion (Bollig et al. 2015;Lillemoen and Pedersen 2012). Ethics discussion groups have also been used to improve the work climate and job satisfaction among nursing staff (Forsgärde et al. 2000), while ethics reflection groups have been described as an approach in which healthcare personnel sit together and reflect over ethical challenges in their daily practice (Lillemoen and Pedersen 2015). Other related kinds of ethics support have been used to train and educate healthcare personnel to acquire skills needed to respond to ethical issues (Dörries et al. 2010;MacRae et al. 2005). Promotion of ethics support in health care can be fostered by focusing on the needs of the healthcare institution and using ethical theory to interpret the experiences of the healthcare personnel's everyday work (Dauwerse et al. 2011;Porz et al. 2011). Ethics support facilitated by a person with knowledge in ethics has been shown to help staff handle ethical challenges, improve cooperation between employees (Magelssen et al. 2016), increase healthcare personnel's awareness of ethical aspects, and improve relations between staff and patients/next-of-kin (Magelssen et al. 2016;Schlairet et al. 2012). To ensure that ethics support is improving the quality of care, evaluation is essential (Schildmann et al. 2013). These approaches to ethics support are all characterized by the involvement of inter-professional healthcare teams in collective group discussions to encourage reflection over ethical issues that occur in clinical practice from the perspectives of the healthcare personnel themselves (Schlairet et al. 2012). A bioethicist or facilitator has the role of creating an equal atmosphere for everyone during the ethics rounds/discussion (MacRae et al. 2005). It is common that the bioethicist/facilitator receives some information concerning the case in advance, or asks the participants to prepare a case or an ethical issue to reflect over (Grönlund et al. 2016). 
The bioethicist/facilitator has no authority and does not act as an expert in ethics, nor do they provide advice about what to do, but only stimulates group discussion (Magelssen et al. 2016;Silén et al. 2014). One fundamental idea with these kinds of discussions is that the personnel may stimulate each other, enhance critical thinking (Svantesson et al. 2008), or change their attitudes regarding the situations. These discussions have the goal of helping personnel deal with moral distress (Sporrong et al. 2007), getting them to help each other and to cooperate in order to find alternative ways of handling situations, and to ultimately improve the quality of patient care (Dauwerse et al. 2011;Janssens et al. 2014). In these types of ethics support, healthcare personnel are considered to be qualified (Dörries et al. 2010) and legitimate decision-makers in regard to ethically difficult situations. Ultimately, making decisions or reaching a consensus regarding the best course of action in a particular case remains with the healthcare personnel (Silén et al. 2014;Sporrong et al. 2007). Discussion The included articles (n = 54) covered a range of clinical ethics support. The study aim was to describe which clinical ethics support approaches are available to support healthcare personnel in clinical practice in terms of their construction, functions and goals. The existing literature clearly demonstrates the increased worldwide interest for clinical ethics support. There are similarities and differences among the established approaches. In the first approach, clinical ethics consultation, the ethicist assists the healthcare personnel with patient cases where there could be issues regarding patient autonomy, informed consent, confidentiality, and surrogate decision making (Aulisio et al. 1998(Aulisio et al. , 2000. Ethics consultation can sometimes be recognized as having an authoritarian ''top-down'' perspective (Adams 2013;Agich 1995;Aulisio et al. 1998), especially if the outcomes of the consultation are not beneficial for the patient or the healthcare personnel. Here the consultant has an influential advisory role and may propose solutions, or act as the primary ethical decision maker with respect to the outcome and the process (Aulisio et al. 2000;La Puma and Schiedermayer 1991). The second approach utilizes clinical ethics committees; it has many similarities to clinical ethics consultation. In this approach, instead of an individual person with expertise in ethics, a group of ''experts'' assists healthcare personnel by providing advice or recommendations from an ''expert'' point-of-view (Dauwerse 2013;Hoffman 1991). Traditionally, both clinical ethics consultation and clinical ethics committees have focused on providing advice or recommendations to healthcare personnel in clinical practice (Cranford and Doudera 1984). In addition, clinical ethics committees have been involved in supporting patients and their next-of-kin. Sometimes patient representatives are included in ethics committees (Førde and Pedersen 2011) when discussing patient cases. However, there seems to be variation in function and role regarding ethics support in ethics consultation and ethics committees both between countries and even within them. There is still no clear universal consensus of what ethics consultation should or should not provide during a case consultation. 
In a third approach to CES, MCD is sometimes described as a ''bottom-up'' approach, in which reflection starts from the healthcare personnel's experiences of everyday ethical issues related to clinical practice (Molewijk et al. 2008a, b;Spijkerboer et al. 2016). This approach seeks to increase the healthcare personnel's insight into their moral responsibility (Svantesson et al. 2008) and to broaden perspectives through reflection. MCD differs from clinical ethics consultation and clinical ethics committees in that it fosters dialogue on ethical questions and reflection on ethical dilemmas (Dauwerse 2013;Stolper et al. 2014) rather than on decision-making in ethically difficult situations. The reflection in MCD is usually guided by a facilitator (Molewijk et al. 2011a), who is trained in various conversation methods, e.g., the Dilemma method and Socratic Dialogue (Molewijk et al. 2008a, b;Plantinga et al. 2012). The facilitator in the MCD does not claim be an ''expert'' in ethics, but merely stimulates healthcare personnel by asking questions about the case like Socrates did in his time (Stolper et al. 2014). Finally, there are ethics rounds/ethics discussion groups/ethics reflection groups that are closely related to each other in how they are facilitated and the content they discuss. For this reason, we have chosen to discuss all three together as one approach. This fourth approach is also closely related to MCD. In a previous study, MCD was used as an umbrella term for ethics rounds/ethics discussion groups and ethics reflection groups (Svantesson et al. 2014). Previous studies described the group reflections as being positively regarded by healthcare personnel (van der Dam et al. 2011Dam et al. , 2013Verkerk et al. 2004). It was said to increase their awareness concerning ethically difficult situations they experience in their clinical practice (Lillemoen and Pedersen 2015). It also increased job satisfaction and was associated with lower burnout rates, while workplaces without reflection remained unchanged. Reflection is the act of sharing experiences and narratives in which the person is receptive for personal development. It has also been described as a process of learning and representation (Moon 2013). A reflective conversation for healthcare personnel in practice is fundamental in order to provide quality services (Schon 2003). The primary purpose of reflection is not to solve an issue arising in everyday practice, but to increase awareness of the various aspects of the issue. A positive side-effect of reflection could be that it leads to the ability to solve an issue that occurs. Reflection supports the idea of an enhanced critical thinking process. According to Dewey (1933), the purpose of reflection is to process knowledge in order to get a deeper understanding of a phenomenon. In health care, the personnel reflect on ethically difficult situations in order to learn more of what it means to act or not act in a certain way. Therefore, clinical ethics support in the form of reflection is vital for personnel working in health care settings. Personnel benefit from approaches that can create an atmosphere where they can have the freedom to express their feelings and emotions related to a case they are struggling with (Molewijk et al. 2011b). Clinical ethics support from a ''bottom-up'' perspective might give healthcare professionals opportunities to think and reflect on issues they are facing in their everyday work. 
A dominant ''top-down'' perspective could be a less risky approach if, and only if, it removes ethical responsibility from the healthcare personnel (Agich 1995;Hansson 2002). For example, if a consultant makes a decision, or gives advice or a recommendation that is not beneficial for either the patient or the personnel, but only beneficial from an economical perspective. If later on the consequences of that decision or advice/recommendation proved detrimental to the patient, the healthcare personnel involved could free themselves from guilt by placing the blame on the consultant. If a decision or recommendation was based on a ''bottom-up'' approach that involves the reflections of the healthcare personnel, they would need to assume greater ethical responsibility and perhaps wish to reflect more in such situations. Consequently, the status of a professional ''expert'' in ethics might lead to a risk, an undermining, or a challenge to the healthcare personnel's personal autonomy, e.g., a limitation on their autonomy when dealing with ethical issues. According to Schon (2003), professional practitioners are specialists that encounter certain types of situations again and again in their daily work. They learn what to look for and how to respond to those particular types of situations (Schon 2003). Even though many ethically difficult situations are unique, repeating patterns can be found. Therefore, the ethical responsibility and choice of what to do should remain with the healthcare personnel in clinical practice (Hansson 2002). To permit someone from the outside to make a decision or give a recommendation in a particular situation could be risky (Hansson 2002). Strengths and Limitations A strength of this study is that the conclusions are based on literature that allowed an analysis of the established clinical ethics support approaches worldwide. In this integrative review, both qualitative and quantitative studies as well as theoretical papers were included. An integrated review allows the inclusion of several methodologies and can take into account a broader range of studies to develop a more comprehensive understanding of a phenomenon (Whittemore and Knafl 2005). Another strength is that we have searched data systematically with Mesh terms as well as manually. There are limitations as well. It was difficult to decide which articles to exclude since there are no real definitions as to what clinical ethics consultation really is. We included articles that described a clinical ethics support approach empirically, or theoretical papers that discussed an established approach. It was difficult to decide where to draw the line between approaches aimed only at supporting healthcare personnel with ethically difficult situations and approaches that support healthcare personnel as well as patients and next-of-kin. For example, some clinical ethics committees do support the patients and their next-of-kin, while other clinical ethics committees do not involve patients and next-of-kin in their annual meetings or when considering different situations. Even though there are similarities in the ''top-down'' perspective of the clinical ethics consultation and clinical ethics committees, it does not mean they are similar worldwide. Clinical ethics consultation and clinical ethics committees in different countries can differ in their role and function as well. 
In addition, in this review we considered only English-language papers and it is possible that other methods of ethics support have been developed in other language cultures. Future studies need to focus on defining and characterizing what clinical ethics support actually is, how it should function and if it should function differently in different countries; and if so, what are the possible reasons since many of the ethical issues are similar globally. Lastly, an additional interesting avenue for future research would be to perform a large-scale study of what types of ethical support practitioners are receiving, if they are satisfied with it and their perceptions regarding any effects on patient outcomes. Conclusions and Implications Clinical ethics support in the form of reflection is an important approach in order to deal with ethical challenges in health care settings. Traditionally, clinical ethics committees and ethics consultations have been the two approaches that are sometimes recognized (without generalizing) as having a focus on providing suggestions or recommendations to healthcare personnel from an expert point-ofview, and often with a basis in different medical principles or theories. Moral Case Deliberation (MCD) and ethics rounds/ethics discussion groups/ethics reflection groups seem to focus on fostering dialogue on ethical questions and ethical inquiries that are brought up by the healthcare personnel. It has been argued that approaches based on reflection may generate insight (Moon 2013). In MCD for example, reflection from a ''bottom-up'' perspective may support healthcare personnel to process their own thoughts and feelings. This might further help them become aware of other aspects they were previously unaware of in ethically difficult situations and how to deal with them in clinical practice. MCD is an explicit form of clinical ethics support that gives professionals support to improve their moral reflection skills . Reflection in groups might in the long-term help healthcare personnel discover their own way of dealing with ethically difficult situations in order to act in the best interest of the patient. To summarize, clinical ethics support from a ''bottom-up'' perspective might provide healthcare personnel with opportunities to think and reflect more than from a ''top-down'' perspective. While a ''bottom-up'' approach leaves healthcare personnel with the moral responsibility for their choice of action in clinical practice, a ''top-down'' approach risks removing that responsibility.
Isolation and Identification of Bioactive Compounds from Streptomyces actinomycinicus PJ85 and Their In Vitro Antimicrobial Activities against Methicillin-Resistant Staphylococcus aureus

Antibiotic-resistant strains are a global health-threatening problem. Drug-resistant microbes have compromised the control of infectious diseases. Therefore, the search for a novel class of antibiotic drugs is necessary. Streptomycetes have been described as the richest source of bioactive compounds, including antibiotics. This study aimed to characterize the antibacterial compounds of Streptomyces sp. PJ85, isolated from dry dipterocarp forest soil in Northeast Thailand. The 16S rRNA gene sequence and phylogenetic analysis showed that PJ85 possessed a high similarity to Streptomyces actinomycinicus RCU-197T of 98.90%. The PJ85 strain was shown to produce antibacterial compounds that were active against Gram-positive bacteria, including methicillin-resistant Staphylococcus aureus (MRSA). The active compounds of PJ85 were extracted and purified using silica gel column chromatography. Two active antibacterial compounds, compound 1 and compound PJ85_F39, were purified and characterized with spectroscopy, including liquid chromatography and mass spectrometry (LC-MS). Compound 1 was identified as actinomycin D, and compound PJ85_F39 was identified as dihomo-γ-linolenic acid (DGLA). To the best of our knowledge, this is the first report of the purification and characterization of the antibacterial compounds of S. actinomycinicus.

Introduction

Ehrlich and Sata introduced arsphenamine (salvarsan) in 1910, and it is considered to be the first synthetic antibiotic [1]. It was widely used against syphilis and trypanosomiasis [1]. However, the first naturally occurring antibiotic, penicillin, was discovered by Alexander Fleming in 1928 [2,3]. Penicillin was successfully used to control bacterial infections during World War II [4]. Since then, many antibiotics have been discovered and administered to humans and animals for therapy and prophylaxis [5]. The misuse and widespread use of antibiotic drugs have led to the emergence of resistant pathogenic microorganisms [6,7]. Antibiotic resistance emerged in early 1942, when Staphylococcus aureus became resistant to penicillin [6]. In 1960, methicillin was then introduced to treat penicillin-resistant S. aureus.

Identification and Characterization of PJ85 Strain

According to a previous study, a total of 123 bacterial soil isolates were obtained from dry dipterocarp forest soil around Suranaree University of Technology, Nakhon Ratchasima, Thailand (14.8729° N, 102.0237° E) [26]. The isolates were tested for antimicrobial activity against test pathogens, including Gram-positive and Gram-negative bacteria. The results showed that the PJ85 strain exhibited high antibacterial activity against the test pathogens. Therefore, the PJ85 strain obtained from that previous study was used in the present work [26]. The PJ85 strain is Gram-positive, aerobic, and filamentous in nature. Morphological observations of a 14-day-old culture grown on an ISP-2 agar medium revealed the rich growth of aerial and vegetative hyphae. The colors of the aerial and vegetative hyphae were light and strong yellow, respectively. The PJ85 strain also produced a strong yellow diffusible pigment after incubation for 14 days at 37 °C (Supplementary Figure S1, available in the online Supplementary Materials).
The 16S rRNA gene of PJ85 was sequenced and compared with reference sequences from the EzBioCloud database (https://www.ezbiocloud.net, accessed on 9 November 2022). The results revealed that the 16S rRNA gene sequence of the PJ85 strain (1523 nt) was closely related to members of the Streptomyces genus. The PJ85 strain shared the highest 16S rRNA gene sequence similarity with S. actinomycinicus RCU-197 T (99.86%), Streptomyces echinatus NBRC12763 T (98.74%), and Streptomyces graminisoli JR-19 T (98.48%). The sequence of the 16S rRNA gene of PJ85 was submitted to GenBank under accession number MK580459. In the neighbor-joining phylogenetic tree based on the 16S rRNA gene sequences, PJ85 formed a clade with its closest relative strain obtained from the EzBioCloud database. The PJ85 strain shared a node with S. actinomycinicus RCU-197 T, with a bootstrap value of 100% (Figure 1). Therefore, PJ85 may be closely related to S. actinomycinicus.

The maximum inhibition zones (mm ± SD) of the antibacterial activity of Streptomyces sp. PJ85 were found against MRSA DMST20651 (50.00 ± 0.00), followed by S. epidermidis TISTR518 (48.33 ± 2.89), S. aureus ATCC29213 (46.67 ± 0.58), B. subtilis TISTR008 (45.00 ± 3.00), and B. cereus TISTR687 (38.33 ± 1.15). The antibacterial activity of Streptomyces sp. PJ85 against the test pathogens according to the perpendicular streak method is shown in Table 1. According to our results, the zone of inhibition of the antibacterial activity of PJ85 against MRSA DMST20651 was significantly larger than that against B. subtilis TISTR008 and B. cereus TISTR687 (p < 0.05). Although the antibacterial activity of PJ85 against MRSA DMST20651 was not statistically different from that against S. aureus ATCC29213 and S. epidermidis TISTR518 (p > 0.05), the PJ85 strain exhibited a larger zone of inhibition against MRSA than against S. aureus and S. epidermidis (Table 1). (Note to Table 1: data are presented as mean ± standard deviation, n = 3; different superscript letters a, b, and c denote significant differences according to the LSD test, p < 0.05.)
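As a side note on the sequence-similarity figures quoted above, the percent-identity values returned by a server such as EzBioCloud are, at their core, pairwise identity calculations over aligned sequences. The short Python sketch below illustrates only that underlying calculation; the sequence fragments are placeholders rather than actual PJ85 or RCU-197 reads, and the real comparison was performed on the EzBioCloud server.

```python
# Minimal sketch: percent identity between two pre-aligned 16S rRNA sequences.
# The study used the EzBioCloud server; this only illustrates the underlying
# calculation. The sequences below are short placeholders, not real data.

def percent_identity(seq_a: str, seq_b: str) -> float:
    """Identity over aligned positions where neither sequence has a gap."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    compared = matches = 0
    for a, b in zip(seq_a.upper(), seq_b.upper()):
        if a == "-" or b == "-":      # skip alignment gaps
            continue
        compared += 1
        if a == b:
            matches += 1
    return 100.0 * matches / compared

if __name__ == "__main__":
    pj85_fragment   = "AGAGTTTGATCCTGGCTCAGGACGAACGCTGGCGGCGTGCTTAACACATGCAAGTCGAACG"
    rcu197_fragment = "AGAGTTTGATCCTGGCTCAGGACGAACGCTGGCGGCGTGCTTAACACATGCAAGTCGAACG"
    print(f"identity: {percent_identity(pj85_fragment, rcu197_fragment):.2f}%")
```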
Incubation Temperature, Period Affect Growth, and Antibacterial Activity of PJ85

The cultural temperature for the growth and antibacterial activity of PJ85 was studied on an ISP-2 medium because the PJ85 culture on ISP-2 exhibited the highest antibacterial activity compared with other media (data not shown). In order to study the effect of incubation temperature on growth and antibacterial activity, PJ85 was incubated at two temperatures, 30 °C and 37 °C. The fermented broth of PJ85 was collected every day for 14 days. The cell biomass and cell-free supernatant were separated by filtration. The cell pellet of PJ85 was dried to obtain the dry cell weight, while the cell-free supernatant was used to prepare a crude extract of ethyl acetate. The results showed that the maximum growth was observed at 30 °C on day 4 of cultivation, with a biomass yield of 4.40 ± 0.25 mg/mL (Figure 2). There was a statistically significant difference between incubation temperature (30 °C and 37 °C) and incubation period on the growth of PJ85 (Figure 2; p < 0.0001). On individual days of incubation, the cell biomass of PJ85 cultured at 30 °C was significantly higher than that cultured at 37 °C on days 4, 5, 6, 7, 8, 10, and 11 (Figure 2). The extracts were then used to determine antibacterial activity using the disc diffusion method. The results revealed that the antibacterial activity of the PJ85 crude extract against Gram-positive bacteria was highest around day 5. The antibacterial activity was stable from days 5 to 9 of cultivation and began to decrease at day 10 (Figure 3). A two-way ANOVA with post hoc Bonferroni correction revealed a significant difference between incubation temperature (30 °C and 37 °C) and time on the antibacterial activity of the crude extract of PJ85 against MRSA DMST20651, S. aureus ATCC29213, B. subtilis TISTR008, and B. cereus TISTR687 (Figure 3; p < 0.0001). The antibacterial activity against MRSA DMST20651, S. aureus ATCC29213, and S. epidermidis TISTR518 of the crude extract of PJ85 cultivated at 37 °C was significantly higher than that at 30 °C on day 1 (Figure 3A-C). Bonferroni multiple comparisons showed that the crude extract of PJ85 grown at 37 °C had a significantly higher antibacterial activity than that at 30 °C against B. subtilis TISTR008 (days 5-10; Figure 3D) and B. cereus TISTR687 (days 1-3 and days 10-14; Figure 3E).
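For readers who want to reproduce this kind of temperature-by-day comparison, the sketch below shows one possible way to run a two-way ANOVA with Bonferroni-corrected per-day comparisons in Python. It is only an illustration on simulated data; the study itself used GraphPad Prism 8, pandas/statsmodels/scipy are assumed to be installed, and the per-day t-tests with a Bonferroni correction are a simplification of Prism's multiple-comparison procedure.

```python
# Illustrative two-way ANOVA (temperature x day) with Bonferroni-corrected
# per-day comparisons, in the spirit of the analyses behind Figures 2 and 3.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multitest import multipletests
from scipy import stats

rng = np.random.default_rng(0)
rows = []
for temp in (30, 37):
    for day in range(1, 15):
        for _ in range(3):                        # n = 3 replicates per cell
            zone = 20 + 0.8 * day + (2.0 if temp == 37 else 0.0) + rng.normal(0, 1.5)
            rows.append({"temp": temp, "day": day, "zone": zone})
df = pd.DataFrame(rows)

# Two-way ANOVA with interaction (temperature x incubation day)
model = smf.ols("zone ~ C(temp) * C(day)", data=df).fit()
print(anova_lm(model, typ=2))

# Per-day 30 C vs 37 C comparisons, Bonferroni-corrected
pvals = []
for day, grp in df.groupby("day"):
    a = grp.loc[grp.temp == 30, "zone"]
    b = grp.loc[grp.temp == 37, "zone"]
    pvals.append(stats.ttest_ind(a, b).pvalue)
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
print(pd.DataFrame({"day": range(1, 15), "p_adj": p_adj, "significant": reject}))
```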
Figure 2 caption (in part): The statistical significance of the differences between the cell biomass of PJ85 cultured at 30 °C and 37 °C was estimated using a two-way ANOVA with a Bonferroni multiple comparison test; ** p < 0.0001, compared on the same incubation day.

Figure 3 caption (in part): The statistical significance of the differences between the antibacterial activity of the crude extract of PJ85 cultured at 30 °C and 37 °C was determined using a two-way ANOVA with a Bonferroni multiple comparison test; ** p < 0.0001, compared on the same incubation day.

According to Student's t-test (Table 2), the antibacterial activity of the PJ85 crude extract obtained from cells grown at 37 °C possessed a significantly larger zone of inhibition against B. subtilis TISTR008 and B. cereus TISTR687 than that grown at 30 °C (two-tailed t-test, p < 0.05). Although the antibacterial activity against MRSA DMST20651, S. aureus ATCC29213, and S. epidermidis TISTR518 was not found to be statistically different at the two cultural temperatures, the activities were somewhat higher when the cells were cultured at 37 °C than at 30 °C. Thus, a cultivation temperature of 37 °C and an incubation period of 5 days were applied for the preparation of crude extracts in order to obtain the maximal yield of antibacterial activity.

Crude Compound Preparation and MIC Values

The cultural conditions of PJ85 that yielded the highest antibacterial activity were used to prepare the crude compounds.
Therefore, PJ85 was inoculated in an ISP-2 medium (200 mL in a 1000 mL Erlenmeyer flask without baffles) and incubated at 37 °C and 200 rpm for 5 days. After incubation, the cell-free supernatant was collected to extract the crude compounds. In order to extract the crude compounds of PJ85, different solvents, such as n-hexane, n-butanol, chloroform, ethyl acetate, ethanol, and methanol, were tested. The ethyl acetate crude extract showed the highest antibacterial activity of the tested solvents (data not shown). Therefore, ethyl acetate was used for the preparation of the crude compounds. The crude compounds of PJ85 were yellowish-orange in color. The yield of the crude compounds was 246.54 ± 17.12 mg/g of dry cell weight.

The yellowish-orange crude ethyl acetate extract of PJ85 was used for an evaluation of MIC using the two-fold macro-dilution method. The MIC values of the crude ethyl acetate extract of PJ85 against MRSA DMST20651, S. aureus ATCC29213, S. epidermidis TISTR518, B. subtilis TISTR008, and B. cereus TISTR687 were 2, 2, 16, 2, and 1 µg/mL, respectively (Table 3). The assay was carried out in triplicate, in which the same MIC values were attained.

Purification of the Active Compounds of Streptomyces sp. PJ85 with Thin-Layer Chromatography, Column Chromatography, and Bioautography Analysis

The separation of the antibacterial metabolites present in the yellowish-orange crude compounds was performed with TLC. The mobile phase used to develop the plate was chloroform:n-hexane (9.5:0.5, v/v). After running, the TLC plates were dried and used to detect active bands on the chromatogram using contact bioautography. This assay has been successfully used to determine active spots with an inhibitory effect on microbial growth [27][28][29]. Bioautography revealed two active bands, compound 1 and compound 2, on the TLC plate. These compounds exhibited antibacterial activity against Gram-positive bacteria, including MRSA (Figure 4). Based on LC-MS analysis, compound 1 was identified as actinomycin D (Supplementary Figure S2). Actinomycin D is a secondary metabolite produced by many streptomycetes, including S. actinomycinicus RCU-197 T, the closest related strain of PJ85 (Figure 1). It has been reported that the sole antibacterial agent produced by S. actinomycinicus RCU-197 T is actinomycin D [30]. However, an unidentified bioactive agent, compound 2, was also detected in the PJ85 crude extract (Figure 4).
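The active band tracked below by its retention factor (Rf 0.68) is located on the plate using the usual Rf calculation: the distance migrated by the compound divided by the distance migrated by the solvent front. A minimal sketch, with hypothetical distances chosen only so that the example reproduces a value of 0.68, follows.

```python
# Quick sketch of the retention factor (Rf) used to track an active TLC band.
# The distances are hypothetical example values in cm, not measured data.

def retention_factor(compound_distance_cm: float, solvent_front_cm: float) -> float:
    return compound_distance_cm / solvent_front_cm

print(f"{retention_factor(6.8, 10.0):.2f}")   # -> 0.68 for these example distances
```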
In order to identify compound 2, the band corresponding to compound 2 was carefully scraped and used for purification with silica gel column chromatography. About 3.0 mg of compound 2 scraped from 25 TLC plates was chromatographed on a silica gel column and eluted with a stepwise solvent system consisting of chloroform:n-hexane. A total of 121 fractions of 5 mL each were collected and concentrated. All fractions were tested for antibacterial activity with the agar well diffusion method. Then, the fractions showing antibacterial activity against MRSA were analyzed with TLC and bioautography. The fractions exhibiting anti-MRSA activity (Rf 0.68) were pooled and designated as compound PJ85_F39.

Liquid Chromatography-Mass Spectrometry (LC-MS) Analysis

LC-MS was used to identify compound PJ85_F39 of Streptomyces sp. PJ85. Based on LC-MS analysis, the ESI-MS spectra showed one major peak at m/z 307.2172 [M + H]+, leading to a monoisotopic mass of 306.2172 g/mol (Figure 5). The peak was analyzed and identified by matching the mass spectra with the MassBank Europe Mass Spectral Data Base (https://massbank.eu/MassBank/Search, accessed on 26 November 2022). According to a MassBank library search, the peak at m/z 307.2172 [M + H]+ was matched to dihomo-γ-linolenic acid, epigallocatechin, 2,3-trans-3,4-cis-leucocyanidin, eremofortin A, fenazaquin, feruloyl agmatine, fluconazole, and koumine. A summary of the molecular weights and the nearest compound hits for the peak is shown in Table 4. However, none of these compounds have been documented as antibacterial agents except for dihomo-γ-linolenic acid [31][32][33]. The data on the molecular weight and antibacterial activity of compound PJ85_F39 provided evidence that this compound could be dihomo-γ-linolenic acid (DGLA).
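The MassBank comparison above boils down to matching the observed [M + H]+ m/z against library entries within a mass tolerance. The sketch below illustrates only that matching logic; the tolerance and the candidate m/z values are placeholders for illustration, not figures taken from MassBank or from the study.

```python
# Minimal sketch of the peak-matching logic behind a spectral library search:
# compare an observed [M + H]+ m/z against candidate m/z values within a ppm
# tolerance. The toy library entries are placeholders, not MassBank values.

def ppm_difference(observed_mz: float, candidate_mz: float) -> float:
    return abs(observed_mz - candidate_mz) / candidate_mz * 1e6

def search(observed_mz: float, library: dict, tol_ppm: float = 20.0) -> list:
    """Return candidate names within tol_ppm, closest match first."""
    return sorted(
        (name for name, mz in library.items()
         if ppm_difference(observed_mz, mz) <= tol_ppm),
        key=lambda name: ppm_difference(observed_mz, library[name]),
    )

toy_library = {"candidate_A": 307.2170, "candidate_B": 307.2301, "candidate_C": 305.9980}
print(search(307.2172, toy_library))
```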
Discussion

The emergence of antibiotic resistance has been recognized as a worldwide health-threatening problem. Therefore, there is a need for novel therapeutics to replace ineffective antimicrobial drugs. Natural products are the main source of antimicrobial agents, most of which are produced by Streptomyces [6,[41][42][43]. In this study, an antibacterial-producing Streptomyces strain, PJ85, was isolated from forest soil in Nakhon Ratchasima, Thailand [26]. Molecular analysis was used to identify the PJ85 strain. The identification of microorganisms with molecular techniques has many advantages over other approaches, such as being rapid, less laborious, sensitive, specific, and efficient [44][45][46][47][48]. Based on the study of the 16S rRNA gene sequence and the phylogenetic relationship, PJ85 was found to belong to the same clade as S. actinomycinicus RCU-197 T (JCM 30864 T). The full-length 16S rRNA gene of PJ85 shared a 99.86% sequence similarity with S. actinomycinicus RCU-197 T, indicating that they belong to the same species. Although 16S rRNA gene sequences are conventionally analyzed in bacterial systematics, their resolution may not be sufficient for species identification [49]. The multilocus sequence analysis (MLSA) of core housekeeping genes is recognized as a powerful tool for species identification in bacteria, including the Streptomyces genus [49]. Gene sequences of gyrB (DNA gyrase beta subunit), rpoA (RNA polymerase alpha subunit), atpD (ATP synthase subunit b), and rpoB (DNA-directed RNA polymerase subunit beta) are generally used in MLSA for Streptomyces species [49,50]. In this study, the sequences of gyrB, rpoA, atpD, and rpoB of PJ85 and S. actinomycinicus RCU-197 T were compared (Supplementary Table S1). The results showed a high degree of identity, ranging from 98.99% to 99.58% (gyrB: 98.99%; rpoA: 99.51%; atpD: 99.58%; and rpoB: 99.08%). The similarity of these genes between PJ85 and S. actinomycinicus exceeded the traditionally accepted threshold of 97% for species identification [51,52]. The high degree of 16S rDNA similarity and the conservation of four key housekeeping genes prompted us to classify PJ85 as S. actinomycinicus.

S. actinomycinicus RCU-197 T was first described in 2016 [30]. It was isolated from a forest soil sample in Rayong, Thailand. Based on the perpendicular-streak method, the RCU-197 T strain was found to be active against Micrococcus luteus, S. aureus, B. subtilis, E. coli, P. aeruginosa, and Candida albicans (unpublished results). To date, there have been no available reports regarding the isolation and characterization of the antibacterial agents of S. actinomycinicus. In this study, S. actinomycinicus PJ85 was isolated and tested for antibacterial activity against Gram-positive and Gram-negative bacteria. The PJ85 strain showed antibacterial activity against MRSA DMST20651, S. aureus ATCC29213, S. epidermidis TISTR518, B. subtilis TISTR008, and B. cereus TISTR687. Our results revealed that the antibacterial activity of PJ85 cultured at 37 °C was higher than that at 30 °C. The higher antibacterial activity of PJ85 at a slightly elevated temperature might be useful in industrial processes. The advantages of industrial fermentation at higher temperatures include faster reaction times and reduced cooling costs for large-scale fermentation [53]. Thus, PJ85 could be a potential candidate for the production of low-cost industrial antibiotics. The MIC values of the crude compounds of PJ85 against the tested pathogens were evaluated with the broth dilution method. The results indicated that the MICs of the crude compounds ranged from 1 to 16 µg/mL. The lowest MIC value of 1 µg/mL was observed against B. cereus TISTR687, while the highest MIC value of 16 µg/mL was observed against S. epidermidis TISTR518.
The crude compounds of PJ85 also inhibited the growth of MRSA DMST20651, S. aureus ATCC29213, and B. subtilis TISTR008, with an MIC value of 2 µg/mL. The genus Streptomyces has been shown to produce several secondary bioactive metabolites possessing antibacterial activity. Substantial reports are associated with the use of LC-MS for the chemical analysis of Streptomyces spp. [28,[54][55][56][57][58]. For example, Awla et al. (2016) identified several antifungal agents, such as ergotamine, amicoumacin, fungichromin, rapamycin, and N-acetyl-D,L-phenylalanine, produced by Streptomyces sp. UPMRS4 using LC-MS [59]. Bibi et al. (2017) reported the presence of different active compounds, including sulfamonomethoxine, sulfadiazine, ibuprofen, and metronidazole-OH, in culture extracts of Streptomyces sp. EA85 based on LC-MS analysis [60]. In this study, the active compounds present in the ethyl acetate extract of Streptomyces sp. PJ85 were also identified using LC-MS analysis. The results revealed that PJ85 produced two active compounds, actinomycin D and the initially unidentified compound PJ85_F39. It has been shown that S. actinomycinicus RCU-197 T, the closest related strain of PJ85, only produces actinomycin D [30]. Actinomycin D is one of the oldest chemotherapy drugs, and it has been used as an anti-tumor drug to treat childhood rhabdomyosarcoma and Wilms' tumor [61,62]. Similar to S. actinomycinicus RCU-197 T, PJ85 was found to produce actinomycin D as a major product. The antibacterial activity of actinomycin D against MRSA was previously reported by Khieu et al., with an MIC value of 0.04 µg/mL [63]. Moreover, PJ85 was able to generate a second active compound, compound PJ85_F39, that exhibited strong antibacterial activity against MRSA (Figure 4). We were able to identify the PJ85_F39 compound as dihomo-γ-linolenic acid (DGLA). DGLA is a polyunsaturated fatty acid that plays an essential role as a precursor for the biosynthesis of arachidonic acid (ARA) [31]. It has been reported to exhibit many activities, such as antimicrobial, anti-inflammatory, and antiallergic activities [31]. In 2013, Desbois and Lawlor reported the antibacterial activity of DGLA against Gram-positive bacteria such as Propionibacterium acnes and S. aureus, with MIC values of 128 mg/L and 1024 mg/L, respectively [32]. DGLA has thus been reported to have antibacterial action, although there has been no evidence of its anti-MRSA activity. Thus, this is the first study to reveal that DGLA is also effective against drug-resistant microorganisms. Previously, it has been recognized that soil fungi belonging to the Mortierella genus, such as M. alpina, M. clonocystis, M. elongata, M. gamsii, M. humilis, M. macrocystis, and M. globulifera, are effective producers of DGLA [33]. However, there have been no reports of DGLA produced by the Streptomyces genus. This study demonstrates the first isolation and identification of DGLA from S. actinomycinicus. Thus, S. actinomycinicus PJ85 may be a potent source of DGLA, since natural sources of DGLA are limited. Moreover, the production of DGLA by S. actinomycinicus PJ85 could play an important role in the pharmaceutical industry.

Materials and Methods

Cultural and Morphological Characteristics

The cultural morphology of PJ85 was determined on the ISP-2 medium. Morphological characteristics, such as the aerial-mass color, substrate mycelial pigmentation, and diffusible pigment production, were observed.
Amplification and Sequencing of the 16S rRNA Gene

Universal primers 27F (5′-AGAGTTTGATCCTGGCTCAG-3′) and 1525R (5′-AAAGGAGGTGATCCAGCC-3′) were used for the PCR amplification of the 16S rDNA of PJ85 [41]. Amplification was conducted in a thermal cycler (Thermo Scientific, Waltham, MA, USA). The PCR conditions were an initial denaturation at 95 °C for 5 min, followed by 30 cycles of denaturation at 95 °C for 60 s, annealing at 55 °C for 60 s, and extension at 72 °C for 60 s. A final extension was conducted at 72 °C for 7 min. The amplicons were purified from a 0.8% agarose gel using a FavorPrep™ GEL/PCR Purification Kit (FAVORGEN, Pingtung City, Taiwan). The purified PCR product was ligated into the terminal transferase activity (TA) cloning vector, and the recombinant plasmid was transformed into Escherichia coli JM109. The recombinant plasmid harboring the 16S rDNA was extracted and purified with the FavorPrep™ Plasmid DNA Extraction Kit (FAVORGEN, Taiwan). The purified product was submitted for Sanger sequencing at Macrogen, Korea.

Phylogenetic Tree Analysis

The 16S rDNA sequence of PJ85 was compared with the EzBioCloud database (https://www.ezbiocloud.net, accessed on 9 November 2022). CLUSTAL W was used to align the 16S rRNA gene of PJ85 with its closely related species. The neighbor-joining method was applied for phylogenetic analyses using Molecular Evolutionary Genetics Analysis (MEGA) version 10.0 software. The confidence level of each branch (1000 replications) was tested with bootstrap analysis [68]. The EzTaxon-e server (https://www.ezbiocloud.net, accessed on 9 November 2022) was used to determine sequence similarities.

Perpendicular Cross Streak Method

The perpendicular cross streak method was used to determine the antibacterial activity of PJ85 [48,69,70]. The PJ85 strain was inoculated as a straight line on one side of the MHA medium. The plates were then incubated at 37 °C for 5 days in order to allow the antibacterial agents produced by PJ85 to diffuse into the agar. After incubation, test pathogens were streaked perpendicularly (T-streak) to the line of the PJ85 colony. The plates were then incubated at 37 °C for 24 h. Antibacterial activity was measured based on the distance of inhibition between the colony margin of PJ85 and the test microorganisms.

Preparation of Crude Compounds of PJ85

In order to prepare the crude compounds, the PJ85 isolate was grown at 37 °C and 200 rpm for 5 days. After incubation, the culture was filtered through Whatman No. 1 filter paper (Whatman™, Maidstone, UK). The fermented broth containing the antibacterial compounds was mixed with a solvent such as ethyl acetate. The ethyl acetate layer was collected and concentrated using a rotary evaporator under reduced pressure at 45 °C. The crude compounds were obtained via freeze-drying and used for the evaluation of MIC and for purification.

Disc Diffusion Method

The antibacterial activity of the crude compounds was tested using the standard disc diffusion method [71,72]. Filter paper discs (6 mm in diameter) containing the crude compounds at 50 µg/disc were placed on an MHA lawn of the test microorganisms (0.5 McFarland standard). The antibacterial activity was determined by measuring the size of the inhibition zone in millimeters after incubation at 37 °C for 24 h.
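As a companion to the Phylogenetic Tree Analysis subsection above, the following sketch shows how a comparable neighbor-joining tree could be built from a CLUSTAL W-aligned 16S rRNA file using Biopython. It is an illustration under stated assumptions rather than the workflow actually used: the study built the tree in MEGA 10.0 with 1000 bootstrap replicates (omitted here), Biopython is assumed to be installed, and "aligned_16s.fasta" is a hypothetical input file.

```python
# Hedged sketch: neighbor-joining tree from an aligned 16S rRNA FASTA file.
# This stands in for, but does not reproduce, the MEGA 10.0 workflow above.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("aligned_16s.fasta", "fasta")   # pre-aligned sequences
calculator = DistanceCalculator("identity")              # simple identity distances
constructor = DistanceTreeConstructor(calculator, method="nj")
tree = constructor.build_tree(alignment)                 # neighbor-joining tree
Phylo.draw_ascii(tree)                                   # quick text rendering
```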
Determination of the Minimum Inhibitory Concentration (MIC) of Crude Compounds of PJ85

The determination of the MIC value of the crude compounds of PJ85 was performed using the dilution method [73]. An inoculum of test pathogens in the mid-log phase was transferred to a series of tubes containing serial two-fold dilutions of the crude compounds in a liquid medium (256, 128, 64, 32, 16, 8, 4, 2, 1, 0.5, 0.25, 0.125, 0.0625, and 0.03125 µg/mL). The bacterial suspensions were added to each tube to yield approximately 5.0 × 10^5 CFU/mL. The tubes were incubated at 37 °C for 16-18 h. A positive control and a solvent control (dimethyl sulfoxide, DMSO) were included. The MIC value was recorded as the lowest concentration of the crude compounds that inhibited the visible growth of the test organisms.

The Study of Incubation Temperature and Incubation Period on Growth and Activity of Antibacterial Agents

The PJ85 isolate was grown in ISP-2 broth and incubated at 30 °C and 37 °C, at 200 rpm, for 14 days. The biomass and cell-free supernatant of the culture were harvested every day during the 14 days of incubation by filtration through Whatman No. 1 filter paper (Whatman™, UK). The bacterial cells were dried in a hot air oven, and the dry cell weight was recorded. The cell-free supernatants containing antibacterial compounds were tested for antibacterial activity using the disc diffusion method.

Thin-Layer Chromatography (TLC)

Thin-layer chromatography (TLC) was applied to separate bioactive compounds from the crude ethyl acetate extract. The crude compounds were spotted on TLC silica gel 60 F254 aluminum sheets (Merck, Darmstadt, Germany) and left to dry. The TLC plate was placed vertically in a developing tank containing chloroform and hexane (9.5:0.5, v/v). The solvent was allowed to run until it moved up to 80% of the TLC plate. The chromatogram was left to dry and visualized under UV light at 254 nm. The antibacterial compounds on the chromatogram were identified with the contact bioautography method.

Purification of Antibacterial Compounds with Column Chromatography

The purification of the antibacterial compounds was performed using silica gel column chromatography. The 230-400 mesh silica gel (Merck, Germany) was suspended in n-hexane to pack the column. The column consisted of a 40-cm-long Corning glass tube with an internal diameter of 1.5 cm. The final size of the column was 30 cm. A sample not exceeding 5 mL was passed through the column while keeping the flow rate at 0.36 mL/min, with a stepwise chloroform:n-hexane gradient solvent system (0.00-100). Fractions of 5 mL of each solvent system were collected, and all the individual fractions were tested against MRSA with the agar well diffusion method and the contact bioautography method.

Contact Bioautography Analysis

Bioautography analysis was used to detect the antibacterial activity of the bioactive compounds separated on a chromatogram [56,57]. The chromatogram was placed over MHA seeded with test pathogens (0.5 McFarland standard) and left for 30 min to allow the compounds on the TLC sheet to diffuse into the agar medium. The MHA plate was then incubated at 37 °C for 24 h. After incubation, the bands of the antibacterial agents were indicated by zones of inhibition on the medium. The active band was scraped from the TLC plate and dissolved in methanol. The mixture was then centrifuged and filtered to remove the residual silica. The supernatant containing the antibacterial compound was used for characterization.
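Reading off an MIC from the two-fold dilution series described in the MIC section above amounts to finding the lowest concentration without visible growth. A minimal sketch of that readout follows; the growth flags are hypothetical, not observed data.

```python
# Sketch of deriving an MIC from a two-fold dilution series: the MIC is the
# lowest concentration showing no visible growth. Growth flags are hypothetical.
concentrations = [256, 128, 64, 32, 16, 8, 4, 2, 1, 0.5, 0.25, 0.125, 0.0625, 0.03125]

def mic(conc_series, growth_flags):
    """growth_flags[i] is True if visible growth occurred at conc_series[i]."""
    inhibited = [c for c, grew in zip(conc_series, growth_flags) if not grew]
    return min(inhibited) if inhibited else None   # None: no inhibition observed

# Example: growth only at concentrations below 2 ug/mL -> MIC = 2 ug/mL
growth = [c < 2 for c in concentrations]
print(mic(concentrations, growth))   # -> 2
```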
Identification of Active Compounds with Liquid Chromatography-Mass Spectrometry (LC-MS)

The mass spectra of the active compounds were assessed using LC-MS analysis. An active compound solution was subjected to LC-ESI-QTOF-MS spectroscopy. The chemical compound was infused (without a column) into a liquid chromatography (LC) system coupled with an electrospray ionization quadrupole time-of-flight mass spectrometer (LC-ESI-QTOF-MS). The sample was placed in an Agilent 1260 Infinity Series HPLC System (Agilent Technologies, Waldbronn, Germany). The mobile phase was 0.1% (v/v) formic acid in water (A) and 0.1% (v/v) formic acid in acetonitrile (B), using an isocratic method at 50% (A). The flow rate was 0.5 mL/min, the injection volume was 1 µL, the column compartment was set at 35 °C, and the run time was 1 min. Mass detection was carried out with a 6540 Ultra High Definition Accurate-Mass Q-TOF mass spectrometer (Agilent Technologies, Singapore). It was operated with electrospray ionization (ESI) in the positive ion mode in the m/z range of 100-1000 amu. The mass spectrometric conditions were set as follows: drying nitrogen gas (N2) flow rate of 10 L/min, gas temperature of 350 °C, nebulizer gas pressure of 30 psi, capillary voltage of 3.5 kV, fragmentor potential of 100 V, skimmer of 65 V, Vcap of 3500 V, and Octopole RFP of 750 V. All mass acquisition and analysis were performed using Agilent MassHunter Data Acquisition Software version B.05.01 and Agilent MassHunter Qualitative Analysis Software B.06.0, respectively (Agilent Technologies, Santa Clara, CA, USA). Agilent calibration standard references A and B for MS were used to calibrate and tune the system before use. The peak was analyzed and identified by matching the mass spectra with the MassBank database.

Statistical Analysis

Statistical analyses were performed using GraphPad Prism 8 software. Data are presented as the mean ± SD of three replicates. All data were checked for normal distribution. The statistical difference in the mean zone of inhibition of PJ85 for each individual test bacterium was assessed using a one-way analysis of variance (ANOVA) followed by Fisher's post hoc LSD test. Significant differences between incubation temperatures and periods were compared using a two-way ANOVA followed by Bonferroni correction. Significant differences in the maximum antibacterial activity between 30 °C and 37 °C were compared using Student's t-test. A p-value < 0.05 denoted the presence of a statistically significant difference.

Conclusions

In the present study, Streptomyces sp. PJ85 was isolated from forest soil in Thailand and identified as S. actinomycinicus. The extracellular metabolites produced by Streptomyces sp. PJ85 exhibited a narrow spectrum of antibacterial activity against Gram-positive bacteria, including MRSA. Two bioactive components of PJ85 were isolated and, based on LC-MS analysis, identified as actinomycin D, the primary compound, and DGLA, the minor compound. In this regard, it should be highlighted that this is the first report on the isolation and identification of bioactive substances from S. actinomycinicus. Evidence of the production of DGLA by S. actinomycinicus was also presented for the first time. However, future studies should include NMR characterization to validate the structure of PJ85_F39.
Review of the Impact of Biofuels on U.S. Retail Gasoline Prices

This study aims to provide a review of the state-of-the-art literature regarding the impacts and contributions of corn ethanol on retail gasoline prices in the US. For this, a systematic literature review following the PRISMA statement was carried out, seeking to answer four research questions: (1) What are the main characteristics of the literature regarding the impact and contributions of ethanol on US retail gasoline prices? (2) What are the main article clusters identified in the evaluated literature? (3) What was the numerical impact of the Volumetric Ethanol Excise Tax Credit/Renewable Fuel Standard (VEETC/RFS) mandate on the price of gasoline, and what are the main methods used for calculation in the literature? (4) What are the main trends and possibly new research directions for this literature? As a result of the characterization of the sample, driving themes such as energy policy, costs, price dynamics, trade, and the energy market were identified. Furthermore, three main clusters were identified in the sample: (i) impacts of biofuels on commodity prices and general price dynamics; (ii) impacts of public policies on the implementation of ethanol and flexibility in formulating fuel blends; and (iii) impact of biofuels on environmental aspects. As a practical implication, the prevailing result in the analyzed literature is that the addition of ethanol reduces the price of gasoline at the pump, and estimates range from no effect to nearly 10% off the price of gasoline. Finally, the topic of the impacts of biofuels on commodity prices and on the general dynamics of prices is the most relevant research line and the trend suggested by the proposed research agenda.

Introduction

The biofuel industry has been growing significantly in recent years around the world, most prominently in the USA, the EU, and Brazil. Originally, biofuels sparked the interest of agricultural economists and policymakers in the last century in the context of replacing fossil fuels and providing energy security, and later also as a way to address climate change, food security, and rural development [1]. Since the turn of the century, biofuels have become a controversial topic in the public domain and in agricultural and energy research, and the literature has evolved into two main trends. The first main body of literature concerns food security and crop prices [2,3], since the primary use of agricultural production has been food consumption. The second concerns ecology and environmental topics [4][5][6][7], such as greenhouse gas emissions (GHG), the use of land and water compared to just using conventional fossil fuels, and leaving land for food production or the provision of environmental services.
The literature on commodity food prices is mostly concerned with econometric analysis and investigates relationships and common dynamics between the prices of food and biofuels. The main concern is that using agricultural production as a feedstock for biofuels rather than for food consumption drives food prices up and causes nutrition crises, particularly in low-income countries. The food crisis between 2008 and 2010 motivated extensive research on this topic [8][9][10][11]. The literature generally finds that the relationship between food and ethanol prices is relatively weak, but ethanol prices are affected by both food and fuel prices. Reference [12] offers a comprehensive review of studies and critically compares their results. The authors of [12] argue that standard time-series analysis does not capture the effect of biofuels on food well and that the impact is, in fact, quite heterogeneous across crops and geographical locations. The presented review further argues that the impact of biofuels on food commodities is, in fact, lower than the impact of economic growth and can be well offset by using genetically modified crops.

Condon et al. [13] provides a meta-analysis of estimates of the effect of corn-ethanol production on corn prices and shows that increasing the production of corn-ethanol by one billion gallons increases corn prices by three to four percent. Persson [14] then presents a systematic review of the literature similar to ours but explores the effect of biofuels' energy demand on agricultural commodities, whereas we focus on the so far much-less-investigated effect of ethanol on gasoline prices.

Recently, Lark et al. [15] assessed the environmental effects of the Renewable Fuel Standard (RFS) program, which is the main policy driver behind the increased biofuel production since 2005, even more so after the expansion of the program in 2007. Lark et al. [15] calculated that the mandates motivated higher use of fertilizers and reduced the diversity of U.S. soil by reducing rotation in favor of producing corn. This, in turn, produced substantially greater GHG emissions. Additionally, Lark et al. [15] estimated that higher demand for corn caused inflation of soybean and wheat prices and disputed the potential of the current corn-ethanol production in mitigating climate change. This study, along with [16,17], forms strong criticism of the RFS program, which is well summarized in [18]. These studies argue that while corn-ethanol provides profits for corn farmers and ethanol producers, it comes at a much greater expense to the U.S. taxpayer in the form of financing the subsidies, higher gasoline and food prices, and the overall high costs of climate change and other environmental damage, such as that to water and air quality. Those recent studies presented conclusions contradictory to the meta-analysis presented by [19]. Consider also the GHG discussion in [20]. One of the substantial changes in time between the studies is the shift in the U.S. position from a net oil importer to an exporter in 2020, which, according to [18], reduces the necessity of the RFS program.

The biofuel policy debate is ongoing and evolving rapidly and substantially. We take the rich discussion presented above as evidence not only of the complexity of the biofuel topic but also of the evolution of results over time. In this article, we add to the discussion on price impacts; more specifically, we review the literature concerning the impact of blending ethanol into gasoline in the U.S.
Our systematic literature review identifies the methods used in the research and their contribution to modeling ethanol's effect. This study aims to provide a review of the state-of-the-art literature regarding the impact and contributions of corn ethanol on retail gasoline prices in the US. To assist in achieving this goal, we propose four research questions (RQ):

1. What are the main characteristics of the literature regarding the impact and contributions of ethanol on US retail gasoline prices?
2. What are the main article clusters identified in the evaluated literature?
3. What was the numerical impact of the VEETC/RFS mandate on the price of gasoline, and what are the main methods used for calculation in the literature?
4. What are the main trends and possibly new research directions for this literature?

This article is structured into four sections. In Section 2, we present the methodology used, along with the descriptors. Each step of the methodology and the descriptors are carefully explained. The results and a discussion are presented in Section 3, which is divided into four subsections. Finally, conclusions and corresponding recommendations are provided in Section 4.

Materials and Methods

The systematic literature review (SLR) can be defined as a structured review process that allows others to replicate and validate the research conducted and to exactly follow the path chosen for the research [21]. In this way, the SLR differs from a traditional exploratory review, reducing the researcher's subjectivity and resulting in a scientific, transparent, and replicable process [22]. In the SLR proposed in this study, we followed the instructions of the PRISMA statement, in addition to five steps recommended in the literature [23]. In simple terms, an SLR can be defined as a systematic process composed of three phases: input (i), processing (ii), and output (iii) [24,25], as shown in Figure 1. In the input phase, we define the research problem and objectives. During the processing phase, we search for studies in the databases, construct search strings, and define exclusion or inclusion criteria, using which we then apply filters to assist us in the analysis of results. We then proceed to document the results. In the output phase, we produce tables and figures which summarize the obtained results.

Figure 1. Model for conducting a systematic literature review. Adapted from [25,26].

This section is dedicated to providing a detailed description of the steps we followed in conducting the SLR used to answer the research questions (RQ) presented in the previous section. In the input phase, we define the research problem and its objectives along with studies relevant to the literature. We identify the main keywords of the publications that would contribute to the discussion about the appropriate search strings for performing the SLR. It is important to note that the proposed research questions serve to guide the development of the research and the presentation of results. For this, due to its sufficient acceptance and breadth, the Scopus database (from Elsevier) was selected.

After carrying out exploratory attempts, we adopted the search strings presented below, considering the Boolean logic "and" between levels (1.), (2.), and (3.). The use of quotation marks guarantees the exact sequence of words. Finally, some variations, such as plural and singular forms, were considered.
3. Paper title, keywords, or abstract ("gasoline price" or "fuel price" or "gas price" or "petrol price" or "petroleum price" or "retail price" or "gasoline market" or "fuel market" or "gas market" or "petrol market" or "petroleum market" or "petroleum product market" or "wholesale" or "price support")

It is pertinent to point out that we used the term "corn", since the research focuses on North American ethanol, along with the term "Midwest". In this way, we used the term "corn" in the geographic section of the filter to capture studies that deal with corn ethanol and that, for some reason, do not have the U.S. (or a similar term) as a descriptor in the title, abstract, or keywords. We used the bibliometric analysis software VOSviewer and the R package Bibliometrix [27]; for the evaluation and synthesis of results and information and for the graphical interpretation of the results, we used Microsoft Excel.

In the processing phase, we proceeded to define the eligibility criteria while ensuring that the sample responds adequately to the formulated RQs. The inclusion and exclusion filtering procedure was conducted by all co-authors of this study in sequence, thereby ensuring the quality of the final sample. Figure 2 illustrates the delimiting filters applied to the sample. In a search carried out in September 2022, the search strings resulted in 202 publications in the Scopus database. After reading the title, abstract, keywords, and search results, we reduced the list to 130 articles, since part of the initial sample was outside the scope of the research. After an initial read of the results and conclusions, we applied the second filter and obtained a sample of 112 articles. Finally, the articles were subjected to a complete reading, and we narrowed down the sample to 109 articles. We list the most important exclusion criteria used in the processing phase:

(a) Studies from foreign countries (such as Brazil, Argentina, Mexico, the EU, Thailand, etc.) whose ethanol comes primarily from sugar-related feedstocks;
(b) Evaluation of different biofuel feedstocks (cellulosic, lignocellulosic, agricultural biomass, oilseeds, etc.);
(c) Studies focused on other issues (food price impacts, greenhouse gas impacts, ethanol blending, government impact and opinions about subsidies, etc.);
(d) Studies from other fields (chemistry, production technology, etc.).

The output phase is dedicated to the analysis and synthesis of the results, which we interpret and discuss in detail in the following section.
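To make the screening step above concrete, the sketch below applies one of the title/abstract/keyword string groups to a small table of records and tallies how many survive, in the spirit of the PRISMA counts reported above. The column names, the toy records, and the shortened term list are assumptions for illustration only; the actual screening was performed on the Scopus export and completed by a full-text reading.

```python
# Illustrative screening sketch: match a shortened version of string group (3)
# against hypothetical Scopus-style records and count the survivors.
import pandas as pd

level_3 = ["gasoline price", "fuel price", "gas price", "petrol price",
           "retail price", "gasoline market", "fuel market", "wholesale"]

def matches_any(text: str, terms) -> bool:
    text = (text or "").lower()
    return any(term in text for term in terms)

records = pd.DataFrame({
    "title":    ["Ethanol blending and retail gasoline prices", "Corn yields in Iowa"],
    "abstract": ["We estimate pass-through of ethanol to the fuel market.", "Agronomy only."],
})
searchable = records["title"] + " " + records["abstract"]
stage_1 = records[searchable.apply(matches_any, terms=level_3)]
print(len(records), "retrieved ->", len(stage_1), "kept after string screening")
```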
Sample Characterization

To answer RQ1 (what are the main characteristics of the literature regarding the impact and contributions of ethanol on US retail gasoline prices?), we start with the temporal distribution of the articles. Figure 3 presents the annual distribution of articles in the sample. This figure also displays the percentage of the sample in the general literature on the topic, that is, when search string (ii) is removed, without any restriction by country or area (obtaining the ratio of the publications related to the U.S. to the world). It is important to highlight the interest in the subject in the U.S. in comparison to the worldwide literature. Even though we can observe a greater interest in the topic between 2009 and 2012, the following analysis will show that this topic is still very relevant and important to researchers.

Figure 4 presents the main scientific journals that have at least three articles present in our sample. The journals with the highest number of publications are Energy Policy, Energy Economics, and the American Journal of Agricultural Economics. There is an evident dominance of journals in the areas of energy, agriculture, and others more specific to ethanol and biofuels. Interestingly, the shortlist also includes the Journal of Environmental Economics and Management, which has a broader scope and is not exclusively focused on the above-mentioned areas.

Figure 5 represents the fourteen most cited articles in the sample. The average number of citations per year provides a view of citations over time and helps interpret the results in a way that also highlights the most recently published articles. The authors Hill [6] and Demirbas [7] dominate the figure, surpassing 2000 and 800 citations, respectively. Studies such as Zilberman [12] and de Gorter and Just [28] are also very relevant, with over 140 citations each.

In view of the extensive number of citations of the articles presented in Figure 5, we present below a brief summary of their contents. These include different scopes, such as existing relationships and the impact of biofuels on commodity food prices [12,[29][30][31], the environmental impacts of biofuels [6,32,33], and policy issues and their implications [13,34].

1. [6] The study carried out an environmental and economic assessment of the energy costs and benefits of biodiesel and ethanol biofuels. Through life cycle assessment, the study evaluated corn ethanol and soybean biodiesel. The main finding is that, compared to fossil fuels, biofuels have a lower environmental impact. However, no biofuel had the ability to replace oil without affecting food supplies, and subsidies are needed to make biofuels profitable.

2. [7] The manuscript presents definitions, details, compositions, production information, uses, and future perspectives that address biofuel sources, biofuel policy, biofuel economics, and global biofuel projections. The study considers scenarios of the impacts of biomass on the world economy.

3. [35] The authors argue, using a conceptual model with back-of-the-envelope estimates, that ethanol subsidies in the short run actually pay for themselves and that the impact of the production of biofuels from food feedstock will be bigger on food prices than on energy prices.

4. [12] The study used time series econometrics to assess the impact of biofuels on commodity food prices. The main finding is that the price of ethanol increases as the prices of corn and gasoline increase. The study also found that ethanol prices are positively related to sugar and oil prices in equilibrium.

5. [28] The study presents a conceptual framework that allows analyzing the economics of a mandate for biofuels and evaluates the economic implications of its combination with a tax credit. Results indicate that tax credits result in lower fuel prices than a mandate for the same level of biofuel production. If tax credits are implemented along with mandates, tax credits would subsidize fuel consumption instead of biofuels, thereby creating an effect contrary to the energy policy objectives.
6. [29] The study evaluated price relationships and transmission patterns in the US ethanol industry between 1990 and 2008. The research describes the relationships between corn, ethanol, gasoline, and oil prices. Overall, the results indicate a strong relationship between food and energy prices.

7. [36] In an extensive literature review, the article assesses the impacts of biofuel production and other supply and demand factors on rising food prices. The results indicate that the production of biofuels had a smaller contribution to the increase in the prices of food commodities until 2008.

8. [32] The study assessed the environmental impacts of biofuels. The results indicate that ethanol produced from biomass offers environmental and economic benefits and is considered a cleaner and safer alternative to fossil fuels.

9. [30] The study proposes a multivariate modeling framework to assess short- and long-term relationships among corn, soybean, ethanol, gasoline, and oil prices. The paper evaluates whether these relationships change over time. The results indicate that in recent years there have been no long-term relationships between agricultural commodity prices and fuel prices.

10. [34] This study proposes a framework to assess the effects of a tax exemption for the biofuel consumer and the interaction effects with a price-contingent agricultural subsidy. The authors found that the tax credit reduces the costs of the loan fee program, but this increased the costs of the tax credit.

11. [37] This study analyzed whether farmers prefer a direct subsidy for corn production or rather a subsidy for the ethanol produced from corn. The study used a vertical model of ethanol, byproducts, and corn and found that farmers are better off with direct corn subsidies.

12. [33] The authors propose the use of economic models, applied especially in the US, to assess the effects of biofuel policies on petroleum product markets and their consequences for greenhouse gas emissions.

13. [13] The study proposes a literature review and a meta-analysis model to assess the impacts of ethanol policy on corn prices between 2007 and 2014. The results indicate that an expansion of the corn ethanol mandate can lead to an increase of 3 to 4 percent in next year's corn prices.

14. [31] The study, through a literature review, evaluated the corn ethanol industry, its impacts on food prices, and the role of biotechnology in the U.S. Among their findings, the authors identified that biotechnology had little impact on the biofuel sector.

We consider the number of citations of each publication in Figure 6, where the citation treemap presents hierarchical data (a structured tree) as a set of nested rectangles. The area of each rectangle is proportional to the number of citations the manuscript has in the sample. This map aims to visually represent the disproportion between the number of citations of the two most cited articles in the sample and the other included studies. The discrepancy shown in Figure 6 justifies the removal of the studies proposed by [6,7] for the elaboration of Figure 7, whose objective is to present the distribution of citations over time of the most cited articles in the sample, complementing the information provided in the enumeration above. For example, authors such as Rajagopal et al. [35] and de Gorter and Just [28,34] have high numbers of absolute citations but have lost their influence in more recent publications, given the reduction in citations per year.
[35] de Gorter and Just [28,34] have high numbers of absolute citations but have lost their influence in more recent publications, given the reduction in citations per year.Another example is a study by [32], which received a large number of citations in 2011 and 2012, establishing itself among the most cited in the sample.However, in recent years, it has received a low number of citations.At the same time, other authors, such as [29,36], have maintained their influence in recent publications.Finally, ref. [12], and more recently ref. [13], has stood out in recent years.Differently from the previous graphs that were dedicated to publications, Figure 8 presents the authors or co-authors (individually) most representative in the sample with the largest number of publications.Among these, Zilberman D. and Thompson W. stand out, with ten and eight articles each, respectively.In the sequence, Hochman G., and Rajagopal D., present in seven publications each, are identified.Figure 9 shows the tree-field plot, establishing relationships between the most frequent journals in the sample, the main authors, and the keywords.Thompson, one of the most relevant authors in the sample, has had his studies published in journals such as Energy Policy, Eurochoices, and The Economics of Alternative Energy Sources and Globalization.This author has used terms such as "ethanol", "greenhouse gas emissions", "renewable fuel standard", "biofuel mandates", and "gasoline" as keywords in his studies.From the same perspective, Zilberman, another relevant author on the topic, has published in journals such as Agricultural Economics, the American Journal of Agricultural Economics, and Agbioforum.The main keywords included in his works are "biofuels", "greenhouse gas emissions", "energy prices", "energy policy", "climate change", and "corn ethanol".Figure 10 represents the thematic mapping, allowing the visualization of different types of themes [38].In the thematic map, we use keywords of the articles in the sample, where the keywords are defined by a semi-automated algorithm under the responsibility of Thomson Reuter's specialists, which is capable of capturing the content of an article with greater variety and depth [39].The upper right quadrant of Figure 10 represents themes with a higher degree of development (density) and relevance (centrality), seen as key themes in the literature, among which "Energy Policy" and "costs" stand out.As expected, another key theme found in this analysis was "United States", defined as one of the keywords in the search strings.Apart from those, other driving themes are "price dynamics", "commerce", and "energy market".Declining or emerging themes are located in the lower left quadrant.In this research, the results suggest that the topic "energy utilization" is an emerging topic.The lower-right quadrant shows sample basic themes.These themes refer to general themes in the different areas of investigation.They include "ethanol", "biofuel", "zea mays", "biomass", "carbon dioxide", and "biodiesel" from our sample.Finally, the upper-left quadrant shows themes of high density but of lesser importance to the sample or limited importance to the field (low centrality).Within these themes, "agriculture", "economic development", "energy independence", "energy security", "Environmental Protection Agency", and "fuel prices" are the ones that stand out. 
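To make the centrality/density construction behind the thematic map of Figure 10 concrete, the sketch below shows one way such a classification can be computed from a keyword co-occurrence matrix. The co-occurrence counts, cluster labels, and median-split rule are hypothetical illustrations, not the study's data; published maps of this kind are usually produced with dedicated bibliometric software, so this is only a minimal illustration of the underlying quantities (Callon centrality and density).

# Minimal sketch, assuming toy co-occurrence counts (not the article's data).
from itertools import combinations
from statistics import median

cooc = {  # how often two keywords appear together in the (hypothetical) sample
    ("energy policy", "costs"): 14, ("energy policy", "ethanol"): 12,
    ("costs", "ethanol"): 9, ("ethanol", "biofuel"): 5,
    ("biofuel", "biomass"): 3, ("biofuel", "energy policy"): 6,
    ("agriculture", "energy security"): 6, ("agriculture", "fuel prices"): 4,
    ("energy utilization", "ethanol"): 2,
}
clusters = {
    "policy/costs": {"energy policy", "costs"},
    "ethanol/biomass": {"ethanol", "biofuel", "biomass"},
    "agriculture/security": {"agriculture", "energy security", "fuel prices"},
    "energy utilization": {"energy utilization"},
}

def link(a, b):
    return cooc.get((a, b), 0) + cooc.get((b, a), 0)

def centrality(members, all_keywords):
    # Callon centrality: strength of links to keywords outside the cluster.
    outside = all_keywords - members
    return sum(link(a, b) for a in members for b in outside)

def density(members):
    # Callon density: strength of internal links, normalized by cluster size.
    internal = sum(link(a, b) for a, b in combinations(members, 2))
    return 100.0 * internal / max(len(members), 1)

all_kw = set().union(*clusters.values())
scores = {name: (centrality(m, all_kw), density(m)) for name, m in clusters.items()}
c_med = median(c for c, _ in scores.values())
d_med = median(d for _, d in scores.values())
for name, (c, d) in scores.items():
    if c >= c_med and d >= d_med:
        quadrant = "motor theme"
    elif c >= c_med:
        quadrant = "basic theme"
    elif d >= d_med:
        quadrant = "niche theme"
    else:
        quadrant = "emerging or declining theme"
    print(f"{name}: centrality={c}, density={d:.1f} -> {quadrant}")

With these invented counts the four toy clusters fall into the four quadrants in the same way the text describes (energy policy/costs as a motor theme, ethanol/biomass as basic themes, and so on), which is the only point of the illustration.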
In sequence, we created Figure 11 using the VOSviewer software, and it is based on the co-occurrence information of the authors' keywords [40].In this figure, the node sizes represent the number of times these keywords were used by the articles in the sample; the connecting lines indicate that these keywords were used in the same publication, and the colors are related to the year of publication.The relevance of the topics "Renewable Fuel Standard" and "policy" protrudes, even though they were not included in the search strings.This network also allows the identification of trending topics for the area, as they represent interests in recent research, such as "retail fuel spreads", "pass-through", "fuel markets", "E85", or even "energy prices" and "meta-analysis".Finally, Figure 12 was elaborated from a multiple correspondence analysis, an exploratory multivariate technique of the keywords and the articles that make up the sample.The conceptual structure map identifies clusters from articles that express interrelated concepts [27].The results of this figure are to be interpreted based on the distribution of points and their positions along the dimensions.The closer the keywords are in the figure, the greater their similarities in distribution.The figure allows the identification of new latent variables from the formation of clusters in a set of categorical variables.In this way, we identify two distinct clusters.The first cluster (in red), seems to be more relevant due to its size and centrality in relation to dimensions.The red cluster contains important keywords, such as "price dynamics", "commodity price", "gasoline prices", "blending", "taxation", and "subsidy system", which are terms associated with the price and market dynamics of biofuels in the U.S. In the second cluster (in blue), keywords such as "economics", "energy security", "public policy", and "gas emissions" are highlighted as terms associated with the development of public policies for the implementation of biofuels and their environmental impact.This split corresponds to the exploratory and introductory review we provide in the Introduction. Predominant Cluster Structure In order to answer RQ2 (what are the main article clusters identified in the evaluated literature?), content analysis and mapping and clustering techniques were used, as they are frequently used in SLR studies [41,42]. Through the use of clustering techniques, it is possible to present a map that highlights areas corresponding to the clusters of nodes identified.Using VOSviewer software, we calculated a bibliographic coupling network (for more, see [41]), whose graphical results are shown in Figure 13.In this analysis, the relationship between studies was determined based on the degree to which these articles are cited in the same publication.Upon establishing the clusters, we analyzed the content of the articles and focused on the title, abstract, introduction, and conclusion.This analysis aims to identify common interests and themes, from which the following predominant clusters were identified: 1. Impacts of biofuels on commodity prices and overall price dynamics; 2. Impacts of public policies on the implementation of ethanol and flexibility in the formulation of fuel blending; 3. Impact of biofuels on environmental aspects. 
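To illustrate the mechanics behind this coupling-based clustering, the sketch below builds a small bibliographic coupling network, in which two articles are linked by the number of references they share, and then extracts communities with a standard modularity heuristic. The article identifiers and reference lists are invented, and the greedy modularity algorithm is only a stand-in for the procedure implemented in VOSviewer; it is a sketch of the idea, not the study's actual computation.

# Minimal sketch, assuming hypothetical reference lists keyed by article ID.
from itertools import combinations
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

references = {
    "A1": {"r1", "r2", "r3"}, "A2": {"r2", "r3", "r4"},
    "A3": {"r5", "r6"},       "A4": {"r5", "r6", "r7"},
    "A5": {"r1", "r4"},
}

G = nx.Graph()
G.add_nodes_from(references)
for a, b in combinations(references, 2):
    shared = len(references[a] & references[b])  # bibliographic coupling strength
    if shared > 0:
        G.add_edge(a, b, weight=shared)

communities = greedy_modularity_communities(G, weight="weight")
for i, members in enumerate(communities, 1):
    print(f"cluster {i}: {sorted(members)}")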
It is important to note that, as the clustering technique was elaborated from the use of coincidental references, articles located in the transition region between the main clusters can be dedicated to evaluating themes inherent to more than one cluster.

Impacts of Biofuels on Commodity Prices and Overall Price Dynamics
Among the authors of the first cluster, those of [43] considered the North American scenario and evaluated how the increase in corn-ethanol production impacts natural gas prices. The authors presented a two-stage least squares structural model to project two scenarios: (i) current policies, including tariffs, tax credits, and mandates, were disregarded; and (ii) ethanol was produced only for use as a mandatory additive. The results indicate that the price of natural gas can increase by up to 0.25% and 0.5% in the first and second scenarios, respectively.
In another study, Whistance et al. [44] analyzed the effects of ethanol policy on natural gas prices and quantities, focusing especially on the impacts of the ethanol tariff, mandates, and tax credits. The results indicated an increase in corn production, which consequently tends to raise natural gas prices.
Zilberman et al. [12] investigated the relationship between food and fuel markets. According to the authors, the ethanol market provides a strong link between the corn and energy markets, and the price of ethanol increases as corn and gasoline prices increase. Finally, the study concludes that ethanol prices are positively related to sugar and oil prices.
Whistance and Thompson [45] also analyzed the price relationships between ethanol and gasoline and between corn and gasoline under mandatory and non-mandatory RFS scenarios. The authors found evidence that these price relationships are weaker when the RFS is mandatory.
Another example of a study in this cluster is that of [46], which assesses the impacts on fuel prices and compliance costs associated with the RFS. In this article, a regional market model is proposed to quantify the price impacts for several market variables. Among the results, Christensen and Siddiqui [46] identified that the RFS does not have a substantial impact on the retail prices of gasoline and diesel.

Impact of Public Policies for the Implementation of Ethanol and Flexibility in the Formulation of Fuel Blending
Based on the second cluster identified, Liu and Greene [47] argue that a good understanding of the factors that affect demand for E85 is needed in order to develop effective policies for promoting E85 and to develop models that predict sales of this product in the U.S. To this end, the authors estimated the sensitivity of aggregate demand for E85 to the prices of E85 and gasoline and to the relative availability of E85 versus gasoline, and concluded that the latest data allow for a better estimation of demand and indicate that the price elasticity of E85 is substantially higher than previously estimated.
Lade and Bushnell [48] studied the pass-through of the E85 subsidy to U.S. retail fuel prices. The authors argued that the RFS relies on taxes and subsidies being passed on to consumers to stimulate demand for biofuels and decrease demand for gasoline and diesel. They concluded that between 50% and 75% of the E85 subsidy was passed on to consumers and that the pass-through takes approximately 6 to 8 weeks, with retailers' market structure influencing both the speed and level of pass-through.
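A stylized sketch of how a cumulative pass-through rate of this kind can be estimated is shown below. It regresses weekly retail price changes on current and lagged subsidy changes and reads the pass-through off the summed coefficients. The data are synthetic and the specification is deliberately simplified; it is not the authors' actual model, only an illustration of the estimation logic.

# Minimal sketch with synthetic weekly data; the true pass-through is set to 0.6.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, true_passthrough, lags = 200, 0.6, 8
d_subsidy = rng.normal(0, 0.05, n)                    # weekly subsidy changes ($/gal)
weights = np.linspace(0.3, 0.0, lags + 1)
weights = true_passthrough * weights / weights.sum()  # pass-through spread over ~8 weeks
d_price = np.convolve(d_subsidy, weights)[:n] + rng.normal(0, 0.01, n)

df = pd.DataFrame({"d_price": d_price, "d_subsidy": d_subsidy})
for k in range(1, lags + 1):
    df[f"d_subsidy_lag{k}"] = df["d_subsidy"].shift(k)
df = df.dropna()

X = sm.add_constant(df.drop(columns="d_price"))
fit = sm.OLS(df["d_price"], X).fit()
cumulative = fit.params.filter(like="d_subsidy").sum()
print(f"estimated cumulative pass-through: {cumulative:.2f}")  # close to 0.6 by construction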
Ghoddusi [49], through a quantitative assessment, measured the risks of price changes for biofuel producers in a deregulated market.The authors presented a set of risk management strategies that are fully applicable to the protection of the biofuels sector. From a different perspective, Westbrook et al. [50] assessed whether the U.S. is able to meet the RFS targets without an enforcement mechanism.The authors proposed a parametric analysis of ethanol use for the domestic vehicle sector.The results indicate that the RFS program's goals to reduce fossil-fuel consumption, and consequently, GHG emissions, can be achieved by improving vehicle efficiency. Impact of Biofuels on Environmental Aspects Allocated to the third cluster, Sexton et al. [51] analyzed the impacts of increased production of biofuels on food and fuel markets.They argue that the current production of biofuels generates a conflicting relationship between food and fuel, as it generates an increase in the cost of food and a reduction in the cost of gasoline.In this way, the study concludes that agriculture has to provide food and fuel, generating a need for constant improvement in its productivity.They argue that biotechnology has a fundamental role in allowing the achievement of this improvement. Acquaye et al. [52] used four scenarios to analyze the potential of biofuels to reduce UK emissions.The authors used a hybrid lifecycle assessment developed in a multi-regional input-output (MRIO) framework and concluded that in order to achieve the emission reduction determined by the Low Carbon Transition Plan (LCTP), it would be necessary that 23.8% of the transport fuel market would be served by biofuels by the year 2020. Piroli et al. [53] applied a time-series analysis for the five main agricultural commodities, the cultivated area, and the price of crude oil in order to study the impacts of changes in land use caused by the production of biofuels in the US.The authors conclude that the markets for crude oil and cultivated agricultural land are interdependent.Apart from that, the authors claim that the increase in biofuel production causes changes in land use, which subsequently causes food commodities to be replaced by crops intended for biofuel production. More recently, Suh [54] examined the effects of replacing fossil fuels with biofuels on carbon dioxide emissions in the U.S. transportation sector.The author proposes that ethanol is a substitute for oil and a complement to natural gas, while natural gas is a substitute for oil.Furthermore, the author concludes that the price-induced substitution of fossil fuels for biofuels is a critical factor in predicting biofuel-related carbon-dioxide emissions. Numerical Estimates We now turn to our sample to analyze numerical estimates of changes in gasoline prices caused by changes, or rather a lack of changes, to ethanol mandates.We extracted 20 articles that provide numerical results that relate to our research question.After the initial inspection, we noticed many of the articles included in our sample are also included in the meta-analysis article by [19].Consequently, we have decided to include four missing articles that were not a part of our sample but were included in [19] to further our understanding of the numerical interpretation of the results.It is important to highlight that these four studies are relevant and recognized for the field of research, but they were not identified in the search due to the fact that they were not present in the Scopus database. 
First, we briefly discuss the approaches, methodologies, and models that were used in the aforementioned articles. Figure 14 shows the most frequent models used. The most popular are general and partial equilibrium models, biofuel and environmental policy analysis models (BEPAM), and supply-demand models. When it comes to the policies that affect the price of gasoline, the articles mostly use the Volumetric Ethanol Excise Tax Credit (VEETC), created by the American Jobs Creation Act of 2004, and the Renewable Fuel Standard for corn ethanol, established in 2007, as the drivers of the change in the price of gasoline. Some studies, such as [55], inspected many possible outcomes based on different scenarios in which either there are no mandates in place for the baseline price or the VEETC, the RFS, or their combination is introduced, changing the outcome by 1-2 percentage points. Other articles, such as [56], took into account only the RFS ethanol mandate and its impact on gasoline prices. Overall, we identified 13 papers that provide exact numerical results for our research question RQ3 (what was the numerical impact of the VEETC/RFS mandate on the price of gasoline, and what are the main methodologies used for its calculation in the literature?). Detailed information about the papers in our sample coming from the Scopus database is summarized in Table 1, and Table 2 presents the four papers not included in the Scopus database.
Table 1. This table summarizes publications providing numerical estimates of the impact of ethanol on fuel price. The first column references the publication and the second column the inspected time period. The third column reports the model used, and the Relation column indicates whether ethanol and gasoline are considered to be substitutes (Sub), complements (Comp), or perfect substitutes (pSub). The prevailing result is that the addition of ethanol cuts down the price of gasoline at the pump. However, there is no direct consensus on the size of the discount, not even in proportional terms. The estimates vary from no effect up to an almost 10% discount in the gasoline price, as shown in Figure 15.
Table 2. This table summarizes the publications from the meta-analysis of [19] concerned with the impact of ethanol on fuel price or welfare. The first column references the publication, and the second column the inspected time period. The third column reports the model used. The Relation column indicates whether ethanol and gasoline are considered to be perfect or imperfect substitutes, and the Results column summarizes the respective study.
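Given the spread of reported estimates (from no effect up to an almost 10% discount), a simple way to summarize them is an inverse-variance weighted average, as sketched below. The estimates and standard errors in the snippet are placeholders rather than values extracted from Tables 1 and 2, and a full meta-analysis would also need to address between-study heterogeneity, for example with a random-effects model.

# Minimal fixed-effect pooling sketch with placeholder numbers.
import numpy as np

estimates = np.array([0.0, 1.2, 2.5, 3.0, 4.8, 9.6])   # reported gasoline discounts (%)
std_errors = np.array([0.5, 0.6, 1.0, 0.8, 1.5, 2.0])  # their (hypothetical) standard errors

weights = 1.0 / std_errors**2                  # inverse-variance weights
pooled = np.sum(weights * estimates) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
print(f"pooled discount: {pooled:.2f}% "
      f"(95% CI: {pooled - 1.96 * pooled_se:.2f}% to {pooled + 1.96 * pooled_se:.2f}%)")
print(f"range across studies: {estimates.min():.1f}% to {estimates.max():.1f}%")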
Research Agenda To answer RQ4 (what are the main trends and research opportunities for this literature?), we propose a possible open research agenda based on the results of our SLR.We notice that the term bioethanol has been present in the analyzed sample since 2012, remaining until now, especially when associated with the use of the terms "commerce" and "energy market", which shows that this type of study is still interesting to the current research.Corroborating this statement, Figure 10 (thematic map) presented the driving themes of the studied area, which include, in addition to the terms "commerce" and "energy market" already mentioned, "costs", "energy policy", "price dynamics", and "renewable resource".In this way, it is possible to mention some research topics that have been little explored and that have started to draw attention more recently, standing out as hot topics for future research.It is possible to propose the development of research focused on advanced biofuels, biofuels supply chains, transportation biofuels, and issues of budget control and cost management, both in production and in the management of the biofuels supply chain.Additionally, an analysis of the thematic evolution allows the identification of research opportunities that involve the control of greenhouse gas emissions, and other environmental and climatic aspects. Still discussing research trends, Figure 11 (keyword co-occurrence map) corroborates previous discussions and opens horizons for new research opportunities on retail fuel spreads and on the e85 composition. Moreover, Figure 12 (conceptual structure map) points out opportunities for research in public policies related to climatic and environmental issues, and energy security.Topics such as sustainable development, price dynamics, blending, demand analysis and biofuel production have greater centrality-that is, they tend to continue to be study opportunities. A clear possible research opportunity of filling a noticeable and perspective gap in the literature is indicated by what is rather missing in the keywords discovered by our search.It is an issue of electro-mobility.The analysis of interplay between biofuels and electrical vehicles should belong to the "environmental", cluster 3 in Figure 13.As we already noted, this "environmental" cluster temporarily precedes the other two clusters.This expresses the shifting emphasis from the beliefs on the strong positive environmental impact of biofuels to a rather skeptical evaluation of this impact of biofuels.Additionally, the missing connection between biofuels and electric cars is caused by a fact that the focus on electric cars is rather a recent phenomenon, not overlapping in time with the early biofuel literature assembled in cluster 3 in Figure 13.However, the research questions of possible synergies in combination of advantages of renewable biofuels provided by agriculture and advantages of electric cars definitely deserve research attention. Another interesting research opportunity indicated by missing connections in our bibliometric figures is an issue of bioethanol as a dominant technological fuel additive.While technologically oriented literature clearly shows that ethanol is a dominant gasoline oxygenate, there is still missing (not written so far) a potentially sizeable body of literature dealing with the question of what is the technological and economical lower bound on the share of ethanol in the U.S. car fuels if the ethanol would be used mainly as an oxygenate. 
Finally, Figure 16 shows the evolution of the representativeness of each cluster over time.We note that at the beginning of the research on the subject, the most influential cluster was the one that addressed the impact of biofuels on environmental aspects (cluster (iii)).However, this scenario has changed, and the figure makes it possible to identify that studies that assess the impacts of biofuels on commodity prices and overall price dynamics (cluster (i)) have been of greatest recent interest, followed by the assessment of the impacts of public policies on the implementation of ethanol and flexibility in the formulation of fuel blending (cluster (ii)).In this way, the topics associated with clusters (i) and (ii) will represent the greatest opportunities for future research. Conclusions This article proposes a review of the state-of-the-art of the literature regarding the contributions of ethanol to retail gasoline price changes in the US.For this, we conducted a systematic literature review which follows guidelines from the literature.We extracted a sample of 109 articles and analyzed it using bibliometric quantitative techniques associated with qualitative content analysis.The novelty of this article is evident, since no systematic literature review with the objective of evaluating the impact of ethanol on the retail price of gasoline was identified. At first, a characterization of the sample was presented through bibliometric techniques, allowing the identification of trends in the explored topic.Furthermore, thematic, conceptual, and co-occurrence maps were constructed and analyzed, in which topics such as energy policy, costs, price dynamics, commerce, and energy market stand out.Additionally, the most significant terms recently have been "retail fuel spreads", "fuel markets", "E85", and even "energy prices" and "meta-analysis". Second, considering the selected sample and based on grouping techniques, the predominant cluster structures were identified and briefly analyzed, which led to three lines of research: (i) impacts of biofuels on commodity prices and overall price dynamics; (ii) impacts of public policies on the implementation of ethanol and flexibility in the formulation of fuel blending; and (iii) impact of biofuels on environmental aspects.The definitions of these clusters are not given a priori, neither in the specific literature, nor even through the use of software, demanding an in-depth analysis of the articles present in the sample. Third, the general and partial equilibrium model stood out in the sample as the most used to capture changes in gasoline prices caused by changes in ethanol mandates.There is no consensus on the impact of ethanol on the price of gasoline in the US retail market; however, the most frequent results show that the addition of alcohol reduces the price of gasoline at the pump. In a fourth moment, we show that currently, the topic concerning the impacts of biofuels on commodity prices and overall price dynamics is the most relevant and trending avenue of research suggested by the analysis of our sample of publications. 
Finally, the limitations of the present study involve methodological choices, namely: (1) the use of only one database for extracting articles and (2) the definition of search strings, which could exclude works relevant to the study. These limitations were minimized by the following strategies: (1) choosing one of the largest databases of academic works in the world (Scopus), and (2) making several attempts to adapt the search strings to the most relevant works on the studied topic. Another limitation relates to the inclusion and exclusion criteria applied to each article to form the final sample, which we sought to mitigate through the participation of four different researchers.
The review followed these steps: (a) formulate research questions that can guide the study; (b) identify the most relevant studies from the literature of interest; (c) evaluate the quality and relevance of the articles; (d) identify and summarize the scientific evidence; and (e) interpret the results found.
Figure 2. Summary of article filtering after reading.
Figure 3. Annual distribution of publications from 1988 to September 2022.
Figure 4. Most frequent journals in the sample.
Figure 5. Main and most cited publications in the sample.
Figure 7. Distribution of citations over time for ten of the most cited articles in the sample.
Figure 8. Authors with the largest number of publications in the sample.
Figure 14. Count of models used in the literature.
Figure 16. Evolution of the number of publications by clusters.
ANALYSIS OF THE ASSORTMENT OF ANTIDIABETIC DRUGS AT THE PHARMACEUTICAL MARKET OF UKRAINE The marketing research of antidiabetic drugs presented at the pharmaceutical market of Ukraine has been conducted. The product range of the group of antidiabetic drugs has been studied according to the ATC classification, manufacturing countries and dosage forms. According to the results of the analysis 49 drugs based on insulin and its analogues for injection, and 157 oral hypoglycemic drugs have been characterized. It is noted that insulins are unevenly divided by the duration of action. Most insulin drugs are presented by human insulin. Oral hypoglycemic drugs are characterized by a high saturation within subgroups. Drugs of aldose reductase inhibitors (Isodibut under the trade name Isodibut®) and other medicines, including 8 herbal medicines, are also used for treatment of diabetes. It has been found that 68% of antidiabetic drugs are imported from 21 countries of the Eurasian and South American continents. India occupies the leading positions. A wide range of antidiabetic drugs is presented by Germany, Poland, France, Denmark and Italy. The range of domestic drugs for treating diabetes is formed by 12 manufacturers. Domestic manufacturers offer only generic replacement of some active substances. At the pharmaceutical market of Ukraine antidiabetic drugs are available in the form of tablets, granules, powders, solutions and suspensions for injection. The research performed has shown that the market of antidiabetic drugs is characterized by heterogeneity of product groups, high concentration and monopolization of production, low competition and a small share of production of drugs attributable to domestic producers. Diabetes is recognized as infectious epidemic of the XXI-th century, it takes the third place of the world's prevalence after cancer and cardiovascular diseases. The number and prevalence of people with diabetes is increasing rapidly. According to the International Diabetes Federation (IDF) in 2013 about 381.8 million people in the world had diabetes, and till 2035 this index will increase by 55% up to 591.9 million [6]. Recent data indicate that people in the countries with low and middle income represent the largest share of the epidemic (80%), and this disease affects much more people of the working age than previously thought. The largest number of people with diabetes is of the age from 40 to 59 years. According to the WHO data, two new cases of diabetes are diagnosed every six seconds, and one person dies because of its complications [8,11]. In 2013, diabetes caused about 5.1 million of deaths [12]. According to the WHO in 2030 diabetes will be the seventh leading cause of death [9,13]. According to the Ministry of Public Health of Ukraine in 2013 the number of patients with diabetes was over 1.3 million people, 212 134 of them require daily injections of insulin [2]. Increase in prevalence of diabetes in Ukraine reached 26% in the last 5 years [5]. A significant growth in the number of new registered cases of diabetes (primary disease) of the Ukrainian population is also observed: from 194.8 per 100 thousand of the population in 2005 to 249.8 in 2010, i.e. 23.7% within last 5 years. However, the number of patients is increasing mainly due to diabetes mellitus type 2 [4]. In connection with the abovementioned our country intensively searches not only more effective methods of diagnosis and treatment, but also more perfect organizational methods of treatment. 
It creates the opportunity to reduce the incidence of adverse long-term complications on the basis of improving detection of diabetic patients. Nowadays, one of these activities being of a special attention is the rational use of modern methods of treatment and extension of the range of antidiabetic drugs [15]. Materials and Methods The aim of this study was to conduct the marketing analysis of the range of antidiabetic drugs presented at the pharmaceutical market of Ukraine. The object of the study was the information concerning the market structure of antidiabetic drugs registered in Ukraine. The graphical methods and analysis are used in the article. To solve this goal the study of a conjuncture of domestic market of antidiabetic drugs was conducted. According to the results of the analysis the assortment of medicines of the Ukrainian pharmaceutical market was described. Results and Discussion According to the international ATC classification antidiabetic drugs are referred to group A "Medicines affecting the digestive system and metabolism" and constitute a subgroup A10 "Аntidiabetic drugs" [3]. According to the State Register of Medicines of Ukraine [1] it has been found that as of 01.10.2014 antidiabetic drugs comprise 206 trade names (without regard to the amount in the pack) among 12 745 medicines registered in Ukraine. This group includes 49 drugs based on insulin and its analogues for injection and 157 oral hypoglycemic drugs. Only some representatives of each ATC-group for treating diabetes are registered at the pharmaceutical market of Ukraine. It should be noted that insulins are unevenly divided by the duration of action: 16 shortacting drugs, 14 drugs of the medium duration, and only 4 long-acting insulin medicines [14]. There are also 15 combinations of insulins with a short and medium duration of action for injection at the market. The overwhelming majority of all groups of insulins are presented by human insulin. Groups of insulin with a short and medium duration of action are characterized by individual representatives of porcine insulin (Monodar ® ), insulin lispro (Humalog ® ) and insulin asparagine (Novorapid ® FlexPen ® ). Long-acting insulins and their analogues for injection are presented by 3 drugs based on insulin glargine and one on insulin detemir (Levemir ® FlexPen ® ). Oral hypoglycemic drugs are characterized by more saturation within subgroups. There are 38 drugs based on metformin at the Ukrainian pharmaceutical market of biguanides. The drugs of glimepiride (about 50 names) are the widest presented as derivatives of carbamide. There is a sufficiently wide assortment of drugs based on gliclazide (16 positions). Sulfonamides are presented by 5 drugs of glibenclamide and 1 of gliquidone (Glyurenorm ® ). There are often combinations of oral hypoglycemic drugs [7] such as metformin and sulfonamides (12 medicines), metformin with sitagliptin (Yanumet™) and metformin with vildagliptin (GalvusMet ® ). Thiazolidinediones are presented by 10 drugs on the basis of pioglitazone. Among the inhibitors of dipeptidyl peptidase-4 (dpp-4) [10] sitagliptin (Januvia™) appears three times, there is one vildagliptin (Galvus ® ), and saxagliptin (Ongliza) is presented twice. Other hypoglycemic drugs, excluding insulins, also include drugs of guar gum (Guarem), repaglinide (NovoNorm ® ) and liraglutide (Victoza ® ). 
Drugs of aldose reductase inhibitors (Isodibut under the trade name Isodibut ® ) and other medicines, including 8 herbal medicines (based on fruits of Saint-Mary-thistle, the valves of the bean fruit, blueberry shoots or the mixture of crushed medicinal plant raw material), are also used to treat diabetes. The results of the study of the assortment structure of antidiabetic drugs indicate that 68% of drugs are imported from 21 countries. Geography of manufacturing countries is quite extensive and includes the countries of the Eurasian and South American continents (Fig. 1). India occupies the leading positions and supplies 35 drugs of the group studied to Ukraine. A wide assortment of antidiabetic drugs is presented by Germany (16 trade names), Poland (16 names), France (14), Denmark (10) and Italy (8). Data of the analysis in Fig. 1 indicate that the share of the Ukrainian producers of antidiabetic drugs is 32%. However, most subgroups of antidiabetic drugs have no domestic analogues. The range of domestic drugs for treating diabetes is formed by 12 manufacturers. Oral hypoglycemic drugs are mainly supplied at the Ukrainian market from abroad. The largest part of imported drugs for treatment of type 2 diabetes is offered by India, these are 32 drugs. A large number of drugs are offered by several companies from around the world, mainly from the European region, for which a high level of development of the pharmaceutical industry is typical. Poland takes 8.28% of the market segment of oral hypoglycemic drugs, Germany -6.37%, Italy and France -5.10% each, Switzerland and Hungary -2.55% each. The Jordanian firm "Al-Hikma Pharmaceuticals" PLC manufactures drugs based on glimepiri-de (Glianov ® ). Drugs of metformin hydrochloride are imported in Ukraine by the Israel manufacturer "TEVA Pharmaceutical Industries Ltd". Three trade names of drugs are imported from Argentina, Luxembourg, Korea, Netherland, Slovakia, Turkey. "Galenika a.d." supplies 2 drugs from Serbia. Oral hypoglycemic drugs are imported in Ukraine from Greece, Denmark, Slovenia, Finland. Along with highly effective medications the patients are deprived of the opportunity to take other drugs since there is a monopoly of some imported drugs. Domestic manufacturers offer only generic replacement of some active substance. The Ukrainian enterprises produce only 46 drugs on the basis of metformin hydrochloride, glibenclamide, gliclazide, glimepiride, a combination of metformin hydrochloride and glibenclamide, pioglitazone, isodibut and the medicinal plant raw material. Groups of isodibut and other drugs are formed only by domestic producers. The domestic enterprises producing drugs of these groups are: "Farmak" OJSC, Kyiv; "Kusum Pharm" LLC, Sumy; "Technologist" PJSC, Uman, Cherkasy region; "Pharmex Group" LLC, Boryspil, Kyiv region; "Pharma Start" LLC, Kyiv; Pharmaceutical company "Zdorovye" Ltd., Kharkiv; "Indar" PJSC for production of insulin, Kyiv; "Luhansk Chemical and Pharmaceutical Plant" JSC, Luhansk; "Liktravy" PJSC, Zhitomir; Pharmaceutical factory "Viola" PJSC, Zaporizhzhya; "Lubnypharm" JSC, Lubny, Poltava region. At the pharmaceutical market of Ukraine antidiabetic drugs are available in a variety of dosage forms such as tablets, granules, powders, solutions and suspensions for injection. A tablet form is presented by common tablets, coated tablets, tablets with sustained release and tablets with modified release. All insulins and analogues for injection are produced in the form of solutions or suspensions. 
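The percentage breakdowns reported in this section are frequency counts over register entries. The sketch below shows how such shares can be computed from a tabular extract; the records are invented for illustration and do not reproduce entries of the State Register of Medicines of Ukraine.

# Minimal sketch with hypothetical register records.
import pandas as pd

register = pd.DataFrame({
    "trade_name": ["Drug A", "Drug B", "Drug C", "Drug D", "Drug E"],
    "group": ["oral", "oral", "oral", "insulin", "insulin"],
    "dosage_form": ["tablets", "tablets", "powder",
                    "suspension for injection", "solution for injection"],
    "country": ["India", "Ukraine", "Germany", "Denmark", "Ukraine"],
})

# Share of each dosage form within a therapeutic group, as a percentage.
form_share = (register.groupby("group")["dosage_form"]
              .value_counts(normalize=True)
              .mul(100).round(2))
print(form_share)

# Domestic versus imported share of the whole assortment.
origin_share = (register["country"].eq("Ukraine")
                .map({True: "domestic", False: "imported"})
                .value_counts(normalize=True).mul(100).round(1))
print(origin_share)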
The results of analysis of the drug assortment presented in Fig. 2 indicate the dis- tribution of drugs by the dosage form and the route of administration taking into account the number of proposals for this form at the market. Drugs of insulins and analogues for injection are available in two dosage forms. The suspension for injection is 59.18% of the assortment, 40.82% is in the form of the solution for injection. The suspension for injection is dispensed in bottles (17 positions), cartridges a carton box (20), cartridges inserted in multi-dosage disposable syringe pen (7). Among the solutions for injection 16 short-acting insulins and 4 long-acting drugs are proposed. Most of the drugs are produced simultaneously in several forms of packaging: in bottles -9 trade names, in cartridges -10, in cartridges built in sealed disposable syringe pens -5 and in syringe pens -3. Among the oral hypoglycemic drugs registered the solid dosage forms quantitatively prevail: tablets constitute 93.63%, granules -1.91%, powders -3.82%, while the liquid dosage forms are only 0.64%. This is due to ease of use and accuracy of dosing of tablets. Currently, there is a clear tendency to increase the efficiency of pharmacotherapy in the treatment of type 2 diabetes. Offers of tablet hypoglycemic drugs are divided between tablets (83 drugs), coated tablets (46), tablets with sustained release (6) and tablets with modified release. Twelve names of tablets based on gliclazide are produced with modified release of the active substance. In a powder form there are 6 drugs based on the medicinal plant raw material, including the valves of the bean fruit, blueberry shoots, and 4 herbal teas. Granules are presented by 3 trade names. Guarem is produced in 5 g granules in packs and 500 g containers with a measuring spoon. Granules from fruits of Saint-Mary-thistle are produced under the brand names of Silysem® and Gipoglisil ® . They are dispensed in single-dose bags, single-dose coupled bags and in jars with a measuring spoon. At the same time, hypoglycemic drugs are produced in the form of a solution for injection. Thus, Victoza ® is offered in cartridges inserted in pre-filled multi-dosage disposable syringe pen and in filled syringe pens in a carton box. Thus, the studies conducted have shown that the market of antidiabetic drugs is characterized by heterogeneity of the product lines, high concentration and monopolization of manufacture, low competition and a small share of production of domestic drugs. CONCLUSIONS 1. The marketing research of antidiabetic drugs presented at the pharmaceutical market of Ukraine has been conducted, and the structure of the product groups registered in Ukraine has been described. 2. The market of drugs for treating diabetes has been characterized according to the pharmacotherapeutic groups, manufacturing countries and dosage forms. 3. Quantitative and qualitative diversity of the current assortment of antidiabetic drugs presented by foreign companies and domestic manufacturers has been determined.
Alteration of GLP-1/GPR43 expression and gastrointestinal motility in dysbiotic mice treated with vancomycin Gut microbiota plays a pivotal role in various aspects of host physiology, including metabolism, gastrointestinal (GI) motility and hormonal secretion. In the present study, we investigated the effect of antibiotic-associated dysbiosis on metabolism and GI motility in relation to colonic expression of glucagon-like peptide-1 (GLP-1) and G protein coupled receptor (GPR)43. Specific pathogen-free (SPF) mice (ICR, 6 weeks old, female) were orally administered vancomycin (0.2 mg/ml) in drinking water for 7 days. In another experiment, germ-free (GF) mice (ICR, 6 weeks old, female) were subjected to oral fecal transplantation (FT) using a fecal bacterial suspension prepared from SPF mice that had received vancomycin treatment (FT-V) or one from untreated control SPF mice (FT-C). The gastrointestinal transit time (GITT) was measured by administration of carmine red (6% w/v) solution. The expression of GLP-1 and GPR43 was examined by immunohistochemistry and realtime RT-PCR, and the plasma GLP-1 level was measured by ELISA. In vancomycin-treated SPF mice, the diversity of the gut microbiota was significantly reduced and the abundance of Lactobacillus was markedly increased. Significant increases in body weight, cecum weight, plasma GLP-1 level and colonic GLP-1/GPR43 expression were also noted relative to the controls. These alterations were reproducible in GF mice with FT-V. Moreover, FT-V GF mice showed a significantly increased food intake and a significantly prolonged GITT in comparison with FT-C GF mice. Vancomycin-induced dysbiosis promotes body weight gain and prolongs GITT, accompanied by an increase of colonic GLP-1/GPR43 expression. Antibiotic treatment and fecal transplantation. To create dysbiotic conditions for gut microbiota, SPF mice were orally administered vancomycin (0.2 mg/ml; Sigma, Saint Louis, MO, USA) in drinking water for seven days, whereas controls were supplied with untreated water 9,10 . To examine the effect of dysbiotic flora on host physiology, fecal transplantation (FT) was performed as reported previously 11,12 . The fecal suspensions were freshly prepared from SPF mice after seven days of vancomycin treatment by 10-fold dilution of colonic content with saline, and then orally administered to GF mice to reconstitute the dysbiotic intestinal flora. As controls, fecal suspensions from SPF mice that had not received vancomycin treatment were similarly administered to GF mice. After FT, the GF mice were housed under SPF conditions for five weeks. Body weight and 24-h food intake were monitored weekly. To measure the amount of food intake for mice, the experimental mice was housed and feed in a cage separately for 24 hours, the weight of food was measure before and after. The 24 h food intake was calculated as the difference between before and after food weight. At the end point of the experiments, the mice were fasted for 4 h before sacrifice. The length of the small intestine and colon, and the weight of the cecal content, were measured. The GI tissues were removed from the mice, cut open along the longitudinal axis, rinsed with saline, and fixed in neutral aqueous phosphate-buffered 10% formalin for histological examination or stored in nitrogen liquid for real-time RT-PCR. Real-time Rt-pCR. Total RNA was isolated from GI tissues with Trizol reagent (Invitrogen, Carlsbad, CA). 
Total RNA (4 ug) was reverse-transcribed using oligo-dT primer (Applied Biosystems, Branchburg, NJ), and real-time RT-PCR was performed using 7900 H Fast Real-Time PCR System (Applied Biosystems) as previously described 13 . The set of primers for mouse proglucagon, GPR43, and GAPDH were prepared as shown in Table 1. Figure 1. Effect of treatment with vancomycin for seven days on gut microbiota. (a) Alpha-diversity of the gut microbiota. Shannon index calculated from the observed OTU numbers of intestinal microbiota samples from control and vancomycin-treated mice. (b) Relative abundance of intestinal bacteria. The relative abundance of each bacterial genus was analyzed by next-generation sequencing of bacterial 16S rDNA. The results are presented as the mean ± SE (n = 3 in each group). Significant differences between the control and vancomycintreated groups at *P < 0.01 and **P < 0.001. Statistical significance was determined by Welch's t test with Benjamini-Hochberg correction. Immunohistochemistry. Immunohistochemical staining for GLP-1 and GPR43 was performed with an Envision Kit (Dako, Kyoto, Japan) according to the manufacturer's protocol, using anti-GLP-1 antibody (dilution 1:1000; Abcam, Cambridge, UK), anti-GPR43 antibody (dilution 1:50; MyBioSource, Diego, USA). In brief, the sections were deparaffinized, rehydrated, and treated by microwave heating for 20 min in 1 Dako REAL Target Retrieval Solution (Dako Denmark, Glostrup, Denmark) as previously described 14 . To quench endogenous peroxidase activity, the sections were preincubated with 0.3% H 2 O 2 in methanol for 20 min at room temperature. The sections were then incubated with primary antibodies for 60 min at room temperature. Thereafter, the slides were washed in PBS, incubated with horseradish peroxidase-conjugated secondary antibody for 30 min, visualized by 3,3′-diaminobenzidine tetrahydrochloride with 0.05% H 2 O 2 for 3 min, and then counterstained with Mayer's hematoxylin. The number of GLP-1-positive and GPR43-positive epithelial cells were evaluated as follows: Five sections in each mouse were prepared for the small intestine and colon, respectively. The positive cells were counted in at least five different visual fields in a 1,000-μm stretch of the entire length with well-oriented tissue sections, and the average was calculated in each mouse. www.nature.com/scientificreports www.nature.com/scientificreports/ eLIsA assay. Blood samples were collected into 1.5-ml tubes containing 2 mg EDTA-2Na (Wako, Osaka, Japan) and 15 μl dipeptidyl peptidase IV inhibitor (Merck, NJ, USA), an enzyme that degrades active GLP-1 into its inactive form. Blood samples were centrifuged at 1300 × g for 10 min at 4 °C to isolate the plasma. ELISA assay kits for active GLP-1 were obtained from IBL (Gunma, Japan) and utilized according to the manufacturer's instructions to determine active GLP-1 levels using a SpectraMax Plus 384 Microplate Reader (Molecular Devices, California, USA). Gastrointestinal transient time. GI transient time (GITT) was measured as previously described 11,15 . In brief, the mice received orally 0.3 mL of 0.5% methylcellulose solution including 6% carmine red (Wako, Osaka, Japan). After administration of the solution, mice were left free for food and water ad libitum until the first red fecal pellet appeared. GITT was determined as the time period between the gavage and the appearance of the first red fecal pellet 16 . extraction of DNA from fecal samples. 
Extraction of bacterial DNA was performed as described previously 17 . In brief, the fresh fecal samples were resuspended in a solution containing 450 μl of extraction buffer (100 mM Tris-HCl, 40 mM EDTA; pH 9.0) and 50 μl of 10% sodium dodecyl sulfate. Then, 300 μg of glass beads (diameter, 0.1 mm) and 500 μl of buffer-saturated phenol were added to the suspension, and 400 μl of the supernatant was collected. The DNA was eluted from the supernatant by phenol-chloroform method. Illumina library generation and DNA sequencing. Analysis of the 16S rDNA of the microbial community present in feces was performed in accordance with a method described previously 18 with minor modifications. In brief, the V3-V4 region of 16S rDNA was amplified using the primers as previously reported 18 , and then ligated with overhang Illumina adapter consensus sequences. After PCR reactions,the amplicon was purified using AMPure XP magnetic beads (Beckman Coulter, Brea CA, USA). The Illumina Nextera XT Index kit (Illumina) with dual 8-base indices was used to allow for multiplexing. To incorporate two unique indices to the 16S amplicons, PCR reactions were performed as previously described 19 . The libraries were purified by AMPure XP beads, quantified fluorometrically using a QuantiT PicoGreen ds DNA Assay Kit (Invitrogen, Paisley, UK) and then diluted to 4 nM using 10 mM Tris-HCl (pH 8.0), followed by pooling of the same volume for multiplex sequencing. The multiplexed library pool (10 pM) was spiked with 40% PhiX control DNA (10 pM) to improve base calling during sequencing. Sequencing was conducted using a 2 × 250-bp paired-end run on a MiSeq platform with MiSeq Reagent Kit v2 chemistry (Illumina). DNA sequence analysis. Demultiplexing and removal of indices were performed using the MiSeq Reporter software (Illumina) as previously reported 19 . Filtering out of low-quality sequences, removal of chimera www.nature.com/scientificreports www.nature.com/scientificreports/ sequences, construction of operational taxonomic units (OTUs), and taxonomy assignment were conducted using the Quantitative Insights Into Microbial Ecology (QIIME) pipeline (http://qiime.org/) 20 . In brief, 30000 raw reads were randomly obtained from the sequence files for each sample and merged by fastq-join with the default setting. The sequence reads with an average quality value of <25 were removed, and then chimera-checked. Five thousand high-quality sequence reads were randomly obtained for each sample, and OTUs for total high-quality reads were constructed by clustering with a 97% identity threshold. The representative reads of each OTU were then assigned to the 16S rRNA gene database by using UCLUST with ≥97% identity. Each taxon in gut microbiota was compared at genus level. The Shannon index was calculated to investigate the alpha diversity of microbiota in the samples. statistical analysis. All statistical analyses were conducted with the R statistical software version 3.1.3 21 . Data are expressed as means ± SE. Significance of differences between two animal groups was analyzed by Mann-Whitney U-test. Difference were considered to be significant at P < 0.05. In the analyses of gut microbiota, statistical significance was determined by Welch's t test with Benjamini-Hochberg correlation. Results Effect of vancomycin treatment on the structure of gut microbiota in mice. To confirm whether vancomycin treatment caused dysbiosis in the experimental mice, we analyzed gut microbiota profile. 
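The two statistical steps described above, the Shannon index for alpha diversity and per-genus Welch's t-tests with Benjamini-Hochberg adjustment, can be sketched as follows. The OTU count table is hypothetical rather than the study's sequencing data, and the Benjamini-Hochberg step is written out by hand rather than taken from the QIIME pipeline used in the analysis.

# Minimal sketch, assuming toy genus-level counts (rows = samples, columns = genera).
import numpy as np
from scipy.stats import entropy, ttest_ind

control = np.array([[120, 300, 80, 500], [150, 280, 90, 480], [130, 310, 70, 490]])
vanco   = np.array([[600, 40, 20, 340], [650, 30, 25, 295], [620, 35, 15, 330]])

def shannon(counts):
    p = counts / counts.sum()
    return entropy(p)  # Shannon index H' = -sum(p * ln p), natural log

def benjamini_hochberg(pvals):
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    ranked = p[order] * len(p) / (np.arange(len(p)) + 1)
    adjusted = np.minimum.accumulate(ranked[::-1])[::-1]  # enforce monotonicity
    out = np.empty_like(adjusted)
    out[order] = np.clip(adjusted, 0, 1)
    return out

print("Shannon (control):   ", [round(shannon(s), 2) for s in control])
print("Shannon (vancomycin):", [round(shannon(s), 2) for s in vanco])

# Welch's t-test per genus on relative abundances, then BH adjustment.
rel_c = control / control.sum(axis=1, keepdims=True)
rel_v = vanco / vanco.sum(axis=1, keepdims=True)
pvals = [ttest_ind(rel_c[:, j], rel_v[:, j], equal_var=False).pvalue
         for j in range(rel_c.shape[1])]
print("BH-adjusted p-values:", np.round(benjamini_hochberg(pvals), 4))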
The alpha-diversity of the gut microbiota was significantly lower in vancomycin-treated mice than in the controls (Fig. 1a). Moreover, we examined the genera of gut microbiota present in the experimental mice. Among 10 major genera, Lactobacillus was markedly increased in the vancomycin-treated mice (Fig. 1b). In addition, Escherichia was significantly increased, whereas Blautia was less abundant in vancomycin-treated mice than in the controls (Fig. 1b). Effect of vancomycin treatment on body weight and intestinal morphology in mice. Body weight increased according to body growth in both the control and vancomycin-treated groups. The percentage increase in body weight was significantly greater in vancomycin-treated mice from 3 days after the start of the experiment (Fig. 2a). Observation of intestinal morphology demonstrated that the cecum was apparently enlarged www.nature.com/scientificreports www.nature.com/scientificreports/ in the vancomycin-treated mice relative to the controls (Fig. 2b). Although the lengths of the small intestine and colon did not differ between the two groups, cecum weight was significantly greater in the vancomycin-treated mice (Fig. 2c-e). The amount of food-intake was greater in the vancomycin-treated mice compared with control (Fig. 2f). GITT was significantly prolonged in the vancomycin-treated mice relative to control (Fig. 2g). Effect of vancomycin treatment on expression of GLP-1 and GPR43 in the colon of mice. GLP-1 was expressed in colonic epithelial cells with an ovoid or pyramidal shape (Fig. 3a). The number of GLP-1-positive cells in the colonic mucosa was significantly greater in vancomycin-treated than in untreated mice (Fig. 3a,b). Consistent with this finding, the expression of mRNA for proglucagon (the gene encoding GLP-1) was significantly increased in the mice that had received vancomycin (Fig. 3c), and moreover, the plasma GLP-1 level was significantly elevated in those mice relative to the controls (Fig. 3d). Immunoreactivity for GPR43 was also localized in the ovoid or pyramidal epithelial cells of the colonic mucosa, the morphology being consistent with gut endocrine cells (Fig. 4a). The number of GPR43-positive cells in the colonic mucosa was significantly increased in the mice treated with vancomycin (Fig. 4a,b), and the level of expression of GPR43 mRNA tended to be higher in those mice relative to the controls (Fig. 4c). www.nature.com/scientificreports www.nature.com/scientificreports/ Effect of vancomycin-induced gut microbiota alteration on gastrointestinal morphology and physiology. To examine whether the characteristic features evident in vancomycin-treated mice were due to alterations of gut microbiota, we transplanted the gut flora from those mice into GF mice. From 2 weeks after the start of the experiment, GF mice that had undergone FT using samples from vancomycin-treated mice (FT-V) showed a greater gain in body weight. At 5 weeks after FT, the gain of body weight was significantly greater in GF mice with FT-V than in those that had undergone FT using samples from control mice (FT-C) (Fig. 5a). We then studied the changes in food intake in the two groups. Similarly to body weight, food intake became greater in GF mice with FT-V from 2 weeks after the start of the experiment, and these mice subsequently showed a significant increase at 5 weeks (Fig. 5b). Moreover, we found that the GITT was significantly prolonged in the GF mice with FT-V relative to GF mice with FT-C (Fig. 5c). 
Histological investigation demonstrated enlargement of the cecum in GF mice with FT-V, being similar to that in the vancomycin-treated mice (Fig. 5d). The lengths of the small intestine and colon did not differ between GF mice with FT-V and GF mice with FT-C (Fig. 5e,f), but cecum weight was significantly greater in the former (Fig. 5g), being compatible with the relationship between the vancomycin-treated mice and the controls. Effect of gut vancomycin-induced microbiota alteration on expression of GLP-1 and GPR43 in the colon. The number of GLP-1-positive cells in the colonic mucosa was significantly higher in GF mice with FT-V than in GF mice with FT-C (Fig. 6a,b). The expression of proglucagon mRNA in the colon was also significantly increased in GF mice with FT-V (Fig. 6c), and in fact the plasma GLP-1 level was significantly elevated in those mice relative to GF mice with FT-C (Fig. 6d). We also investigated the expression of GPR43 in the colonic mucosa of GF mice with FT. GPR43 immunoreactivity was observed in epithelial cells such as endocrine cells, and the number of immunoreactive cells was significantly higher in GF mice with FT-V than in those with FT-C (Fig. 7a,b). Although the difference was not statistically significant, the level of expression of GPR43 mRNA tended to be higher in GF mice with FT-V than in those with FT-C (Fig. 7c). www.nature.com/scientificreports www.nature.com/scientificreports/ Discussion It has recently been reported that commensal gut microbiota are involved in the regulation of GLP-1 6,11 , which plays a pivotal role in not only insulin-associated energy metabolism but also GI motility 22 . In the present study, we investigated the effect of dysbiosis on GLP-1 expression and found that the expression of GLP-1 was increased in the colon of mice that had been treated with vancomycin. Supporting our data, a few studies have reported that the number of GLP-1-positive cells and/or the plasma GLP-1 level is increased in mice after treatment with vancomycin alone or a combination of vancomycin and other antibiotics 10,23 . GLP-1 is produced by enteroendocrine L cells and its production and secretion are regulated by carbohydrates, fatty acids, amino acids and hormonal factors 24 . Therefore, such factors are likely involved in the enhancement of GLP-1 expression resulting from vancomycin treatment. In particular, since short-chain fatty acids (SCFAs) are produced mainly by gut microbiota and become the energy source for colonic epithelial cells 1 , SCFAs may play a key role in mechanism by which vancomycin-induced dysbiosis causes enhancement of GLP-1 expression in the intestinal tract 10 . In this study, we were unable to evaluate the specific SCFAs involved because of methodological limitations; however, we have clarified that the expression of GPR43, a possible receptor for SCFA in GLP-1-producing L cells, is enhanced in colonic epithelial cells. This finding suggests that GLP-1-producing cells might be sensitive to extracellular stimuli, and partly involved in the enhancement of GLP-1 expression. Although several phenotypic characteristics, including enhancement of GLP-1/GPR43 expression, an increase of body weight gain and enlargement of the cecum, was observed in dysbiotic mice after vancomycin treatment, it was still debatable whether those characteristics were in fact due to alteration of the gut microbiota. 
Therefore, we subjected GF mice to transplantation of material from vancomycin-treated mice to clarify whether the above features were reproducible. This revealed that GF mice with FT-V not only showed an increase in the basal GLP-1/GPR43 level and body weight gain but also enlargement of the cecum, supporting the contention that vancomycin-induced dysbiosis was related to those characteristics. Examination of the gut microbiota profile revealed a marked increase of Lactobacillus in mice after vancomycin treatment, being compatible with the findings of a few recent studies 8,9,25 . Interestingly, it has been reported that administration of probiotic Lactobacillus strains promotes not only SCFA production 26 but also GLP-1 secretion 27,28 . Together, these findings suggest that www.nature.com/scientificreports www.nature.com/scientificreports/ the increase of GLP-1 expression in vancomycin-treated mice is linked to the marked increase of gut Lactobacillus strains in those mice. What is the role of enhanced GLP-1 expression in mice with vancomycin-induced dysbiosis? GLP-1 plays a role in not only energy metabolism but also GI motility, and therefore we investigated the effect of vancomycin-induced dysbiosis on the GLP-1/GI motility axis. This revealed that the GF mice with FT-V had a suppressed GI motility accompanied by up-regulation of GLP-1. It still remains unclear whether these alterations of GI motility and gut hormone balance are functional disorders resulting from gut dysbiosis or simply a reaction to dysbiosis-associated pathophysiology. Although we are unable to address this significant issue, the metabolic disorders such as increased body weight and food intake in GF mice with FT-V are of interest. It is known that antibiotic treatment, especially in early life, alters the structure of the gut flora and is frequently linked to the development of obesity 29 . Indeed, vancomycin treatment appears to lead to an increase of body weight and/or body fat in mice 25,29 , consistent with our data. In the present study, we found that Lactobacillus is increased in the vancomycin-treated mice whose body growth is promoted. In this context, it is interesting that Lactobacillus is increased in obese with insulin resistance 30,31 and moreover, Lactobacillus species are widely used as growth promoters in the farm industry 32 . On the other hand, it has been known that the increase of GLP-1 is likely found in obese patients with insulin resistance 33 . Although we have no exact answer for the discrepancy between the promotion of food intake/body weight gain and the increase of appetite suppressive GLP-1, it is tempting to speculate that the up-regulation of GLP-1 may be a protective reaction against the dysbiosis-associated glucose and/or lipid metabolism dysfunction. If the increase in expression of GLP-1 is a reactive response to obesity, GLP-1-associated suppression of GI motility would be useful to discourage food intake. On the other hand, it is still unclear whether the amount of SCFA, which acts as an energy source for colonic epithelial cells, is increased or decreased in vancomycin-treated mice 6,29 . From the view point of energy harvest in the colonic lumen, GLP-1-associated suppression of GI motility may be helpful to reduce intake of any source material for bacterial fermentation when the amount of SCFA is increased in the colon 1 . 
In contrast, when the amount of colonic SCFA is decreased, suppression of GI motility may be advantageous for absorption of SCFA by colonic epithelial cells 6. In this context, it seems very difficult to interpret the significance of the altered GLP-1/GI motility axis in GF mice with FT-V. In summary, we have shown that treatment of mice with the antibiotic vancomycin causes dysbiosis of gut microbiota and increases the expression of GLP-1 and GPR43 in the colonic mucosa. Moreover, we have demonstrated that the enhancement of GLP-1 and GPR43 expression is reproducible in GF mice with FT-V, accompanied by an increase of body weight gain and prolongation of the GITT. These findings confirm that vancomycin-induced dysbiosis is responsible for the increase of GLP-1 expression and the development of an obese phenotype, although it is still unclear whether the alteration of GI motility represents a protective reaction against dysbiosis-associated pathophysiology. In this context, further studies will need to investigate the metabolites present in the GI tract and/or the effect of probiotics on vancomycin-associated dysbiosis and its related pathophysiology. Data Availability All data generated or analyzed during this study are included in this published article.
v3-fos-license
2016-03-22T00:56:01.885Z
2011-09-01T00:00:00.000
11957169
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2072-6694/3/3/3601/pdf", "pdf_hash": "8b0447cc21b10d39616c9a8d84d8394e9af6bb43", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42230", "s2fieldsofstudy": [ "Medicine" ], "sha1": "8b0447cc21b10d39616c9a8d84d8394e9af6bb43", "year": 2011 }
pes2o/s2orc
Type I Collagen Synthesis Marker Procollagen I N-Terminal Peptide (PINP) in Prostate Cancer Patients Undergoing Intermittent Androgen Suppression Intermittent androgen suppression (IAS) therapy for prostate cancer patients attempts to maintain the hormone dependence of the tumor cells by cycles alternating between androgen suppression (AS) and treatment cessation till a certain prostate-specific antigen (PSA) threshold is reached. Side effects are expected to be reduced, compared to standard continuous androgen suppression (CAS) therapy. The present study examined the effect of IAS on bone metabolism by determinations of serum procollagen I N-terminal peptide (PINP), a biochemical marker of collagen synthesis. A total of 105 treatment cycles of 58 patients with prostate cancer stages ≥pT2 was studied assessing testosterone, PSA and PINP levels at monthly intervals. During phases of AS lasting for up to nine months PSA levels were reversibly reduced, indicating apoptotic regression of the prostatic tumors. Within the first cycle PINP increased at the end of the AS period and peaked in the treatment cessation phase. During the following two cycles a similar pattern was observed for PINP, except a break in collagen synthesis as indicated by low PINP levels in the first months off treatment. Therefore, measurements of the serum PINP concentration indicated increased bone matrix synthesis in response to >6 months of AS, which uninterruptedly continued into the first treatment cessation phase, with a break into each of the following two pauses. In summary, synthesis of bone matrix collagen increases while degradation decreases during off-treatment phases in patients undergoing IAS. Although a direct relationship between bone matrix turnover and risk of fractures is difficult to establish, IAS for treatment of biochemical progression of prostate tumors is expected to reduce osteoporosis in elderly men often at high risk for bone fractures representing a highly suitable patient population for this kind of therapy. Introduction Prostate cancer is among the most common types of malignancies and causes of cancer-related deaths in men worldwide. Patients with tumors, which are advanced at presentation or relapsed following radical prostatectomy have a dismal prognosis [1,2]. Treatment traditionally consists of androgen suppression (AS) of the growth of cancer cells either by orchidectomy or the use of LHRH analogs and steroidal or nonsteroidal antiandrogens, respectively [3]. AS is conventionally performed in a continuous regimen and results in apoptotic regression of the tumors in most cases. However, continuous androgen suppression (CAS) controls tumor growth for only two to three years until hormone-resistant cancers, which respond poorly to any further therapy including treatment with chemotherapeutics, appear [4,5]. Therefore, intermittent androgen suppression (IAS) was proposed as a novel clinical concept assuming that tumorigenic stem cells are residing in an androgen-sensitive state during limited regrowth in treatment cessation periods [6,7]. Since then, regrowing tumors in patients undergoing IAS were consistently shown to be sensitive over several cycles of androgen withdrawal and this kind of therapy resulted in improved quality of life [7][8][9]. Meanwhile, phase III studies established IAS as therapy equivalent to CAS in respect to survival leading to the proposal of IAS as standard therapy for progressive prostate cancer [10,11]. 
Besides other side effects, CAS results in an increased incidence of osteoporosis and concomitant bone fractures [12,13]. Therefore, it was expected that off-treatment periods of IAS would allow for recovery of testosterone levels and cessation of bone matrix degradation. Higano et al. observed that loss of bone mineral density (BMD) after nine months of AS was significantly greater than the expected 0.5-1% annual decrease; however, interruption of AS attenuated the rate of bone degradation without full recovery, and other clinical studies were inconclusive [14,15]. Apart from clinical assessments of BMD, no biochemical studies on bone metabolism had been reported prior to our publication on collagen degradation products. In analogy to other investigations of bone matrix turnover, we quantified degradation using the serum marker CrossLaps® in IAS patients and found increased breakdown at the end of the AS phase, in contrast to reduced breakdown during the treatment cessation period [16]. In order to assess synthesis of bone matrix, levels of PINP, a maturation peptide of collagen I, were retrospectively analyzed in serum samples in the present study to obtain a more complete characterization of bone turnover during IAS [17]. Individual Course of Testosterone and PINP Levels under IAS Individual time courses of concentrations of testosterone and PINP for a representative patient undergoing IAS are depicted in Figure 1. The figure shows the values of the laboratory parameters, measured at monthly intervals, for the treatment and cessation periods. Testosterone repeatedly dropped during the AS periods and recovered significantly during the treatment breaks. Concentrations of PINP began to rise after three to four months of AS and then decreased, albeit incompletely, during treatment cessation periods, reaching their lowest values concomitant with the respective testosterone peaks. Mean Course of PSA under IAS The mean time course of serum concentrations of PSA (mean ± SEM) for a total of 105 IAS cycles observed in 58 patients is shown in Figure 2. AS triggered decreases of testosterone to values <1 ng/mL, which were followed by recovery to baseline levels in the third month of treatment cessation (data not shown). During AS, all patients showed reversible declines in PSA production to a mean level of <2 ng/mL for four cycles. Treatment cessation led to reappearance of PSA for four cycles to 16.3 ± 3.1, 13.0 ± 2.8, 10.8 ± 3.1 and 15.8 ± 5.0 ng/mL, respectively. Since some observations are lacking for the treatment cessation periods, the cycle lengths specified in Section 2.1 are actually longer than presented in Figure 2. Mean Course of PINP under IAS The mean time course of PINP concentrations analyzed for a total of 105 cycles of 58 patients is shown in Figure 3 (mean ± SEM). Measurements for observations 7-17, 28, 35-39 (except 36) and 52-55 were significantly different from the pretreatment value (P < 0.05). Therefore, bone matrix anabolism increased three months before the first AS period ended and peaked in the treatment cessation period (months 4-6) prior to a return to a minimum just ahead of the second AS phase. Further significant elevations were observed during the second AS phase (months 2 and 9) as well as in the following treatment pause (months 4-9) and, finally, during the third treatment cessation period (months 2-5).
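The monthly mean ± SEM summaries plotted in Figures 2 and 3 can be reproduced in a few lines. The sketch below is illustrative only, with invented PSA values and hypothetical column names; it is not the authors' original analysis code.

```python
# Minimal sketch: per-month mean and standard error of a serum marker across IAS cycles.
# The values and column names are invented for illustration.
import pandas as pd
from scipy import stats

measurements = pd.DataFrame({
    "cycle_id": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "month":    [1, 2, 3, 1, 2, 3, 1, 2, 3],                     # month within the cycle
    "psa":      [8.1, 3.2, 1.4, 9.0, 2.8, 1.1, 7.5, 3.9, 1.8],   # ng/mL
})

summary = measurements.groupby("month")["psa"].agg(
    mean="mean",
    sem=lambda x: stats.sem(x, ddof=1),
    n="count",
)
print(summary)   # one row per month: mean, SEM and number of observations
```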
Discussion Advanced-stage adenocarcinoma of the prostate is treated by surgical and/or antiandrogenic hormone ablation [2,18]. CAS exerts selective pressure on the tumor cells, invariably resulting in outgrowth of variants adapted to very low androgen concentrations or relying on androgen-independent proliferative stimuli [19]. In contrast, IAS attempts to prolong the hormone dependence of tumor cells by allowing for limited regrowth of hormone-sensitive cells between suppression periods to hold the tumor at bay [6,7]. Phase III studies provide solid evidence that IAS is not inferior to CAS in terms of survival for selected patient groups, although hormone dependence of the tumor cells may not be lengthened [10,20]. Assuming equipotency of these therapies, the nature and severity of side effects and the costs of each regimen will be decisive for its clinical use. In detail, adverse effects of CAS include skeletal, metabolic and cardiovascular complications, sexual dysfunction, hot flashes, as well as cognition and mood disorders [13]. In particular, it was demonstrated that CAS reduced BMD, which led to an increased risk of skeletal fractures [12,21]. In the largest study of men receiving CAS (390 patients), the prevalence of osteoporosis was 35% in hormone-naïve patients, 43% after two years of CAS and 81% after ten or more years [22]. Several groups investigated the effects of IAS on BMD and reported reduction of bone loss upon prolonged treatment. Higano et al. described increased bone loss during the AS phase of IAS and partial recovery during cessation [14]. Spry et al. reported significant improvement of hip BMD following two years of IAS, which was dependent on testosterone recovery [15]. Malone et al. did not notice any increase of osteoporosis in patients under IAS compared to data from age-matched individuals without prostate cancer [23]. Hence, from the clinical measurements of BMD in limited groups of IAS patients, it is still not clear whether AS-induced bone loss is reversed during the off-treatment periods. The effect of IAS on BMD may be studied quantitatively using biochemical markers of bone metabolism, provided there is no interference from bone metastatic lesions [17,24]. Collagen I accounts for more than 90% of the organic matrix of bone and is synthesized by osteoblasts and degraded by osteoclasts during remodeling. Crosslinked degradation telopeptide fragments of collagen I can be measured by a CrossLaps® ELISA [25,26]. We recently reported that collagen degradation, marked by elevated levels of CrossLaps® at the end of the AS phases, was reduced below pretreatment concentrations during the treatment cessation periods of IAS [16]. Synthesis of collagen can be assessed through measurements of PINP, which is cleaved from newly formed procollagen chains [27,28]. Surprisingly, the first AS cycle stimulated collagen production during the last months of hormone ablation, and this effect continued well into the first treatment cessation period until it returned to baseline levels prior to the next AS phase. Sporadic elevations of PINP were detected during the subsequent AS periods, followed by significant increases during off-treatment phases in the second and third IAS cycle. This combination of decreased degradation of collagen I, as indicated by measurements of CrossLaps®, and increased production, demonstrated by quantitation of PINP, is expected to limit net loss of bone matrix and reduce AS-induced osteoporosis.
These findings confirm the positive effects on BMD reported from other clinical IAS studies [29]. IAS seems to be most suitable for elderly men who show biochemical progression following prostatectomy and/or irradiation and who likewise comprise the population at greatest risk for bone fractures. Furthermore, a significant fraction of the same group of patients exhibited prolonged responses to the first AS phase of IAS and accomplished a subsequent treatment cessation period of up to several years [30]. Limiting the exposure to AS constitutes the simplest method to reduce side effects and to avoid the problems and costs associated with the medical treatment of osteoporosis by calcium/vitamin D supplementation or drugs like bisphosphonates or a monoclonal antibody [21]. Additionally, intermittent hormone deprivation may result in a reduction of further side effects, especially metabolic and cardiovascular complications [11]. Study Population and Treatment All patients gave written informed consent according to the approval guidelines of the ethics committee. Between June 1993 and August 2003, all patients with disseminated adenocarcinoma of the prostate who fulfilled the inclusion criteria (histologically confirmed tumors of stage ≥T2, no pretreatment with either hormone ablation or chemotherapy, and rising PSA levels) were recruited for our nonrandomized open IAS trial. Treatment consisted of an initial nine-month course of AS (LHRH agonist goserelin acetate/Zoladex® and antiandrogen cyproterone acetate/Androcur®) followed by treatment cessation and resumption of the therapy as soon as PSA increased above 4 or 20 ng/mL for local or metastatic disease, respectively [31]. From 2003 on, the LHRH agonist Leuprorelin/Trenantone® was used for the nine months of AS. Follow-up examinations included digital rectal examination, transrectal sonography, and yearly chest X-rays and bone scans. Laboratory Measurements Blood samples were taken from each patient prior to treatment and at monthly intervals thereafter, and serum was stored at −80 °C. Serum testosterone concentrations were measured using an ELISA (Biomar Diagnostics, Marburg, Germany) according to the manufacturer's instructions. PSA levels were determined by a microparticulate enzyme immunoassay (MEIA, AxSYM PSA assay, Abbott, Wiesbaden, Germany), and the CrossLaps® ELISA was obtained from Nordic Bioscience Diagnostics (Herlev, Denmark) and used according to the manufacturer's instructions. All determinations were done in duplicate. Statistical Analysis Student's t-test was used for the statistical analyses. P < 0.05 was considered statistically significant. All calculations were done using the Statistica software package (Statsoft, Tulsa, OK, USA). Conclusions In conclusion, measurements of serum CrossLaps® levels revealed significantly increased bone matrix degradation in prostate cancer patients at the end of the AS phases of IAS cycles, which was rapidly reversed during the treatment cessation periods [16]. According to the results from our present study, tracking of the course of PINP levels shows increased synthesis of bone matrix collagen I in response to limited periods of hormone deprivation in IAS. These findings indirectly corroborate the decreased loss of BMD in bone scans in prostate cancer patients under IAS therapy, and determination of markers of bone turnover alongside bone scans is recommended in IAS patients to establish a quantitative relationship.
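The Statistical Analysis paragraph above specifies Student's t-tests of each observation point against the pretreatment value, with P < 0.05 considered significant. A minimal sketch follows; the PINP values are invented, and an unpaired test is assumed because the pairing structure of the original data is not described here.

```python
# Illustrative comparison of PINP at one follow-up time point against pretreatment values.
# Data are invented; the original analysis was performed in Statistica.
from scipy import stats

pinp_pretreatment = [32.1, 28.4, 41.0, 35.7, 30.2, 44.9]   # ng/mL, hypothetical
pinp_follow_up    = [48.3, 39.9, 55.2, 51.0, 43.8, 60.1]   # ng/mL, hypothetical

t_stat, p_value = stats.ttest_ind(pinp_follow_up, pinp_pretreatment)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant = {p_value < 0.05}")
```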
Thus, elderly patients with a prolonged off-treatment interval and a good long-term prognosis are expected to have reduced bone losses, without additional medication under IAS therapy [21,30].
v3-fos-license
2019-05-03T13:06:56.222Z
2014-08-08T00:00:00.000
54220290
{ "extfieldsofstudy": [ "Psychology" ], "oa_license": "CCBYSA", "oa_status": "GOLD", "oa_url": "https://revistas.um.es/analesps/article/download/analesps.30.3.154691/165071", "pdf_hash": "97d915fa3e1a196a7144f2cc2a6d9e1315e80953", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42231", "s2fieldsofstudy": [ "Psychology" ], "sha1": "559beb4aebf83ca9deb7ad3159fa47b5535f7bd2", "year": 2014 }
pes2o/s2orc
Social support and psychological well-being as possible predictors of complicated grief in a cross-section of people in mourning Abstract: Objective: To analyze variations in complicated grief (CG) across sociodemographic variables and variables of optimal functioning: psychological well-being (PWB), available social support (ASS) and satisfaction with available social support (SASS). Method: A cross-sectional study was done with N = 110 people going to a free bereavement listening center (LC). They were given a questionnaire that included sociodemographic aspects. The Inventory of Complicated Grief by Prigerson (ICG), a Spanish-language adaptation of Ryff's Psychological Well-Being Questionnaire and the abbreviated version of Sarason's Social Support Questionnaire (SSQSR) were used. A descriptive and correlational (Pearson) analysis was carried out. Multiple linear regression was performed using step-by-step backwards elimination. Results: The average CG score was 40.91 (SD = 11.89), the average PWB score 119.23 (SD = 18.75), the average ASS 10.56 people (SD = 6.31) and the average SASS 13.48 (SD = 4.17). The predictive variables of CG level were: PWB, ASS, SASS, months since loss, receiving prior pharmacological assistance and parental relationship. The adjusted R-squared was 42.4%. Conclusions: We can consider PWB and SASS predictors of CG. It would be appropriate to clarify the effect of depressive symptoms on the perception of ASS. This study contributes to a more efficient use of the resource, since CG can be partially predicted through variables that do not involve deterioration of the bereaved person's state. Keywords: complicated grief; bereaved; social support; satisfaction; psychological well-being.
Introduction The loss of a loved one is always accompanied by a subsequent period of adjustment.This is normal and is called mourning (Bermejo, 2005;Howarth, 2011).Sometimes the intensity or the course of the grieving process is altered.In this case we speak of complicated grief (Horowitz, Wilner, Marmar, & Krupnick, 1980;Simon et al., 2011;Vargas, 2003). The process of working through grief, according to Worden (1997;2009), includes four main tasks: I.) Accepting the reality of the loss.During the first few weeks, the prevailing emotions are confusion and bewilderment, along with denial of what has happened.One must accept that the death is real and cannot be changed.II.) Working through emotions and associated pain, i.e., feeling, expressing and talking about all of the emotions and pain that accompany the loss.III.) Adapting to changes in the environment, especially when the bereaved previously lived with the deceased on a day-to-day basis and there was a division of responsibilities / tasks and consequently, an increase in demands upon the bereaved.At this time social support is especially important.IV.) Emotionally relocating the loved one, and continuing to live, that is, accept that memories do not disap-pear but that the person will not return and recognize the need to redirect affection and to love again.Failure to complete these steps increases the risk of suffering from complicated grief. In this way, the work of processing grief is directed not at the reduction of pain associated with the loss but rather, in line with what Worden suggests, to achieve an optimal level of well-being.In the literature on well-being, there is a distinction between two traditions or approaches to the concept of wellness: hedonic well-being, which includes positive feelings and emotions associated with satisfaction and pleasure, and eudaimonic well-being, focusing on optimal functioning on both the individual and social levels (Ryan & Deci, 2001).The concept of eudaimonia goes back to Aristotle, for whom the good life not so much about achieving pleasant feelings but rather realizing one's own potential (Keyes & Annas, 2009). Ryff subscribed to this view and called it psychological well-being.He found 6 basic elements for development and empowerment of the individual: self-acceptance, positive relationships, autonomy, mastery of one's environment, having a purpose in life and experiencing personal growth (Ryff & Singer, 2008). 
Self-acceptance is one of the main criteria of well-being. People should try to feel good about themselves while being aware of their own limitations. Having a positive attitude toward oneself and making a positive assessment of one's own history is a fundamental characteristic of positive psychological functioning (Keyes, Ryff & Shmotkin, 2002). The ability to maintain positive relationships with others is another criterion (Ryff & Singer, 1998). People need to have close, loving relationships with others, the ability to empathize, express affection and experience intimacy; they need friends whom they can trust and with whom they can work towards achieving well-being. The ability to love is a fundamental component of well-being (Allardt, 1996) and consequently of mental health (Ryff, 1989). Another key dimension is autonomy. In order to sustain their own individuality in different social settings, people need to settle into their own convictions (self-determination) and maintain independence and personal authority (Ryff and Keyes, 1995). Autonomous people can better withstand social pressure and are better at self-regulation of their behavior (Ryff & Singer, 2002). Having control of one's environment, i.e., the ability to choose or create enabling environments to meet one's own needs and desires, with the consequent sense of competence and control that it generates, is another feature of healthy functioning. People with a high mastery of their environment have a greater sense of control over the world and are able to influence the things that surround them. Finally, people need to set goals and define a set of objectives that will allow them to give their lives some meaning. They need, therefore, to have a purpose in life. Positive optimal functioning requires not only the above-mentioned traits but also the strength and endurance to develop one's potential, to continue growing as a person and to maximize one's capabilities (Keyes, 2003). This dimension is called personal growth. The sphere of social relationships is presented as a relevant dimension, both in the 3rd and 4th tasks of working through grief (Worden, 1997) and in the concept of psychological well-being through positive relationships (Ryff & Keyes, 1995). Thus, the social support (SS) a person in mourning can count on may be of great importance in the early stages, softening and cushioning the impact of the death of a loved one (damping effect), as well as in the final stages, assisting the improvement and recovery of the bereaved (recovery effect) (Lobb et al., 2010; Mancini, Prati & Bonanno, 2011; Stroebe, Wech, Stroebe & Abakoumkin, 2005).
It proves difficult to find a uniform definition of SS (Sarason, Sarason, Potter & Antoni, 1985;Sarason & Sarason, 2009) and therefore multiple measuring instruments exist (Sarason, Sarason, Shearin & Pierce, 1987;Sherbourne & Stewart, 1991).Based on the definition of social support (SS) that Sarason (1985) offers, it consists of two basic elements: the number of people available to contact in case of need (ASS) and the degree of satisfaction with the ASS (SASS).Also from the perspective of attachment theory, SS focused on the availability of support (or available SS) despite special circumstances (Bowlby, 1982).Larger studies collect what has been defined as negative SS, or number of people who could (or who in fact do) make the bereaved angry or upset, from the size of the social network, represented in the num-ber of people who have provided support in the past month (Burke, Neimeyer & McDevitt-Murphy, 2010;Groot & Kollen, 2013). One of the first attempts to clarify the relationship between SS and psychological well-being in a sample of people in mourning was a study in the early eighties performed on a group of widows during which it was concluded that SS was helpful, harmful or neutral in terms of: 1.) the time at which it occurs during the grieving process, 2.) the type of support and 3.) the source of support (Bankoff, 1983).In this way, we distinguish three areas within the scientific literature: The first includes studies that support the contribution of SS in decreasing the symptoms of grief: reduction of depression (Burke et al., 2010), anxiety (Somhlaba & Wait, 2008) and/or post-traumatic stress (Murphy et al., 2003;Vanderwerker & Prigerson, 2003).For example, people bereaved because of homicide (Burke et al., 2010), and bereaved mothers (Riley et al., 2007) both showed fewer symptoms associated with grief. The second, studies that support the idea of SS and grief being independent of one another: Burke et al. (2010), while evaluating different dimensions of SS, found that neither the perceived SS in general nor the specific SS during mourning had any relation to complicated grief, nor to post-traumatic stress or depression.In the same vein, in a study of spousal bereavement of retirees, although SS was associated with a decrease in depressive symptoms, no effect or buffer was found (that decreases CG) nor recovery (an increase in wellbeing) by means of SS in the grieving process (Stroebe et al., 2005). The third, those who collect evidence of the negative relationship between SS and higher levels of complicated grief: Burke et al. (2010) also found major depression and posttraumatic stress.In addition, depression was associated with the anticipation of negative SS.In this regard, two independent studies have linked a higher state of grief to a lower perception of SS (Kristensen et al., 2010;Ott, 2003). Considering how important it is to address the functional aspects of people in vulnerable situations, the main objective of this study is to analyze the variations in the state of mourning in a sample of people grieving the loss of a loved one and try to determine whether variables such as psychological well-being and social support, essential to explaining optimal functioning, and others related to the loss, predict individual differences. Method Design A correlational study with data collected between March 2010 and March 2011 through a self-reported questionnaire in a listening center (LC) or grief counseling. 
The main variables of the study were state of mourning, psychological well-being and social support. State of mourning encompasses the extent to which changes could occur in the course or intensity of grief (Limonero, Lacasta, García, Maté & Prigerson, 2009). Psychological well-being expresses the level of optimal and healthy functioning (Ryff, 1989). Social support was expressed by available social support (ASS), meaning the number of people you know, excluding yourself, whom you can count on to help or support you, and by satisfaction with available social support (SASS), or the satisfaction you feel with the support you have (Sarason, 1999). In addition, demographic variables (age and gender) were added, as well as others related to the loss of the loved one (months since loss, relationship to the deceased and type of emotional bond, and assistance previously received, both psychological and pharmacological). Participants Participation in the questionnaires was offered to 182 grief counseling and listening center users. The center offers free help to those experiencing difficulty due to loss or for other reasons. The first contact is made via telephone by the person who wants to receive help. The LC is known as a center for all types of grief support (migratory, separation or divorce and unemployment, among others) through its website, informational brochures, or word-of-mouth, and is connected to a religious order. The usual format is approximately 20 sessions. It is carried out by a team of 100 volunteers with university degrees and specific qualifications obtained through courses in helping relationships, basic counseling skills, guidance and intervention in processes of grief, and counseling training practices. There is evidence of similar initiatives elsewhere with satisfactory results (Altmaier, 2011; Gallagher, Tracey & Millar, 2005; Ober, Granello & Wheaton, 2012). Of the 182 persons in mourning who had not yet started support, 130 responded; therefore, the response rate was 71.42%. Males comprised 23.8% (31) and females comprised 76.2% (99). The average age was 55 years (standard deviation 15.59; range 64 years, minimum 19, maximum 83). Inclusion criteria were: having suffered the loss of a loved one (whether family or not), being at least 18 years of age, living in Madrid and wanting to start counseling sessions. Instruments used The state of grief variable was measured with the Spanish-language adaptation of Prigerson's Inventory of Complicated Grief (ICG). The questionnaire consists of 19 items with good internal consistency (Cronbach's alpha coefficient = .94) and temporal stability (test-retest reliability at 6 months = 0.80). The results range from 0 to 76 points, with the highest scores corresponding to an increased likelihood of developing complicated grief (Limonero et al., 2009). For the psychological well-being (PWB) variable, the Spanish-language adaptation of the Ryff Psychological Well-Being Questionnaire (Diaz et al., 2006) was used. This questionnaire consists of 29 items divided into the following Likert-type scales with response options from 1 to 6: self-acceptance, positive relationships with others, autonomy, control over one's environment, purpose in life and personal growth. The minimum total score is 29 and the highest is 174. All of the scales have good internal consistency, with Cronbach alpha values greater than or equal to .70.
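The internal-consistency coefficients quoted above (Cronbach's alpha for the ICG and the Ryff scales) follow the usual formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A small sketch with an invented item-response matrix is shown below; it is not the authors' scoring code.

```python
# Cronbach's alpha for a respondents-by-items matrix; the responses are invented.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: rows = respondents, columns = questionnaire items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

responses = np.array([      # hypothetical answers to a 5-item Likert scale (1-6)
    [4, 5, 4, 3, 4],
    [2, 3, 2, 2, 3],
    [5, 5, 4, 5, 5],
    [3, 3, 3, 2, 3],
    [4, 4, 5, 4, 4],
])
print(round(cronbach_alpha(responses), 3))
```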
To collect variables relating to social support, a revised short version of the Sarason Questionnaire (SSQSR; Sarason, 1999) was used. It assesses two dimensions of social support: first, the available support (ASS), i.e., the number of people they know and from whom they could expect to receive help or support, on a scale of 0 to 9, and second, satisfaction with available support (SASS), i.e., the degree of satisfaction felt towards the ASS, on a scale of 0 to 6. The ASS score is calculated by adding the number of people named in each item, the total ranging from 3 to 27. The SASS score results from adding the scores relating to the degree of satisfaction for each of the items, the total ranging from 3 to 18. Higher scores indicate greater social support received and greater satisfaction with it. The Spanish version has good internal consistency both for ASS (Cronbach's alpha = .90) and for SASS (Cronbach's alpha = .93). Temporal stability is also good for ASS (stability coefficient = .90) and SASS (stability coefficient = .83). In addition, questions were included to collect information regarding age, gender, months since loss, relationship and type of bond, such as: How long has it been since you lost your loved one? (in months), What was your relationship to the deceased? (Response options: spouse, son / daughter, father / mother, other), How would you rate your relationship with your relative before he/she died? (Response options: positive, normal and negative). Participants were also asked about prior assistance received, both psychological and pharmacological, using the following questions: Have you received counseling before now? (therapy with a psychologist or psychiatrist) (Yes / No) and Are you receiving pharmacological help? (antidepressant, anxiolytic) (Yes / No). Procedure Upon receiving a call from the bereaved person requesting assistance, the person in charge of making appointments assigned an identification number so that, in the case of participation in the study, data collection would be anonymous. After the first interview, the person was assigned to a volunteer who had been trained in data collection. The volunteer then explained to the user the possibility of participating in the study and, in the case of acceptance, the user completed the questionnaire, preferably in that first session. When the anxiety level was very high, they waited until the second or third session. When finished, the volunteer handed it to the person responsible for making appointments. Data analysis A descriptive analysis was used for the following variables: state of complicated grief, psychological well-being, social support, satisfaction with social support, variables related to the loss and demographic variables. The correlation between variables was examined using the Pearson correlation coefficient. For variables with more than two levels (such as relationship with the deceased and emotional bond), mean differences between groups were compared using one-way ANOVA and, after verifying homoscedasticity, Tukey HSD multiple comparisons. For two-level variables (such as gender, pharmacological and psychological assistance), Student's t-test for independent samples was used, establishing a confidence interval of 95%.
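As a rough illustration of the bivariate analyses just described (Pearson correlation for quantitative variables and Student's t-test for two-level variables), the following sketch uses simulated scores; it is not the SPSS syntax used in the study.

```python
# Simulated example of the descriptive bivariate analyses described above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
cg     = rng.normal(41, 12, 110)       # complicated grief scores (invented)
pwb    = rng.normal(119, 19, 110)      # psychological well-being scores (invented)
pharma = rng.integers(0, 2, 110)       # 1 = prior pharmacological help (invented)

r, p_r = stats.pearsonr(cg, pwb)                              # CG vs PWB correlation
t, p_t = stats.ttest_ind(cg[pharma == 1], cg[pharma == 0])    # CG by pharmacological help
print(f"Pearson r = {r:.3f} (p = {p_r:.3f}); t = {t:.2f} (p = {p_t:.3f})")
```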
The possible prediction of complicated grief by the main study variables was explored using step-by-step backwards elimination multiple linear regression analysis. Using the method of least significant difference (95% confidence interval) with mean diagrams, it was analyzed which non-quantitative variables were statistically significant. Prior to the multiple linear regression analysis, the hypotheses of linearity, homoscedasticity and independence were tested, and the assumption of normality was confirmed a posteriori by analyzing the residuals (Q-Q plot). Having analyzed the relationships between variables and having performed simple regressions (with state of mourning as the dependent variable), it was deemed necessary to use the logarithmic transformation of the variable "months since loss" in order to meet the assumption of linearity. SPSS version 20.0 was used. Ethics On the first page of the questionnaire, participants were told who was conducting the research, the purpose of the study and how to respond. Confidentiality and adequate data protection were guaranteed, as well as an explanation of usage, which was limited to research. Prior to data collection, the design of the study was approved by the Ethics Committee of the Center. Results The average number of months since loss was 12.36 (SD = 19.53) in 85% of the sample (N = 110); of the 130 participants, 20 were removed from the analysis because more than 2 years had passed since they had lost their loved ones. Of the total selected sample (N = 110), 77.2% were women with a mean age of 54.3 years and the remaining 22.8% were men with a mean age of 55.1 years, yielding a sample mean of 55 years (SD = 15.5). Regarding the state of complicated grief (CG), the mean was 40.91 (SD = 11.89, with a range of 59, minimum 9 and maximum 68). The internal consistency obtained through Cronbach's alpha was .794. The average psychological well-being (PWB) was 119.23 (SD = 18.75, range of 88, minimum 78, maximum 166). The internal consistency of the scale of psychological well-being, as measured by Cronbach's alpha, was .847. With regard to the social support variables, the mean of the ASS variable was 10.56 (SD = 6.31, range 27, minimum 0 and maximum 27). With respect to satisfaction with help offered (SASS), the mean was 13.48 (SD = 4.17, range 16, minimum 2 and maximum 18). Using Cronbach's alpha, an internal consistency of .796 was obtained for the scale of support received and .818 for satisfaction with support received. Regarding the type of emotional bond with the deceased, 67.3% (74) reported having had a positive relationship, 26.4% (29) a normal relationship and 5.5% (6) a negative relationship. Correlations Regarding the state of CG, there is a statistically significant, inverse and moderate association between state of CG and PWB (r = -.298, p = .007), meaning that the higher the psychological well-being of the patient, the lower the level of complicated grief. There is also a slight, direct and statistically significant association (r = .190, p < .05) between CG and months since loss. In relation to PWB, there is also a significant association with the number of months since loss (r = -.225, p = .045), inverse and of mild degree. PWB also maintains a substantial, direct and moderate association with ASS (r = .381, p < .001).
In turn, ASS is also associated with the variable months since loss, inversely and moderately (r = -.302, p < .001), and, finally, there is a substantial, direct and moderate association between ASS and SASS (r = .380, p < .001). Predictors of Complicated Grief The mean of CG between groups of variables was contrasted and statistically significant differences were found (p = .003) between the mean of the group who had lost a parent (M = 45.34, SD = 12.264) compared to those who had lost a child (M = 35.23, SD = 9.676), and between pharmacological support received (p = .009) (M = 43.59, SD = 11.805) compared to none received (M = 38.02, SD = 11.621). In all cases the assumption of variance homogeneity was met. The rest of the categorical variables (relationship) and dichotomous variables (gender and psychological support) did not show statistically significant differences in any of the cases to explain complicated grief. Thus, the qualitative variables that were significant in explaining differences in state of grief were the death of a parent with respect to that of a child and having received pharmacological help. The R-squared statistic indicates that the model explains 65.1% of the variability of complicated grief; the adjusted R-squared (more suitable for comparing models with different numbers of independent variables) is 42.4% (F(6,72) = 8.845, p < .001). Finally, the multiple linear regression model was extracted. Analysis of the residuals revealed that they were in accordance with a normal distribution.
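For readers who want to reproduce the modelling strategy described in the Data analysis section (backward step-by-step elimination on a linear model of CG, with the months-since-loss predictor log-transformed), the following is a minimal sketch. All data are simulated, the variable names are hypothetical, and the elimination rule (drop the least significant predictor until all remaining p-values are below 0.05) is an assumption about how the SPSS procedure was configured.

```python
# Backward-elimination linear regression sketch; simulated data, hypothetical variable names.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 110
data = pd.DataFrame({
    "pwb":         rng.normal(119, 19, n),
    "ass":         rng.normal(10.6, 6.3, n),
    "sass":        rng.normal(13.5, 4.2, n),
    "months":      rng.uniform(1, 24, n),
    "pharma_help": rng.integers(0, 2, n),
    "lost_parent": rng.integers(0, 2, n),
})
data["log_months"] = np.log(data["months"])          # log transform, as in the study
cg = rng.normal(41, 12, n)                           # complicated grief scores (simulated)

predictors = ["pwb", "ass", "sass", "log_months", "pharma_help", "lost_parent"]
while True:
    X = sm.add_constant(data[predictors])
    fit = sm.OLS(cg, X).fit()
    pvals = fit.pvalues.drop("const")
    worst = pvals.idxmax()
    if pvals[worst] <= 0.05 or len(predictors) == 1:
        break
    predictors.remove(worst)                         # drop least significant and refit

print(fit.summary().tables[1])
print("adjusted R-squared:", round(fit.rsquared_adj, 3))
```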
An inverse relationship between PWB and CG is observed, and the Pearson correlation reinforces this. It is a finding fully consistent with the circumstances of the study: participants were all newcomers to the LC and had not yet started the counseling sessions to help them through the mourning process. It is therefore reasonable that, having not yet completed Worden's proposed tasks, they maintained high levels of grief with low PWB, especially considering that PWB is a score that quantifies optimal functioning across the dimensions of autonomy, control over one's environment, purpose in life, personal growth, etc., areas that deteriorate during bereavement and where it is possible to recover previous levels through a proper process of working through grief. The variables of prior pharmacological assistance received and parental relationship obtained a negative value in the regression equation, which means that, keeping all other variables fixed, either of them predicts a decrease in CG. That is, we can predict that people who have had previous pharmacological help (assuming the level of the rest of the explanatory variables is constant) obtain lower scores in CG, and the loss of a parent likewise predicts lower scores on the CG level compared to other relationships. In relation to pharmacological assistance, and in view of the support that is provided at the LC, we recommend looking at this with caution, because taking psychotropic drugs in the absence of developmental work may constitute avoidance behavior, which will decrease the symptoms but will leave the emotional work and readjustment undone (Naranjo-Vila, Gallardo-Salce & Zepeda-Santibañez, 2010). The relationship between ASS and CG is something that stands out. As already mentioned in the introduction, the relationship of ASS to CG is difficult to qualify and the evidence points in different directions. In our study, holding the other explanatory variables constant, an increase in one unit of ASS predicts an increase of 0.73 units of CG. This result would place it within the third group of scientific evidence linking CG and SS, in particular the data supporting that it is negative SS which predicts an increase in CG (Burke et al., 2010). However, the measurement of SS taken here, as opposed to that used by Burke to achieve that result, at no point includes negative support. Furthermore, this study highlights the direct association between ASS and SASS, which means that those who are still grieving and have a greater number of people who support them are more satisfied, and those who have fewer people are less satisfied. Also, ASS and PWB maintained a direct and positive association, i.e., the higher the ASS, the greater the well-being. Therefore, it seems appropriate to think that, despite agreement on the end result (greater ASS predicts greater CG), ASS works differently in each study. In turn, the SASS has a negative weight in the regression equation, which means that, keeping the other variables constant, an increase in one unit of the SASS predicts a decrease of .54 units of CG. In line with the above, this would reinforce what has been said: the results depend on ASS having different valuations (positive vs. negative), and lead to the conclusion that the SS received by participants was considered neither harmful nor bothersome, as it was in the study of Burke et al. (2010), but rather the opposite.
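A small worked example of the coefficient interpretation given above: only the two slopes quoted in the text (+0.73 CG points per additional available supporter, -0.54 per additional satisfaction point) are used, with all other predictors held constant; the rest of the regression equation is not reproduced here.

```python
# Predicted change in the CG score for a unit change in a single predictor,
# holding the remaining variables constant (slopes taken from the text above).
beta_ass, beta_sass = 0.73, -0.54

print(beta_ass * 1)    # one more available supporter  -> CG increases by 0.73
print(beta_sass * 2)   # two more satisfaction points  -> CG decreases by 1.08
```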
This leads us to a new group of three different results already discussed in the introduction.Perhaps in our study, elevated symptoms that have been associated with grief and which are obvious to others in the form of behaviors such as crying, comments about hopelessness, etc., could be something that generates an increase in supportive behaviors and therefore greater ASS.This would have special significance to those in mourning who have already accepted the reality of their loss (Worden's 1 st task) and are now dealing with the associated pain and changes in their lives that this involves (Worden's 2 nd task), and above all be consistent given the study sample: people who have lost a loved one and after some time ask for outside help to work through their grief. But there is one fact that could be interfering and which a partly similar study notes (Somhlaba & Wait, 2008): depressive symptoms caused an increased awareness of SS.Keeping this in mind, we must be cautious in our interpretation, because with this data we cannot distinguish whether a higher level of CG causes the grieving person to receive more SS, or if the person notices it because they are under the influence of depressive symptoms.It would be a question to clarify in the future for two reasons.First, because there are at least two independent studies that had the opposite result: a heightened state of CG was associated with lower perceived SS (Kristensen et al., 2010;Ott, 2003.).Second, in light of grief counseling, the overall score of the bereaved may be helped be better by knowing the real SS that they count on or if they rate it higher as a consequence of associated depressive symptomatology Taking all of these results into account, and especially the relationship found between SS and CG, the first step in the evidence for the damping effect that SS would have on CG (Stroebe et al., 2005) could be: at first they are expressions of pain which increase social support and later on, social support influences the lowering of CG.Due to the cross-sectional design used we can only state what could be happening first.One could verify the latter using a longitudinal study. From a broader perspective, this understanding fits the model proposed by Shah & Meeks (2012), which gives the social SS a mediating role between the above contextual factors to the loss and the end result is a resilience-CG continuum. 
In relation to time since the loss of a loved one, it is noteworthy that a positive proportion in relation to CG is maintained, i.e., a positive percentage variation in the number of months since loss (logarithmic scale) predicts greater variation of CG.Intuitively, one would expect that as time passes, the discomfort decreases, a fact that even some publications support (Meert et al., 2011.);however, bearing in mind that denial is a mechanism that happens naturally and which initially is a functional aspect in the face of the progressive assimilation of the loss (Worden, 1997), it may also be an attitude that continues in some people and that could impede healthy processing of grief (Cabodevilla, 2007).From another point of view, and in line with what Worden (1997) proposes, initiating the grieving process often involves guilt, because it implies leaving the loved one behind, being aware of the disappearance of that person, and sometimes in a way that is not correctly understood, it is considered disloyal to the deceased.The guilt associated with what they are doing could cause the bereaved to halt the grieving process, and what happens is that with the passage time there is no improvement.In any case, it should be noted that the mere passage of time does not only act in favor of overcoming the loss, but rather acts against it when it does not go hand in hand with properly completing the tasks of working through one's grief. Looking to future studies and given that there is still a significant percentage of CG to explain, it would be appropriate to include measures to collect more aspects of SS, and negative SS, and assess their differential influence as the possible moderating role of symptoms, especially the depressive type, in the perception of ASS.In addition, because the study center is linked to a religious order and helps relatives experiencing problems with processing their grief, the sample may be biased in its selection and it would be desirable to establish a comparison group with another center where grieving relatives also go and / or try to replicate the study in another center. However, this research provides interesting suggestions in the field of grief counseling for those who have lost a loved one, especially in respect to the role of psychological well-being and social support received (valued as satisfactory) as predictors of complicated grief, and that the information provided by maintained optimal functioning and the social network that is counted on respectively open the possibility of preventing complicated grief through variables that do not involve suffering or impairment of the person, which is a desirable objective for an efficient use of the resources offered at the grief counseling center.
v3-fos-license
2020-04-08T20:53:29.491Z
2020-04-08T00:00:00.000
215411506
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00167-020-05958-x.pdf", "pdf_hash": "6242be9fe9f0e80606576c58a43a915af75915bc", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42233", "s2fieldsofstudy": [ "Medicine" ], "sha1": "6242be9fe9f0e80606576c58a43a915af75915bc", "year": 2020 }
pes2o/s2orc
Young men in sports are at highest risk of acromioclavicular joint injuries: a prospective cohort study Purpose To study the incidence of acromioclavicular joint injuries in a general population. Methods All acute shoulder injuries admitted to an orthopaedic emergency department were registered prospectively, using electronic patient records and a patient-reported questionnaire. The regional area was the city of Oslo with 632,990 inhabitants. Patients with symptoms from the acromioclavicular joint without fracture were registered as a dislocation (type II–VI) if the radiologist described widening of the joint space or coracoclavicular distance on standard anteroposterior radiographs. Patients without such findings were diagnosed as sprains (type I). Results Acromioclavicular joint injuries constituted 11% of all shoulder injuries (287 of 2650). The incidence was 45 per 10⁵ person-years (95% confidence interval [CI] 40–51). Of these, 196 (68%) were diagnosed as sprains and 91 (32%) as dislocations. The median age of all acromioclavicular joint injuries was 32 years (interquartile range 24–44), and 82% were men. Thirty percent of all acromioclavicular joint injuries were registered in men in their twenties. Sports injuries accounted for 53%, compared to 27% in other shoulder injuries [OR 3.1 (95% CI 2.4–4.0; p < 0.001)]. The most common sports associated with acromioclavicular joint injuries were football (24%), cycling (16%), martial arts (11%), alpine skiing and snowboarding (both 9%), and ice hockey (6%). Conclusion Our study suggests that in the general population, one in ten shoulder injuries involves the acromioclavicular joint, and young men in sports are at highest risk. A prognostic level II cohort study. Introduction The first classifications of acute acromioclavicular joint (ACJ) injuries were introduced by Tossy et al. and Allmann [1,37]. They classified the injuries from grade I to III based on radiological examination. Rockwood et al. established a more detailed classification that graded injuries from type I to VI [32]. The treatment of ACJ injuries remains an area of controversy. There are no evidence-based guidelines, and there is a lack of evidence-based knowledge concerning these injuries and chronic shoulder pain [2]. Due to the lack of evidence-based guidelines, expert shoulder groups have published guidelines based on clinical experience in an attempt to fill the gap [4]. Shoulder injuries are common in young men, and the increased risk is mainly attributable to sport-related injuries [13]. ACJ injury has been reported as the most common upper extremity injury in sports and the one most frequently leading to time loss from sports [5,16,24]. Existing data refer either to a limited group of patients or to a specific sport [5,16,24,38]. ACJ sports injuries are also included in large population-based registry studies [6,10,35]. The Department of Orthopaedic Emergency at Oslo University Hospital treats a wide range of injuries in a large regional area and can, therefore, contribute epidemiological data that are more representative and generalizable regarding this type of injury. The aim of this study was to investigate the incidence of ACJ injuries in a general population cohort of all ages, and to describe in which sports activities and age groups these injuries occur. This study will provide new knowledge about the presence of these injuries and contribute to an increased awareness of specific sports and age groups that are at higher risk of ACJ injury.
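The odds ratio quoted in the abstract can be approximately re-derived from the reported counts and rounded percentages; the reconstruction below is only an approximation (small differences from the published 3.1 come from rounding), not the authors' calculation.

```python
# Approximate re-derivation of the sports-injury odds ratio from the abstract above,
# using counts reconstructed from the rounded percentages.
acj_total, all_shoulder_total = 287, 2650

acj_sports     = round(0.53 * acj_total)                 # ~152 sports-related ACJ injuries
acj_other      = acj_total - acj_sports
non_acj_total  = all_shoulder_total - acj_total          # 2363 other shoulder injuries
non_acj_sports = round(0.27 * non_acj_total)             # ~638 sports-related
non_acj_other  = non_acj_total - non_acj_sports

odds_ratio = (acj_sports / acj_other) / (non_acj_sports / non_acj_other)
print(round(odds_ratio, 2))   # about 3.0, close to the reported OR of 3.1
```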
Materials and methods The present study was accepted as an internal audit project with anonymous data by the Office of the Privacy and Data Protection Officer of Oslo University Hospital on the 17/02/2013. According to Norwegian legislation, internal audits are exempt from approval by The Regional Committee for Medical and Health Research Ethics. All shoulder injuries admitted at the Department of Orthopaedic Emergency, Oslo University Hospital, were registered prospectively from May 2013 to April 2014. 58,158 patients with acute physical injury were admitted during the study period. In October 2013, the population of Oslo was 632,990. A total of 3031 shoulder injuries were registered, 2650 of which were for Oslo residents (Fig. 1). The overall epidemiology of acute shoulder injuries as well as an overview of sports-related acute shoulder injuries have recently been published from these data [12,13]. The present study is an in-depth analysis of the acromioclavicular injuries in this cohort. The Department of Orthopaedic Emergency provides services for the majority of injured patients in Oslo. It is a first-line, walk-in clinic as well as a secondary care diagnostic unit for all hospitals in Oslo. Severely injured patients are, however, brought directly to the regional trauma center [12,18]. Between 83 and 86% of the population attended the facility after an upper-extremity injury, according to two previous studies that also obtained data from private emergency centers and the three public hospitals of Oslo [21,22]. Data source When admitted, patients with a suspected shoulder injury completed a questionnaire containing items from the national accident registration regarding injury time and mechanism. The national accident registration is a mandatory structured element of the electronic patient record. In patients who had not completed the questionnaire, the physician entered the data based on the patient history. The arrival lists were sorted by the International Classification of Diseases (ICD-10) S4 diagnoses (injuries of shoulder and upper arm). The patient records with ICD-10M-codes (diseases of the musculoskeletal system and connective tissue) and all that had completed the questionnaire were examined, to find missed cases and coding errors. The first and second authors (S.A.S or M.E) reviewed the questionnaires and patient records, including radiology reports and follow-up, and entered the data in the database. Participants Inclusion criteria were acute shoulder injury within the last 3 months with a coinciding onset of symptoms. Injury to the clavicle, scapula, proximal third of the humeral bone, their articulations and surrounding soft tissues was included; whereas, injury to the middle and distal third of the humeral bone and adjacent soft tissues was excluded. Patients were excluded from registration if there was doubt regarding whether there had been an acute trauma causing the shoulder symptoms. Variables Age, gender, city district, date and time of injury and primary visit, activity, injury mechanism, multiple and concomitant injuries and MRI were registered. Conventional radiographs in two views were performed in all patients according to the clinical findings. The department's standard projections for the ACJ were 15° craniocaudal and caudocranial view. A panorama (bilateral Zanca) view was performed when requested by the physician. Supplementary modalities were computed tomography (CT) and magnetic resonance imaging (MRI). 
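The case-finding procedure described above (sorting arrival lists by ICD-10 S4 codes and reviewing records for coding errors) can be sketched as a simple filter; the records and field names below are hypothetical, and the S435/S431 labels anticipate the classification given in the next section.

```python
# Hypothetical sketch of screening arrival lists by ICD-10 code, as described above.
records = [
    {"patient": "A", "icd10": "S435", "oslo_resident": True},   # ACJ sprain
    {"patient": "B", "icd10": "S431", "oslo_resident": True},   # ACJ dislocation
    {"patient": "C", "icd10": "S525", "oslo_resident": True},   # forearm fracture, not screened
]

shoulder_cases = [r for r in records if r["icd10"].startswith("S4") and r["oslo_resident"]]

acj_label = {"S435": "ACJ sprain (Rockwood type I)",
             "S431": "ACJ dislocation (Rockwood types II-VI)"}
for case in shoulder_cases:
    print(case["patient"], acj_label.get(case["icd10"], "other shoulder injury"))
```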
The initial diagnosis was corrected when imaging and/or clinical examination during follow-up undoubtedly concluded differently. Patients with pain in the anterosuperior part of the shoulder, point tenderness over the ACJ and normal radiographs described by the radiologist regarding acute injury were classified as sprain (S435) (Rockwood type 1) [3,17,32]. Patients with similar symptoms but abnormal widening of the ACJ or coracoclavicular distance were classified as separation/dislocation (S431) (Rockwood types II-VI) [3,17,32]. Bias City district residency was recorded to control for potential bias. Although the Norwegian society is relatively homogenous, the districts vary somewhat regarding age distribution and socioeconomic parameters. The standard deviation of the mean acute shoulder injury incidence rates in the 15 districts did not indicate socioeconomic bias [12]. The Oslo University Hospital Trauma Registry reported that only 25 Oslo residents with a shoulder injury diagnosis were treated in the unit without prior triage and consultation at the Department of Orthopaedic Emergency during the registration period [12]. Statistical analysis IBM SPSS Statistics Version 23 was used for the statistical analysis. Results were considered statistically significant if p < 0.05. Incidence rates were calculated as the number of shoulder injury incidents divided by the person-years at risk, each person counting 1 year. In patients who experienced multiple episodes of shoulder injury during the year of registration, each episode was registered. OpenEpi.com using the Mid-P exact test with Miettinen's (1974d) modification was used to calculate the 95% confidence interval (CI) for incidence rates. Because the age distribution was skewed, we have reported medians, interquartile ranges (IQR) and used the Mann-Whitney U test to compare age in two groups. Categorical data were compared using the Chi-square test. Population data on the 15 districts were supplied by the City of Oslo, and all other population data were extracted from Statistics Norway [34]. Results An ACJ injury was registered in 287 patients (11%), corresponding to an incidence of 45 per 10 5 person-years (95% CI 40-51) in the general population. Of these, two-thirds were diagnosed as sprains and one-third as dislocations. Median age was 32 years (interquartile range (IQR) 24-44, minimum 6, maximum 91), and 82% were men. The highest incidence was found in men in their twenties, who accounted for 30% of ACJ injuries (Figs. 2 and 3). In this group, 22% of the shoulder injuries were ACJ injuries. Women had low and more evenly distributed incidence rates in the ages between 20 and 60 years. Discussion The most important findings of the present study were that patients with ACJ injuries were younger and more often men compared with the total shoulder injury cohort. One in ten of all shoulder injuries were ACJ injuries, and more than half of the ACJ injuries were sports injuries. Approximately, twothirds were sprains, and one-third were dislocations. The overall incidence of ACJ injuries in this study was 45 per 10 5 person-years. The numbers are two to ten times higher compared with previous studies [7,9,20,26]. A study from an orthopaedic emergency department in Italy on the incidence of ACJ injuries found 108 patients with ACJ injury over a period of 5 years, despite a population at risk just below that of Oslo's [7]. Type III injuries were most common and comprised 40% of the injuries. 
In our study, type I accounted for two-thirds of the ACJ injuries; this difference may be attributed to the low threshold for attendance at our walk-in clinic. The rates of ACJ injuries among women were low in our study. The distribution according to age was from the teens until 60 years of age. In men, there was a peak in the twenties, and in this group, every fifth shoulder injury was an ACJ injury, compared to every tenth in the original cohort of all shoulder injuries. The same pattern was found for type 1 injuries in women. A peak for dislocations was observed in the twenties; whereas, the incidence was similar from the teens until 60 years of age for sprains. These observations support the findings of other studies describing a majority of young men in sports with ACJ injuries [23,24,27,28]. Men with acute Rockwood types III-VI do also have more associated articular lesions [33]. The majority of the present literature on ACJ injuries is primarily focused on surgical techniques and results [25,36,39]. To know what is best for patients, those who are not operated upon must also be mapped out, as it is done here. A better classification is also required to know that the same patients are being compared. The main strength of the present study is that the majority of shoulder injuries occurring during one year in a population of > 600,000 people were examined. The study should be interpreted in the light of both numerator and denominator considerations [11]. We have used numbers from Oslo's only public walk-in emergency facility. The shoulder injury incidence rates in each of the 15 city districts did not indicate socioeconomic selection bias. Although the Department of Orthopaedic Emergency is an integrated part of the Division of Orthopaedic Surgery and the radiology reports have been reviewed, there is a risk of misdiagnosis. One of the challenges in treating ACJ injuries is the lack of reliable classification. Injuries were classified according to ICD-10, and the radiology reports were reviewed for every diagnosis; however, inter-and intra-rater reliability testing was beyond the scope of this study. The classification system should be examined in future studies because several reports conclude that two-dimensional radiological classification has a poor inter-and intra-observer agreement [8,29,30]. Patients with shoulder injuries that occurred elsewhere and did not require follow-up on the return to Oslo might have been missed. In cases where the patient did or could not complete the questionnaire, the physicians might have missed out on the correct injury mechanism when completing the structured national accident registration and writing the physicians note. Finally, a possible limitation of this study may be seen in the population analyzed. Even if it is definitely more generalized than what is found in other papers [14,15,19,31], it still refers to a specific area of Europe, thus reflecting physical activities (habits and behaviors) that in some ways are different from other countries. This study provides new knowledge regarding the presence of ACJ injuries in the general population. In daily clinical work, the diagnosis should be suspected in active young men in particular. The data are also important for the planning of injury prevention programs, for federations in sports associated with ACJ injuries, helped by medical research centers in cooperation with a group of international experts. Conclusions In this cohort, ACJ injuries represent one out of ten of all shoulder injuries. 
Every third had radiological widening or dislocation of the ACJ. Young men were at high risk, and more than half of the injuries were sports related. This study provides new knowledge about the presence of ACJ injuries in the general population. Informed consent: Written consent was not necessary for this study.
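To make the incidence estimate above easier to reproduce, the following is a minimal Python sketch of the crude rate and a mid-P exact Poisson confidence interval. It assumes the event count follows a Poisson distribution and that person-years ≈ population × 1 year of observation; it is an illustration, not the OpenEpi routine cited in the Methods, and the inputs (287 injuries, 632,990 inhabitants) are taken from the Results.

```python
from scipy import stats, optimize

def midp_poisson_ci(count, person_years, alpha=0.05, per=1e5):
    """Crude incidence rate with a mid-P exact Poisson confidence interval.

    Sketch only -- not the OpenEpi implementation used in the paper.
    """
    def lower(mu):  # mid-P P(X >= count) - alpha/2; increases with mu
        return stats.poisson.sf(count, mu) + 0.5 * stats.poisson.pmf(count, mu) - alpha / 2

    def upper(mu):  # mid-P P(X <= count) - alpha/2; decreases with mu
        return stats.poisson.cdf(count - 1, mu) + 0.5 * stats.poisson.pmf(count, mu) - alpha / 2

    mu_lo = optimize.brentq(lower, 1e-9, count) if count > 0 else 0.0
    mu_hi = optimize.brentq(upper, max(count, 1e-9), 5 * count + 20)
    scale = per / person_years
    return count * scale, mu_lo * scale, mu_hi * scale

rate, lo, hi = midp_poisson_ci(count=287, person_years=632_990)
print(f"{rate:.0f} per 100,000 person-years (95% CI {lo:.0f}-{hi:.0f})")
# Roughly 45 per 100,000 with a CI of about 40-51, in line with the reported estimate.
```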
Dynamic Stabilisation in the Treatment of Degenerative Disc Disease with Modic Changes Objective. Posterior dynamic stabilization is an effective alternative to fusion in the treatment of chronic instability and degenerative disc disease (DDD) of the lumbar spine. This study was undertaken to investigate the efficacy of dynamic stabilization in chronic degenerative disc disease with Modic types 1 and 2. Modic types 1 and 2 degeneration can be painful. Classic approach in such cases is spine fusion. We operated 88 DDD patients with Modic types 1 and 2 via posterior dynamic stabilization. Good results were obtained after 2 years of followup. Methods. A total of 88 DDD patients with Modic types 1 and 2 were selected for this study. The patients were included in the study between 2004 and 2010. All of them were examined with lumbar anteroposterior (AP) and lateral X-rays. Lordosis of the lumbar spine, segmental lordosis, and ratio of the height of the intervertebral disc spaces (IVSs) were measured preoperatively and at 3, 12, and 24 months after surgery. Magnetic resonance imaging (MRI) analysis was carried out, and according to the data obtained, the grade of disc degeneration was classified. The quality of life and pain scores were evaluated by visual analog scale (VAS) score and Oswestry Disability Index (ODI) preoperatively and at 3, 12, and 24 months after surgery. Appropriate statistical method was chosen. Results. The mean 3- and 12-month postoperative IVS ratio was significantly greater than that of the preoperative group (P < 0.001). However, the mean 1 and 2 postoperative IVS ratio was not significantly different (P > 0.05). Furthermore, the mean preoperative and 1 and 2 postoperative angles of lumbar lordosis and segmental lordosis were not significantly different (P > 0.05). The mean VAS score and ODI, 3, 12, and 24 months after surgery, decreased significantly, when compared with the preoperative scores in the groups (P = 0.000). Conclusion. Dynamic stabilization in chronic degenerative disc disease with Modic types 1 and 2 was effective. Introduction Chronic low back pain (LBP) has been one of the most common causes of disability in adults and is a very important disease for early retirement in industrialized societies. Degenerative disc disease (DDD) is the most frequent problem in patients with LBP. The prevalence of Modic changes among patients with DDD of the lumbar spine varies between 19% and 59%. Type 1 and 2 Modic changes are more common than type 3 and mixed changes [1][2][3][4][5][6][7][8][9][10][11][12][13]. Degenerative vertebral endplate and subchondral bone marrow changes were first noted on magnetic resonance imaging (MRI) by Roos et al. in 1987 [1]. A formal classification was subsequently provided by Modic et al. in 1988, based on a study of 474 patients, most of whom had chronic LBP [2]. They were found to be associated with DD [1][2][3]. Three different types have been described [2,3]. Type I lesions (low T1 and high T2 signals) are assumed to indicate an ongoing active degenerative process. Type II lesions (high T1 and T2 signals) are thought to manifest a 2 Advances in Orthopedics more stable and chronic degeneration. Type III lesions (low T1 and T2 signals) are associated with subchondral bone sclerosis. Modic changes are interesting because an association between Modic changes and LBP symptoms has been shown recently in population-based cohorts [10,12,14]. Kjaer et al. 
suggested that Modic changes constitute the crucial element in the degenerative process around the disk in relation to LBP and clinical findings [14]. They demonstrated that DDD on its own was a fairly quiet disorder, whereas DDD with Modic changes was much more frequently associated with clinical symptoms. Most authors agree that among Modic changes, type 1 changes are those that are most strongly associated with symptomatic LBP [5,7,12,13]. Braithwaite et al. suggested that vertebral endplate could be a possible source of discogenic LBP [4]. Therefore, Modic changes appear to be a relatively specific but insensitive sign of a painful lumbar disc in patients with discogenic LBP. Buttermann et al. suggested that abnormal endplates associated with inflammation are a source of pain, and treating endplates directly with anterior fusion may be a preferred treatment for this subset of degenerative patients [15]. Chataigner et al. suggested that anterior fusion is effective for the treatment of LBP due to DDD when associated with vertebral plate changes [16]. Fritzell et al. reported that posterior lumbar fusion in patients with severe chronic LBP can diminish pain and decrease disability more efficiently than commonly used nonsurgical treatment, through a prospective multicenter randomized controlled trial from the Swedish Lumbar Spine Study Group [17]. Kwon et al. suggested that PLIF procedures in which TFC is used in patients with Modic types 1 and 2 showed an acceptably high success and fusion rate [18]. Segmental fusion operations are performed frequently as treatment for DDD with Modic types 1 and 2. Nevertheless, fusion also carries various risks such as adjacent segment degeneration, bone graft donor place pain, and pseudoarthrosis [19][20][21][22]. Dynamic stabilization controls abnormal movements in an unstable, painful segment and facilitates healthy load transfer, preventing degeneration of the adjacent segment [23]. Recently, several clinical studies reported that dynamic stabilization yielded good clinical results and represented a safe and effective alternative technique to spine arthrodesis in selected cases of degenerative lumbar spine instability [24][25][26]. The purpose of the current study was to assess the efficacy of dynamic stabilization in DDD with Modic types 1 and 2. Patients were informed about the operation. All the patients completed the consent forms. The patients had leg and/or chronic LBP, and those who had previously undergone spinal surgery were excluded. We also excluded patients with spinal tumor, infection, spondylolisthesis, traumatic vertebral fracture, scoliosis, and serious systemic disease. Patients were diagnosed to have DDD with Modic changes on MRI. All patients were examined with lumbar anteroposterior (A-P) and lateral X-rays. Cosmic (Ulrich GmbH & Co. KG, Ulm, Germany) and Safinaz (Medikon AS, Turkey) dynamic pedicle screws and rigid rod system were used together with the microdiscectomy procedure in all patients. Evaluation of Quality and Pain Scores. The quality of life and pain scores were evaluated using visual analog scale (VAS) score (0, no pain; 10, worst pain) and Oswestry Disability Index (ODI) both preoperatively and at 3, 12, and 24 months after surgery ( Table 2). Radiological Analysis. The patients underwent preoperative MRI and/or computed tomography (CT). 
Furthermore, all patients had AP and lateral standing X-rays of the lumbar spine preoperatively and at 3 (1 postoperative), 12 (2 postoperative), and 24 months (3 postoperative) after surgery. Lordosis of the lumbar spine (L1-S1) was measured as the angle between the lines drawn on lateral standing X-rays from the lower endplate of L1 and upper endplate of S1. Segmental lordosis of the operative level (or levels) was measured as the angle between lines drawn from the upper and lower endplates of the vertebrae across which instrumentation spanned preoperatively as well as 3, 12, and 24 months after surgery. The ratio of the height of the intervertebral disc spaces (IVSs) to the vertebral body height was measured and compared preoperatively and postoperatively. The IVS ratio was calculated as the mean anterior and posterior intervertebral disc height divided by the vertebral height of the rostral vertebra of the motion segment. MRI Evaluation. Lumber sagittal MRI was performed with a slice of 5 mm thickness. A T2-weighted image with a repetition of 2500 msec and an echo time of 90 msec of the lumbar spine was taken for all the participants. The signal intensity of nucleus pulposus of the discs L2-L3, L3-L4, L4-L5, and L5-S1 was evaluated independently by three radiologists. The grade of disc degeneration was determined according to Schneiderman's classification: Grade 1, normal signal intensity; Grade 2, heterogeneous decreased signal intensity; Grade 3, diffuse loss of signal; Grade 4, signal void. MRI analysis was carried out, and according to the data obtained, the grade of disc degeneration was classified as mild (Grades 1-2), and severe (Grades 3-4). In this study, before surgery, endplate abnormalities were divided into Modic type 1 signals (low intensity on T1weighted spin-echo images and high intensity on T2weighted spin-echo images) and Modic type 2 signals (high intensity on both T1-and T2-weighted spin-echo images). Operative Technique. All patients were taken into the operating room under general anesthesia in the prone position. Prophylactic antibiotics were given to all of them before the operation. All operations were performed using operational microscopy and standard surgical technique. The level of operation was determined via intraoperative fluoroscopy. When the interlaminar level with disc herniation was approached from the medial aspect, laminotomy was widened with the help of a high-speed drill. After identifying the correct nerve root, free disc fragments under the nerve root and passageway were removed. Decompression was completed by performing the required laminotomy. After carrying out the microdecompression procedure, we also executed posterior dynamic transpedicular stabilization from the same incision with the help of lateral intraoperative fluoroscopy using Wiltse approach via inside lateral paravertebral muscle. The dynamic pedicle hinged screws used in our cases were Cosmic (Ulrich Gmbh & Co. KG, Ulm, Germany) and Safinaz (Medikon, Turkey), in combination with rigid rods (Figure 1). Statistical Methods. Kolmogorov-Smirnov test was used for homogeneity of the groups to comply with the normal distribution test. Friedman and Wilcoxon test was used for statistical analysis. Results In Table 1, the median, minimum and maximum range, Lumbar lordosis, angle, and IVS value are given. The mean 1, 2, and 3 postoperative IVS ratio was significantly greater than that of the preoperative group (P < 0.001, Table 1). 
However, the mean 1 and 2 postoperative IVS ratio was not significantly different (P > 0.05). The mean preoperative and 1, 2, and 3 postoperative angles of lumbar lordosis and segmental lordosis were not significantly different (P > 0.05). Furthermore, the mean lumbar lordosis preoperative and 1, 2, and 3 postoperative values were not significantly different (P > 0.05). All cases of Modic type 1 degeneration upgraded to type 2 or 3 degeneration after 24 months without pain. From Table 2, it can be noted that the mean VAS pain score and ODI score 3, 12, and 24 months after surgery decreased significantly, when compared with the preoperative scores in the groups (P = 0.000). Furthermore, 24 months after surgery, the mean VAS score and ODI score decreased significantly, when compared with preoperative scores and postoperative 3-and 12-month scores in the groups (P = 0.000). Discussion Abnormalities of the vertebral endplate and vertebral bone marrow were described by Modic et al. [2]. Abnormalities associated with decreased signal intensity on T1-weighted spin-echo images (Modic type 1) correlated with segmental hypermobility and LBP [3]. Fayad et al. found that patients with chronic LBP and predominantly type 1 inflammatory Modic changes had better short-term relief of symptoms following intradiscal steroid injection than those with predominantly type 2 changes, which further supports the inflammatory nature of Modic type 1 changes and the role of inflammation in the generation of LBP [27]. Two recent publications suggest a possible relationship between bone marrow abnormalities revealed by MRI and discogenic pain [4,28]. In these studies, moderate and severe types 1 and 2 endplate abnormalities were considered abnormal, and all the tested discs caused concordant pain on provocation [6]. Ohtori et al. reported that endplate abnormalities in patients with discogenic pain are related to inflammation and axonal growth into the abnormal bone marrow induced by cytokines, such as tumor necrosis factor- [29]. Thus, tumor necrosis factor-expression and sensory nerve in-growth in abnormal endplates may be a cause of LBP [29]. It has been reported that Modic type 1 change is associated with pathology, including disruption and fissuring of the endplate with regions of degeneration and regeneration and vascular granulation tissue [2,5]. In addition, an increased amount of reactive woven bone as well as prominent osteoclasts and osteoblasts has been observed [2]. It has been reported that there were increases in the amount of cytokines and the density of sensory nerve fibers in the endplate and bone marrow in Modic type 1 change, when compared with normal subjects, strongly suggesting that the endplates and vertebral bodies are the sources of pain [29,30]. These reports suggest that Modic type 1 signal shows an active inflammatory stage [2,5,29,30]. In contrast, type 2 changes were found to be associated with fatty degeneration of the red marrow and its replacement by yellow marrow. Thus, it had been concluded that type 1 changes correspond to the inflammatory stage of DDD and indicate an ongoing active degenerative process, whereas type 2 changes represent the fatty stage of DDD and are related to a more stable and chronic process. In the study by Toyone et al. [5], 70% of the patients with type 1 Modic changes and 16% of those with type 2 changes were found to have segmental hypermobility, defined as a sagittal translation of 3 mm or more on dynamic flexionextension films [5]. 
In a study assessing osseous union following lumbar fusion in 33 patients, Lang et al. found that all 19 patients with solid fusion had type 2 Modic changes, whereas 10 of the 14 patients with nonunion had type 1 changes [31,32]. They suggested that Modic type 1 in patients with unstable fusions might be related to reparative granulation tissue, inflammation, edema, and hyperemic changes. They concluded that the persistence of type 1 Modic changes after fusion suggests pseudoarthrosis. Similarly, Buttermann et al. observed that nonfusion was associated predominantly with the persistence of type 1 Modic changes [15]. There are patients having very low back pain Modic type 1 and in addition patients with unbearable pain will spend for the failed fusion surgery. For this reason, we performed dynamic stabilization in Modic type 1 and 2 patients. Hinged screw systems have been used for posterior dynamic stabilization in the current series. The advantages of this system are as follows. (i) These systems stabilize the spine and restore the neutral zone [33][34][35]. (ii) They provide a simple surgery, when compared with anterior, posterior, or combined fusion surgery. (iii) These types of dynamic systems allow performing lumbar lordosis during the surgery. (iv) Pseudoarthrosis rate is high in cases with fusion surgery [16,31]. (v) The clinical experience demonstrated good results in the literature [36,37]. Chataigner et al. studied 56 patients who underwent anterior procedures with bone grafting for LBP [16]. Their best results were obtained in patients with Modic type 1 lesions. The results were poorer in patients who had black discs without endplate involvement or Modic type 2 lesions. Among five nonunions, three requiring posterior revision surgery were observed in Modic type 2 changes. Anterior surgery, with disc herniation associated with Modic type 1 or 2 as the basis for the implementation of changes, is difficult. Because these patients for the treatment of disc herniation and discectomy ago posterior made, then the patients given the same or a different session, the anterior position to apply the anterior fusion surgery. Anterior surgery is time consuming and is an intervention method with a high likelihood of complications. For these patients instead of an application, we propose a posterior dynamic stabilization. Kwon et al. studied the long-term efficacy of PLIF with a threaded fusion cage based on vertebral endplate changes in DDD [18]. They found that the fusion rate was 80.8% for patients with Modic type 1 changes, 83.6% with Modic type 2 changes, and 54.5% with Modic type 3 changes. Furthermore, the nonfusion rate was 20%. This ratio is higher for patients with Modic type 1 as a high proportion of patients continue to complain about pain and do not see the benefits of treatment. Vital et al. assessed the clinical and radiological outcomes following instrumented posterolateral fusion in 17 patients with chronic LBP and type 1 Modic changes [32]. Six months later, all type 1 changes had converted, with 76.5% being converted to type 2 changes and 23.5% back to normal, and clinical improvement was seen in all patients. They concluded that fusion accelerates the course of type 1 Modic changes probably by correcting the mechanical instability, and that these changes appear to be a good indicator of satisfactory surgical outcome after arthrodesis. The natural course of the signal anomalies reported by Modic et al. was subsequently followed up by the same authors [2]. 
Five of the six type 1 lesions were replaced by type 2 signal anomalies over 14-36 months. The type 2 lesions remained stable over 2-3 years of follow-up evaluation. Lang et al. showed that the persistence of Modic type 1 signal after arthrodesis suggests pseudoarthrosis [31]. Toyone et al. concluded that Modic type 1 signal is associated with instability, requiring arthrodesis more commonly than Modic type 2 change, which can accompany nerve-root compromise [5]. In brief, we can state that Modic type 1 changes are associated with instability and painful disorders connected with instability. In such cases, posterior dynamic stabilization could be an effective and alternative treatment modality.
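As a small illustration of the radiological bookkeeping described in the Methods above, the sketch below computes the IVS ratio (mean of the anterior and posterior intervertebral disc heights divided by the height of the rostral vertebra) and compares paired pre- and postoperative values with the Wilcoxon test used in the study. The measurement values are hypothetical placeholders, not study data.

```python
import numpy as np
from scipy import stats

def ivs_ratio(anterior_mm, posterior_mm, rostral_body_mm):
    """IVS ratio: mean anterior/posterior intervertebral disc height
    divided by the vertebral body height of the rostral vertebra."""
    return ((anterior_mm + posterior_mm) / 2.0) / rostral_body_mm

# Hypothetical paired measurements (mm) for five patients, preoperative and 3 months postoperative.
pre = np.array([ivs_ratio(a, p, v) for a, p, v in [(8.1, 5.2, 27.0), (7.4, 4.9, 26.5),
                                                   (6.8, 4.1, 25.8), (9.0, 5.5, 28.2),
                                                   (7.9, 5.0, 27.4)]])
post = np.array([ivs_ratio(a, p, v) for a, p, v in [(9.2, 6.0, 27.1), (8.3, 5.6, 26.4),
                                                    (7.9, 4.8, 25.9), (9.8, 6.1, 28.1),
                                                    (8.6, 5.7, 27.5)]])

stat, p_value = stats.wilcoxon(pre, post)  # paired, non-parametric comparison
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.3f}")
```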
Amino acid profile and oxidizable vitamin content of Synsepalum dulcificum berry (miracle fruit) pulp The amino acid profile of the Synsepalum dulcificum berry was studied. Among the essential amino acid observed, leucine (2.35 g/100 g protein) was the highest while methionine (0.31 g/100 g protein) was the lowest. The nonessential amino acids were also discovered, with glutamic acid (3.43 g/100 g protein) being the highest and glycine (0.38 g/100 g protein), the lowest. The study of the oxidizable vitamins revealed that vitamin C (1.33 mg/100 g) was more abundant than vitamin A (2.54 µg) and vitamin E (0.78 mg/100 g). This information will hopefully enhance the fruits acceptability by more people and thus, generally promote its utilization and appreciation in our diets. Introduction Synsepalum dulcificum is a tropical fruit, native to West Africa. The plant belongs to the family -Sapotaceae. Although it can grow up to 20 feet high, its predominant form is shrubby. The plant first bears fruits after growing for approximately 2-3 years. At times, it produces two crops per year, often around March-April and later, after the rainy season. It produces green elongated leaves which remain green as long as they remain attached to the plant all year long (plate 1). Although it has two varieties, distinguished by the production of red and yellow berries, the yellow variety (plate 1) is more prevalent in Nigeria, especially the Eastern part of Nigeria. The berry has a unique effect on the taste buds, such that flavors of fruits (citrus fruits), consumed after eating the fruit are generally enhanced and their delicate flavors, formerly masked by natural acids, are released, hence the name 'miracle fruit'. A new class of sweeteners from proteins found in the fruits of tropical plants has been discovered, and natives of the areas where the plants producing these proteins grow naturally have frequently used them to sweeten their food stuff. Synsepalum dulcificum is one of such plants. There is increased interest in natural sweeteners which may be as a result of 'perceived' health risks of some artificial sweeteners (WHO 1999). The miracle fruit has been in use since the 18th century (Slater 2007). Some scientists (Inglett et al. 1965) found some experimental evidence that the active principle was a macromolecule. The taste modifying principle was independently isolated by Kurihara and Beidler (1968), Henning et al. (1969), and Brouwer et al. (1968); and found to be miraculin. The destruction of the active principle by trypsin and pronase suggested its proteinaceous character. Other scientists (Metcalfe and Chalk 1972) from their studies confirmed that the sweetening property of miracle fruit was due to the presence of miraculin, a glycoprotein consisting of 191 amino acids and some carbohydrate chains (Theerasilp et al. 1989) found in the pulp of the berry. An evaluation of the amino acid profile of the yellow variety of Synsepalum dulcificum, therefore, becomes necessary to identify and quantify some of these amino acids. The vitamins content was investigated to determine if the pulp can provide additional benefits. This natural sweetener may be exploited especially by dieters and diabetes, who need more protein and vitamins in their diet. Materials and Methods Fresh mature berries of Synsepalum dulcificum (miracle fruit) were obtained from Umuagwo in Ohaji Egbema Local Government Area of Imo State, Nigeria. 
The pulp of these freshly harvested and cleaned Synsepalum dulcificum berries (plate 2) was extracted by scraping the fruits with clean stainless spatula. It was oven dried and used for the vitamins and amino acid profile analysis. Determination of oxidizable vitamins content The vitamins A, E, and C contents of the pulp sample were determined using the procedure described by Pearson (1976). Preparation of sample for vitamins determination The sample was oven dried using a Baird and Tatlock oven BS2648 at a temperature of 50°C for 24 h. Determination of vitamin A content One gram of the sample was macerated with 30 mL of absolute alcohol. Three milliliters of 50% potassium hydroxide was added. The solution was boiled for 30 min and cooled. Thirty milliliters of distilled water was added. The mixture was transferred to a separating funnel and washed with 10 mL of petroleum ether. The lower layer was discarded while the upper layer was evaporated to dryness. The residue was dissolved with 10 mL of isopropyl alcohol. The absorbance was taken at 334 nm using a spectrum 21D PEC spectrophotometer. The vitamin A content was extrapolated from a vitamin A standard curve ( Fig. 1). Alternatively, using the formula given below (1) where DF = Dilution Factor Gradient Factor = slope of the standard curve ( Fig. 1). Determination of vitamin E content One gram of the pulp sample was macerated with 20 mL of ethanol. The solution was filtered with Whatman No 1 filter paper. One milliliter of the filtrate was pipetted out and 1 mL of 0.2% ferric chloride in ethanol was added. One milliliter of 0.5% a-dipyridyl solution was also added. The solution was diluted to 5 mL with water and the absorbance read at 520 nm using a spectrum 21D PEC spectrophotometer. The vitamin E content was extrapolated from a vitamin E standard curve (Fig. 2). Alternatively, using the formula given below: where DF = Dilution Factor Gradient Factor = slope of the standard curve (Fig. 2). Determination of vitamin C content One gram of the sample was macerated with 20 mL of 0.4% oxalic acid. It was filtered with Whatman No 1 filter paper. One milliliter of the filtrate was pipetted out and 9 mL of indophenol reagent added to it. The absorbance was read at 520 nm (using a spectrum 21D PEC spectrophotometer). The vitamin C content was extrapolated from a vitamin C standard curve (Fig. 3). Alternatively, using the formula given below: where DF = Dilution Factor Gradient Factor = slope of the standard curve (Fig. 3). Amino acid profile determination The amino acid profile in the sample was determined using methods described by Speckman et al. (1958). The sample was dried to constant weight. A known weight (300 mg) of the dried sample was put into extraction thimble and the fat was extracted using Soxhlet extraction apparatus as described by AOAC (2006). A small amount (200 mg) of ground fat-free sample was weighed, wrapped in Whatman No 1 filter paper, and put in a Kjeldhal digestion flask. It was digested and distilled. The distillate was then titrated with standardize 0.01 N hydrochloric acid to gray-colored end point and the percentage nitrogen in the sample was calculated using the formula below: A known weight (50 g) of the defatted sample was put into glass ampoule. Seven milliliters (7 mL) of 6 N hydrochloric acid (HCl) was added and oxygen was expelled by passing nitrogen into the ampoule (this is to avoid possible oxidation of some amino acids such as methionine and cystine during hydrolysis). 
The glass ampoule was then sealed with Bunsen burner flame and put in an oven preset at 105°C AE 5°C for 22 h. The ampoule was allowed to cool before breaking it open at the tip and the content was filtered to remove the humins. The filtrate was then evaporated to dryness at 40°C under vacuum in a rotary evaporator. The residue was diluted with 5 mL acetate buffer (pH 2.0) and stored in plastic specimen bottles, which were kept in the freezer. It was noted that tryptophan was destroyed by hydrolysis with 6 N hydrochloric acid. Between 5 and 10 µL of the buffered residue was dispensed into the cartridge of the analyzer and analyzed with the TSM (Technicon Sequential Multi-sample) analyzer to acidic, neutral, and basic amino acids. The period of an analysis lasted for 76 min. A constant 'S' was calculated for each amino acid in the standard mixture using the formula, where MAA = micromole of amino acid in the standard. Finally, the amount of each amino acid present in the sample was calculated in g/16 gN or g/100 g protein using the following formula: Results and Discussion From Table 1, it was observed that although fruits are known as important sources of vitamins especially vitamins A and C, the pulp of yellow Synsepalum dulcificum was found to be very low in vitamin C, with content of 1.33 mg/100 g AE 0.24. This was less than the contents in other berries (blackberry, blueberry, raspberry, and strawberry) and fruits as reported by Food and Nutrition Board (2006), FNIC (2011) and Ihekoronye and Ngoddy (1985). Low ascorbic acid (vitamin C) levels have been associated with fatigue and increased severity of respiratory tract infections (Johnston et al. 1998). The vitamin A content of the sample was 2.54 µg (8.476 IU) (Table 1). Although the precursors of vitamin A, including beta-carotene and certain other carotenoids are found particularly in yellow to orange colored fruits, the content in the sample does not compare favorably with the content in blackberry (214 IU), raspberry (160 IU), and blueberry (54 IU) (Wikipedia 2011a, 2011b, 2011c). The vitamin A value in the pulp was also very low compared to other fruits such as pineapple (50 IU), guava (200 IU), orange (120 IU), mango (1000-8000 IU), and pawpaw (2500 IU) (Harald 1997;Onyeka 2002;Food and Nutrition Board 2006). Deficiency of vitamin A leads to night blindness, failure of normal bone, and tooth development in the young and diseases of epithelial cells and membrane of the nose, throat, and eyes which decrease the body's resistance to infection (Arnold 1960). The vitamin E in the pulp of Synsepalum dulcificum (0.78 mg/100 g AE 0.05) ( Table 1) was higher than those of blueberry (0.57 mg) and raspberry (0.56 mg) (USDA 2004) but lower than the 1.17 mg content in blackberry (Wikipedia 2012). It was also higher than the content in the citrus fruits (0.24-0.25 mg) but lower than the content in mango (1.12 mg), pawpaw (1.12 mg), and avocado (1.34 mg) (Onyeka 2002). Vitamin E prevents the peroxidation of membrane phospholipids and cell membrane oxidation through its antioxidant actions. This berry is primarily consumed for its taste-modifying effect and not necessarily for its nutrients. As such, it is only eaten when there is a need for its sweetening function, making it highly underutilized. This investigation aims to change this by identifying its nutritional benefits. 
However, from the results above, it is observed that to adequately provide needed vitamins, in comparison with other berries and fruits, more quantity of the berry pulp may be consumed. All the essential amino acids were detected in the test sample ( Table 2). The chemical scores for the essential amino acids calculated from the WHO reference protein (Ihekoronye and Ngoddy 1985;Onuegbu et al. 2011) are also shown in Table 2. The highest value was from leucine (2.35 g/100 g protein) with chemical score of 55.95%, followed by Lysine (1.60 g/100 g protein, chemical score of 38.10%), and the lowest from methionine (0.31 g/100 g protein) with chemical score of 14.09%. Leucine, isoleucine, and valine are oxidized in the muscle and the nitrogen used for the formation of alanine. All the analysed amino acids in miracle fruit had values lower than the amounts reported for African pear (Dacryodes edulis) pulp by FAO/WHO/UNU (1985). However, they were higher in quantity than the amino acids in Pyrus communis pear pulp (Mahammad et al. 2010). The nonessential amino acids were also detected as shown in table. Glutamic acid had the highest value (3.43 g/100 g protein) while glycine had the least value (0.38 g/100 g protein). Norleucine was not detected. The values of the amino acidsisoleucine, leucine, lysine, threonine, and valinein the miracle berry, all exceeded the (FAO/ WHO/UNU 1991) reference values of 2.8 mg/100 g protein, 6.6 mg/100 g protein, 5.8 mg/100 g protein, 3.4 mg/ 100 g protein, and 3.5 mg/100 g protein, respectively. The methionine + cysteine and phenylalanine + tyrosine (FAO/WHO/UNU 1991) reference values of 2.5 mg/100 g protein and 6.3 mg/100 g protein, respectively, were all exceeded in the miracle berry. This implied that the amino acids in the pulp of miracle fruit had high biological values and could contribute in meeting the human requirements of these essential amino acids especially if the commercial potential of this berry or its processed by-products is exploited. However, in comparison with the reference standard for ideal protein, the value for leucine and isoleucine contents of Synsepalum dulcificum pulp were below the recommended amino acid requirements (4.6 g/100 g protein) (Mahammad et al. 2010) for infants. Conclusion The research revealed that the berry's pulp had more vitamin C than vitamins A and E. The oxidative vitamin con- tent (vitamin C, A, and E) of the pulp was generally lower than that of other berries like blackberry, raspberry, and blueberry. The berry also had varying amounts of all the essential amino acids, with leucine having the highest amount and methionine the least value. This investigation on the yellow variety of the miracle berry has revealed the amino acid profile of the pulp. This study has also provided information on vitamin contents of the berry with respect to their identity and quantity in the pulp.
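The display formulas referred to in the vitamin assays above did not survive extraction. As a hedged reconstruction, the sketch below assumes the conventional standard-curve relation, content = (absorbance / slope) × dilution factor, which is consistent with the stated definitions of the dilution factor and the gradient factor (slope of the standard curve); the authors' exact formulas may differ, and the numeric values shown are placeholders rather than study data.

```python
import numpy as np

def gradient_factor(standard_conc, standard_abs):
    """Slope of the standard curve (absorbance per unit concentration),
    fitted by least squares through the origin."""
    x = np.asarray(standard_conc, dtype=float)
    y = np.asarray(standard_abs, dtype=float)
    return float(np.sum(x * y) / np.sum(x * x))

def vitamin_content(absorbance, slope, dilution_factor):
    """Assumed relation: concentration read off the standard curve,
    scaled by the dilution factor (see note above; not the authors' exact formula)."""
    return (absorbance / slope) * dilution_factor

# Placeholder vitamin C standard curve (mg/100 g vs. absorbance at 520 nm).
slope = gradient_factor([0.5, 1.0, 2.0, 4.0], [0.11, 0.21, 0.43, 0.86])
print(f"{vitamin_content(absorbance=0.28, slope=slope, dilution_factor=1.0):.2f} mg/100 g")
```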
Body Mass Index and Late Adverse Outcomes after a Carotid Endarterectomy A cohort study was conducted to examine the association of an increased body mass index (BMI) with late adverse outcomes after a carotid endarterectomy (CEA). It comprised 1597 CEAs, performed in 1533 patients at the Vascular Surgery Clinic in Belgrade, from 1 January 2012 to 31 December 2017. The follow-up lasted four years after CEA. Data for late myocardial infarction and stroke were available for 1223 CEAs, data for death for 1305 CEAs, and data for restenosis for 1162 CEAs. Logistic and Cox regressions were used in the analysis. The CEAs in patients who were overweight and obese were separately compared with the CEAs in patients with a normal weight. Out of 1223 CEAs, 413 (33.8%) were performed in patients with a normal weight, 583 (47.7%) in patients who were overweight, and 220 (18.0%) in patients who were obese. According to the logistic regression analysis, the compared groups did not significantly differ in the frequency of myocardial infarction, stroke, and death, as late major adverse outcomes (MAOs), or in the frequency of restenosis. According to the Cox and logistic regression analyses, BMI was neither a predictor for late MAOs, analyzed separately or all together, nor for restenosis. In conclusion, being overweight and being obese were not related to the occurrence of late adverse outcomes after a carotid endarterectomy. Introduction According to the World Health Organization (WHO), in 2016, almost 60% of adults in the WHO European Region were overweight and obese [1]. In Serbia, according to the latest health survey from 2019, the results were similar to those in Europe: 36.3% of adults were overweight, while 20.8% were obese [2]. Being overweight and obese are the fourth most common risk factors for noncommunicable diseases, including cancers, cardiovascular and cerebrovascular diseases, type 2 diabetes mellitus, and chronic respiratory disorders [1][2][3][4][5][6]. They have also been related to perioperative mortality after vascular surgery [7,8]. In addition, some studies have identified obesity as an independent risk factor for carotid plaque destabilization [9], although others showed no relationship between obesity and carotid artery stenosis [10]. Extracranial carotid disease accounts for approximately 18-25% of ischemic strokes [11,12]. Carotid endarterectomy, as a stroke preventive procedure, carries some risk for periprocedural and postprocedural complications that must be considered in the overall assessment of its safety and efficacy. Data on the association between obesity and CEA adverse outcomes are inconsistent. In some of the investigations [13,14], there was no association between obesity and stroke or death as early adverse outcomes after CEA. Still, there are also studies in which perioperative mortality after the intervention was higher in obese patients [15,16], as well as studies in which mortality and stroke were significantly lower in obese patients [17,18], which corresponds to the so-called "obesity paradox". Most of these studies referred to early adverse outcomes. The aim of the present study was to examine the association of increased BMI (overweight and obesity) with late adverse outcomes after CEA. Materials and Methods As previously described in detail [14], this cohort study comprised 1533 patients in whom 1597 CEAs were performed at the University Vascular Surgery Clinic in Belgrade from 1 January 2012 to 31 December 2017. 
Patients who previously had massive cerebrovascular insult and severe neurological damage, patients with CEA and simultaneous coronary artery bypass grafting, patients with severe cardiac comorbidity, and patients with severe renal insufficiency were not included in the study. The body mass index (BMI) was calculated as the weight in kg divided by the height in m 2 . For the purpose of the present study, patients were categorized into four categories: underweight (<18.5), normal weight (BMI = 18.5-24.9), overweight (BMI = 25.0-29.9), and obese (BMI ≥ 30) [19]. The BMI status before the operation was used in the analyses of all CEA outcomes. From all patients, we obtained data on some demographic characteristics, body height and weight, smoking, comorbidities, some laboratory findings at admission, family history of cardiovascular diseases, characteristics of carotid disease, the operative data, and the preoperative therapy, as well as the hospital discharge therapy. All CEAs were performed under general anesthesia. The patients were monitored for the first 30 postoperative days in order to determine whether there were any differences in early adverse outcomes, according to BMI; the results obtained were previously presented [14]. Follow-up was performed four years after CEA. Patients who did not come to the control examinations at the Clinic where they were operated on were contacted by phone. In the phone interview performed with the study participant or a member of their family, the following data were collected: data of patients' survival and cause of death, if known, neurological and cardiac events, and their time of occurrence, as well as the data of the patients' last duplex ultrasound control and possible restenosis. Only the first event of each outcome was included in the analysis. Data for myocardial infarction and stroke were available for 1223 CEAs, data for death for 1305 CEAs, and data for restenosis for 1162 CEAs. The Ethics Committee of the Faculty of Medicine, Belgrade approved this study (No. 1322/XII-1). Statistical Analysis The categorical variables were presented as numbers and percentages, and continuous variables were presented as the means and standard deviations. The CEAs in patients who were overweight and obese were compared separately with the CEAs in patients with a normal weight. Univariate and multivariate logistic regressions were used in the analysis. All variables that were, according to univariate analysis, associated with overweight or obesity at a level of p ≤ 0.10 were included in the model of multivariate analysis. Cox regression analysis was used to assess the hazard ratio of the late adverse outcomes (myocardial infarction, stroke, and death, separately and all together as late major adverse outcomes-MAOs) in patients who were overweight and obese compared to those with a normal weight. We analyzed whether being overweight and having obesity were predictors of restenosis using logistic regression analysis, since we had no data on the time when restenosis occurred. The selection method was backward Wald. All p values were based on a two-tailed test, and p < 0.05 was considered significant. In order to determine whether there was any correlation between the level of the BMI and the late adverse outcomes, the ROC curve analysis was also used. In addition, we analyzed the correlation between the independent variables included in the study. 
Follow-up data concerning stroke, myocardial infarction, and death, separately and together, were analyzed by the Kaplan-Meier test. The Statistical Package for Social Sciences (SPSS), version 23, was used for the analysis. Results Out of the 1223 CEAs, for which the data of late MAOs (myocardial infarction, stroke, and death all together) were obtained, 413 (33.8%) were performed in patients with a normal BMI, 583 (47.7%) in patients who were overweight, and 220 (18.0%) in patients with obesity. The number of CEAs in patients who were underweight (seven CEAs-0.5%) was too small to be included in further analysis. In comparison to the patients with a normal BMI (Table 1), the patients who were overweight were more frequently males (p < 0.01), and, in their personal history, they more frequently had an aortocoronary bypass-ACB (p ≤ 0.10), aneurysmatic disease (p ≤ 0.10), and non-insulin-dependent diabetes mellitus-NIDDM (p < 0.05). Compared to the patients with a normal BMI, the patients with obesity were younger (p < 0.05), and in their personal history, they more frequently had a myocardial infarction (p < 0.01) and NIDDM (p < 0.001) and had a more frequent family history of cardiovascular diseases-CVD (p ≤ 0.10). Moreover, they used OAC therapy more frequently before the operation (p < 0.05) and had higher values of triglycerides at hospital admission (p < 0.05). According to the data presented in Table 2, there were no significant differences between the groups in terms of the characteristics of the carotid disease and in the type of surgery; however, in the patients with obesity, the clump duration was shorter than in those with a normal BMI (p ≤ 0.10). Statistically significant differences were found for some drugs prescribed at the time of discharge: ACEI were prescribed more frequently in patients who were overweight compared to those with a normal weight (p < 0.01), and OAC were prescribed more frequently in patients with obesity than patients with a normal BMI (p < 0.05). Moreover, in comparison to the patients with a normal weight, higher doses of statins were prescribed in patients with obesity (p < 0.10). The groups did not significantly differ in the frequency of myocardial infarction, stroke, death, and restenosis as late adverse outcomes of CEA (Table 3). According to the results of the multivariate regression analysis (Table 4), in comparison to patients with a normal BMI, those who were overweight significantly differed in sex, being more frequently males (p = 0.004), they more frequently had NIDDM (p = 0.023), and the use of ACEI at the time of discharge was more frequent (p = 0.003). In comparison to patients with a normal weight, patients with obesity were younger (p = 0.029), they more frequently had a myocardial infarction (p = 0.002), NIDDM (p < 0.001), and increased triglyceride levels (p = 0.045). Moreover, patients with obesity more frequently used OAC in the preoperative therapy (p = 0.010), and the clump duration was shorter (p = 0.023). In the ROC curve analysis, there was no significant correlation between the level of the BMI and any of the late adverse outcomes [myocardial infarction (area under the curve-AUC = 0.502, p = 0.955), stroke (AUC = 0.502, p = 0.966), death (AUC = 0.484, p = 0.583), and restenosis (AUC = 0.516, p = 0.563)]. Moreover, there was no significant correlation between the independent variables included in the study. None of the correlation coefficients was higher than 0.75, taken as the limit value significant for multicollinearity. 
The follow-up data concerning stroke, myocardial infarction, and death, separately and together, did not show any difference between those with a normal weight, those who were overweight, and those with obesity, analyzed by the Kaplan-Meier test. The curves almost overlapped with p values ranging from 0.973 to 0.764. (The data are available as supplementary materials). According to a Cox regression analysis (Table 5), patients with and patients without late major adverse outcomes (myocardial infarction, stroke, and death), analyzed separately or altogether, did not differ in the BMI. In the logistic regression analysis, being overweight and having obesity were also not predictors for restenosis (Table 5). Table 5. Association of being overweight and having obesity with the occurrence of late major adverse outcomes (myocardial infarction, stroke, and death) and restenosis after carotid endarterectomies *. Discussion In the present investigation, there were no significant differences in the frequency of late adverse outcomes after CEAs in patients who were overweight and patients with obesity compared separately to the CEAs in patients with a normal BMI. There is a large amount of literature data about the association between the BMI and early adverse outcomes after CEA [13,15,16,18], but the results have been inconsistent. The literature data on the association of BMI with late adverse outcomes after CEA are scarce. In fact, we found only a few articles [16,[20][21][22]. When early adverse outcomes after CEA were considered, our results were similar to the results of these studies. In the analysis of the early adverse outcomes after CEA [14], we found that being overweight and having obesity were associated neither with major nor minor complications nor the need for reoperation. Only bleeding was significantly less frequent after CEA in patients who were overweight compared to patients with a normal weight. In Volkers et al.'s study [16], BMI was not associated with postprocedural risk of stroke or death, and in the study by Jeong et al. [21], BMI was not associated with early major adverse events (MAEs). In the study of Arinze et al. [20] conducted on nearly 90,000 patients, only those with morbid obesity (BMI ≥ 40 kg/m 2 ) had significantly higher perioperative cardiac complications, while bleeding was significantly more frequent in patients who were underweight; however, neither of these two BMI groups was analyzed separately in our study. However, while we did not find an association between being overweight and having obesity with any of the late adverse outcomes, Volkers et al. [16] found that a BMI 25-29.9 was associated with a lower postprocedural risk of stroke or death than a BMI 20-24.9. In the study by Arinze et al. [20], one-year and five-year survival after carotid endarterectomy was significantly associated with the body mass index. Compared to those with a normal weight, patients who were underweight had an increased risk for one-year mortality, but in both patients who were overweight and patients with obesity, the risk for one-year mortality was decreased. The same associations persisted at the five-year time point. Moreover, compared to patients with a normal weight, those with morbid obesity had a decreased risk for fiveyear mortality. The authors stated that these results agreed with the questionable obesity paradox [23], also present in the study by Volkers et al. [16]. In the recent study by Blecha et al. 
[22], a BMI < 20 kg/m 2 was one of the predictors of five-year mortality after CEA for asymptomatic carotid stenosis. In the present investigation, we analyzed symptomatic and asymptomatic patients together, and we did not include patients who were underweight in the analysis due to the very small number. Jeong et al. [21] did not find an association between the BMI and late MAEs but found that a higher BMI was significantly related to the occurrence of restenosis. It is difficult to say whether these differences in the association between the BMI and late complications after CEA are the results of the fact that our sample was small in comparison with some other studies or whether they occurred as a result of some differences in the variables in terms of the potential confounders that were included in these investigations. Although being overweight and having obesity were not significantly related to late complications after CEA, they were, significantly and independently of other variables, associated with some factors found to be predictors of late adverse outcomes, such as noninsulin-dependent diabetes mellitus [24][25][26][27][28], increased triglyceride levels [29], and some other cardiovascular diseases in the personal history, according to the patients' reports or postulated based on the therapy they received [30][31][32]. In the present study, non-insulin-dependent diabetes mellitus was significantly more frequent in both patients who were overweight and patients with obesity, compared to those with a normal weight. It is well known that an increased BMI is strongly correlated with type 2 diabetes mellitus (T2DM) [33,34]. According to a meta-analysis from the USA and Europe, people with obesity had a several times higher chance of developing T2DM compared to those with a normal weight [35]. The fat tissue distribution is considered the crucial factor in developing insulin resistance and, consequently, T2DM, independent from the stage of obesity [36], and those with a high proportion of visceral fat and limited abdominal subcutaneous fat are more insulin-resistant [37]. Both obesity and atherosclerosis are lipid storage disorders, with triglyceride accumulation in the fat tissue and cholesterol esters in atherosclerotic plaques [38]. In our study, the patients with obesity had significantly higher triglyceride levels than the patients with a normal weight. High triglyceride (TG) levels reflect the presence of high levels of TG-rich lipoprotein (TRL) remnants [39], which seem to be more proatherogenic than LDLs [40]. The accumulation of TRL remnants in atherosclerotic plaques plays an important role in the inflammatory response and the further development of atherosclerosis [41]. Hypertriglyceridemia, as secondary dyslipidemia in obesity, may contribute to the formation and progression of atherosclerotic plaques, including the carotid district. A recent review article demonstrates that both fasting and non-fasting hypertriglyceridemia are risk factors for CAS progression and cerebrovascular events associated with CAS [42]. It is also known that obesity contributes directly to incident cardiovascular risk factors, including dyslipidemia, type 2 diabetes, hypertension, and sleep disorders, but it can lead to the development of some cardiovascular diseases and cardiovascular disease mortality, independent from other risk factors [43,44]. 
The susceptibility to obesity-related cardiovascular diseases is not mediated solely by the total body fat mass but also depends on individual differences in regional body fat distribution. The cardiovascular complications associated with obesity are also driven by various mechanisms, such as adipocytokine imbalance, inflammation, insulin resistance, endothelial dysfunction, coronary calcification, activation of coagulation, and activation of the renin-angiotensin and sympathetic nervous systems [45]. Powell-Wiley et al. stressed the need for further evaluation of the mechanisms underlying obesity-related cardiac dysfunction. There were some limitations to our investigation. Due to the small numbers, patients who were underweight were not included in the present study. Moreover, class II and III obesity were grouped together with class I obesity, although, according to the results of other investigations, each of these BMI subgroups could be associated with complications after CEA. There is also the question of whether BMI is the best measure of adiposity, or whether it would be better to use measures such as the waist circumference, waist-to-height ratio, waist-to-thigh ratio, or the InBody test [46][47][48][49].
Conclusions
Compared to the patients with a normal weight, the patients who were overweight were significantly more frequently male, more frequently had non-insulin-dependent diabetes mellitus, and more frequently received ACEI in their hospital discharge therapy. The patients with obesity were significantly younger, more frequently had myocardial infarction and non-insulin-dependent diabetes mellitus in their personal history, more frequently had increased triglyceride levels and OAC in their preoperative therapy, and had a shorter clamp duration. However, being overweight or having obesity was not significantly associated with the occurrence of myocardial infarction, stroke, death, or restenosis as late adverse outcomes after carotid endarterectomy.
Tumor-derived exosomes regulate expression of immune function-related genes in human T cell subsets Tumor cell-derived exosomes (TEX) suppress functions of immune cells. Here, changes in the gene profiles of primary human T lymphocytes exposed in vitro to exosomes were evaluated. CD4+ Tconv, CD8+ T or CD4+ CD39+ Treg were isolated from normal donors’ peripheral blood and co-incubated with TEX or exosomes isolated from supernatants of cultured dendritic cells (DEX). Expression levels of 24–27 immune response-related genes in these T cells were quantified by qRT-PCR. In activated T cells, TEX and DEX up-regulated mRNA expression levels of multiple genes. Multifactorial data analysis of ΔCt values identified T cell activation and the immune cell type, but not exosome source, as factors regulating gene expression by exosomes. Treg were more sensitive to TEX-mediated effects than other T cell subsets. In Treg, TEX-mediated down-regulation of genes regulating the adenosine pathway translated into high expression of CD39 and increased adenosine production. TEX also induced up-regulation of inhibitory genes in CD4+ Tconv, which translated into a loss of CD69 on their surface and a functional decline. Exosomes are not internalized by T cells, but signals they carry and deliver to cell surface receptors modulate gene expression and functions of human T lymphocytes. fragmentation 3,14,15 . Interestingly, these effects of TEX could be in part blocked by pre-incubation of human T cells with IRX-2, a cocktail of natural cytokines 14 . In in vitro experiments and upon administration to patients with cancer as a therapeutic, IRX-2 was effective in protecting human CD8 + T cells from TEX-mediated apoptosis 14 . Protection of the immune cells from TEX-induced dysfunction and death, inhibition of suppressive signaling by TEX or both are likely to become important aspects of future therapeutic anti-tumor strategies 16,17 . For this reason, a better understanding of cellular and molecular mechanisms TEX utilize to mediate immune suppression is necessary. Current approaches to overcoming tumor-induced suppression of anti-tumor T cell activity depend on the use of check-point inhibitors, such as, e.g., antibodies (Abs) specific for CTLA-4, PD-1 or PD-L1 18,19 . The ongoing clinical trials with these checkpoint inhibitors provide evidence that a therapeutic restoration of anti-tumor responses can be successful in improving outcome for some patients with cancer 20 . Consequently, there is much interest in identifying other molecular pathways contributing to tumor-induced immune suppression and potentially in silencing of these pathways. TEX carry a wide range of suppressive molecules derived from the tumor cell surface and the cytoplasm of the parental tumor cell [1][2][3]21 . So armed, exosomes can interact with immune and non-immune cells delivering signals which specify suppression of essential functions in the responder cells. TEX have been reported to be able to modify the transcriptional profile of the recipient cells such as human brain microvascular endothelial cells or human hematopoietic cells 22,23 . In view of these reports, we considered the possibility that TEX-delivered signals induce changes in the transcriptional profile of T cells and that the immune response-regulating genes would be preferentially targeted in T lymphocytes, especially in activated T lymphocytes. 
The objective of this study is to demonstrate that TEX co-incubated with freshly purified human CD4 + CD39 + Treg, conventional CD4 + T cells (CD4 + Tconv) or CD8 + T lymphocytes differentially regulate expression of the key immune function-related genes in these T cell subsets. Results Exosomes isolated from supernatants of the PCI-13, a human tumor cell line, or dendritic cells (DC) had the expected morphology by TEM (Fig. 1), the particle size in the range of 30-100 nm by NanoSight and were biologically active in NK-cell assays as previously described by us 24 . Immunobead-based capture of CD4 + Tconv, CD8 + T cells and CD4 + CD39 + Treg from normal donors' PBMC by AutoMACS yielded highly enriched subsets of T cells to be targeted by exosomes. Isolated CD4 + and CD8 + T cell subsets had the purity of over 90%, while the purity of CD4 + CD39 + Treg varied from 80 to 85%, as determined by flow cytometry. Effects of TEX on mRNA profiles in resting vs. activated T cell subsets. CD4 + T cells (CD4 + Tconv), CD8 + T cells and CD4 + CD39 + Treg were isolated from peripheral blood of three normal donors and each isolated subset was individually co-incubated with exosomes isolated from supernatants of cultured tumor cells (TEX) or from supernatants of cultured human dendritic cells (DEX). In preliminary titration experiments, we observed that TEX-induced changes in lymphocyte mRNA expression were exosome dose dependent, cell type dependent and cell activation dependent. For example, Supplemental Figure 1 shows that the Ct values for IL-8 mRNA expression levels were not changed by TEX in resting or activated CD4 + Tconv or CD8 + T cells, while in activated CD4 + CD39 + Treg, TEX significantly increased the Ct value for IL-8 (i.e., they down-regulated the IL-8 gene expression level), but only at one TEX to Treg ratio (1 ugTEX protein/25,000 Treg). These preliminary experiments indicated that TEX-induced changes in mRNA gene profiles need to be independently evaluated in resting and activated T cell subsets using calibrated TEX doses and pre-determined numbers of target cells to achieve optimal effects. Based on these and other preliminary experiments, we also concluded that a higher Exosomes isolated by differential centrifugation, ultrafiltration and size exclusion chromatography were placed on copper grids, stained with uranyl acetate and examined. Note their vesicular morphology and the size range, which does not exceed 50 nm. The TEM image was acquired and generously provided by Dr. Sonja Funk. numbers of T cells would be needed to optimize the mRNA recovery. Therefore, in all subsequent experiments larger numbers of resting as well as activated (via the T cell receptor) T cells (0.5 × 10 6 to 1 × 10 6 /well) were co-incubated with TEX at the constant exosome concentration of 10 ug protein or in PBS. Effects of TEX and DEX on mRNA expression levels in T cells. While all exosomes isolated from supernatants of tumor cell lines (TEX) are tumor derived, those isolated from supernatants of cultured human dendritic cells (DEX) are produced by normal cells. To evaluate effects of exosomes on mRNA gene expression in lymphocytes, we established a model in vitro system comprised of isolated allogeneic TEX or DEX co-incubated with human T cell subsets (CD4 + Tconv, CD8 + T and CD4 + CD39 + Treg) for 16 h. T cells were isolated from peripheral blood of three different randomly-selected normal donors. 
Following co-incubation with TEX or DEX, cellular mRNA was harvested from T cells, reverse transcribed and analyzed by qRT-PCR in the microplate system described in Materials and Methods. Changes in expression levels of the selected 24-27 genes in T cells were simultaneously measured relative to the levels in control wells (PBS; no TEX or no DEX). The waterfall plots in Figs 2 and 3 illustrate fold changes in expression levels of these genes in the three T cell subsets (resting or activated) of one representative normal donor. In resting CD4 + Tconv, fold changes in mRNA expression levels were similar for TEX and DEX (Fig. 2). With the exception of only 4 genes (IL-10, COX-2, PTGES and Fas), expression levels of all other genes were decreased relative to controls. Only few of these decreases were significant, including expression of CD26, CD40L and CD73. Resting CD8 + T cells were more responsive to TEX or DEX than CD4 + Tconv. Treg were least responsive to TEX or DEX and showed a distinct change in the mRNA profile after co-incubation with TEX. Expression of IL-10 and COX-2 was significantly up-regulated, while that of CD73 was significantly down-regulated by TEX. All other genes were not significantly altered in expression. In activated T cells, the mRNA expression levels were up-regulated relative to controls by both TEX and DEX but these changes were quantitatively different, with DEX inducing greater transcriptional increases than TEX in nearly all genes in all three T cell subsets (Fig. 3). This was not a consistent result, however, as with the third donor's T cells, down-regulation of gene expression levels was seen, similar to that observed in resting T cells. Activated Treg appeared to be more responsive to TEX than the other two T cell subsets. Interestingly, in activated Treg, the genes coding for CD25 (IL-2R), ectonucleotidases (CD39 and CD73) and adenosine deaminase (CD26) were significantly up-regulated in expression by either TEX or DEX. Also, the PD-L1 expression levels were up-regulated in CD4 + Tconv and CD8 + T cell but less so in Treg. In aggregate, the data suggest that activated T cells are highly susceptible to transcriptional modulation by both TEX and DEX, and that the initial Multifactorial data analysis. To determine significance of the observed fold changes in cellular mRNA expression levels, multifactorial analysis of the data for all T cell subsets obtained from three different donors and incubated with TEX or DEX was performed. By calculating and combining mean Δ Ct values for all factors (T cell subsets, T cell activation, exosome source or exosome absence) and using normalized mean Δ Ct for all tested genes, we determined that of the three factors considered, it was cellular activation followed by the responding T cell type that best discriminated between exosome-mediated effects on mRNA expression levels in lymphocytes ( Table 1). The exosome source had little or no impact on fold changes of gene expression levels in T cells co-incubated with TEX or DEX. Heat-map analysis. To be able to compare exosome-induced changes in expression levels of individual genes within the T cell subsets, unsupervised and supervised heat-map analyses were next performed. An unsupervised heat map for the entire data set (Fig. 4A) illustrates differential effects of TEX and DEX on activated vs. resting T cells. Two major clusters were identified. 
Within the activated T cell cluster, TEX and DEX induced distinct transcriptional changes in T cells, as indicated by lower Ct values (i.e., higher level of mRNA transcription) for DEX-vs TEX-induced transcripts, as also shown by waterfall plots in Fig. 3. The heat map indicates that gene expression changes induced by TEX or DEX are quantitatively different from those in PBS-treated control T cells. Activated Treg co-incubated with TEX have a distinct transcriptional profile from that seen in Treg incubated with DEX (see asterisks in Fig. 4A). The lowest transcriptional activity (in green) occurred for the adenosine pathway-related genes, and the highest (in red) in the immunoregulatory genes such as PD-L1, PD-1, CD40L, CD25, ZAP-70. Within the resting T cell cluster, all three T cell subsets co-incubated with TEX, DEX or PBS, show minimal or no changes in transcriptional activity as does GAPDH, which is equally highly expressed in controls and after co-incubation with exosomes. To further compare changes in expression levels of individual genes induced by TEX vs DEX in different T cell subsets, a supervised heat map was constructed, in which the selected 24 genes were grouped according to the molecular pathways they regulate (Fig. 4B). Again, resting T cells were minimally affected by co-incubation with TEX or DEX. Among the activated T cell subsets, DEX induced higher transcriptional changes than TEX, especially in genes involved in the inhibitory (IL-10, TGF-β, CTLA-4, PD-1, PD-L1) and signaling (Zap70, CD40L, CD25, CD26) pathways. Expression levels of genes regulating adenosine receptors and ectonucleotidases, CD39 and CD73, were not up-regulated following co-incubation of T cells with DEX or TEX. Interestingly, activated Treg were less susceptible to transcriptional changes mediated by TEX than the other two T cell subsets. The supervised heat map suggests that exosomes exert differential effects on genes involved in molecular pathways operating in activated in T cells. Effects of TEX on human CD4 + CD39 + Treg. Given that Treg were previously shown by us to respond to TEX by in vitro expansion and increase in suppressor functions 6 , translational profiles of CD4 + CD39 + Treg were compared to those of CD4 + Tconv or CD8 + T cells after their co-incubation with TEX. A heat map was constructed which displays mean Δ Ct values for 27 genes (IL-8, JAK3 and STAT3 were added) following co-incubation with TEX of resting or activated CD4 + Tconv, CD8 + T and Treg cells obtained from all three donors (Fig. 5). High Δ Ct values (in red) denote increased mRNA levels relative to PBS controls after exposure to TEX, and as expected, activated T cells, especially activated Treg, show positive Δ Ct values for nearly all tested genes. There were some notable gene changes in T cells co-incubated with TEX: the expression levels of COX-2 and IL-10 were increased in all subsets of resting and activated T cells, and more genes were up-regulated in CD4 + CD39 + Treg than in CD4 + Tconv. Results in this heat map, combining mRNA measurements for all three T cell donors, are consistent with the waterfall plots shown in Fig. 3 for one representative donor. Compared to other T cell subsets, CD4 + CD39 + Treg had broader and higher transcriptional activity after co-incubation with TEX. This suggests that activated Treg are more susceptible to TEX-mediated regulation of mRNA expression levels than CD8 + T cells or CD4 + T conv. Exosome interactions with immune cells. 
To further investigate cellular interactions responsible for exosome-induced changes in the gene expression profile of immune cells, we labeled TEX with PKH26 dye and monitored their uptake by T cells, B cells and monocytes isolated from human peripheral blood. Image analyses using an Amnis Image Stream cytometer showed that CD14 + monocytes and CD19 + B cells readily took up and internalized PKH26 + TEX during 24 h of co-incubation. Surprisingly, resting or activated Treg (or conventional CD4 + and CD8 + T cells; data not shown) did not internalize TEX even after 72 h of co-incubation (Fig. 6). These results indicated that in T cells, TEX internalization was not necessary for delivery of signals that result in changes of gene expression, and suggest that surface-mediated receptor-ligand interactions might be sufficient for inducing the observed changes. Functional analyses of T cells co-incubated with TEX. To demonstrate that TEX-induced changes in the transcriptional profile of activated T cells have functional consequences, we activated normal CD4 + Tconv with anti-CD3/CD28 Abs in the presence of IL-2 and after co-incubation with TEX, determined expression levels Table 1. Analysis of the factors (T cell type, T cell activation and exosome source (TEX vs. DEX) that could influence variation in gene expression upon co-incubation with exosomes a . a The three-way ANOVA analysis of mean gene expression for the 24 genes measured in resting and activated T cell subsets (CD4 + Tconv, CD8 + T cells and CD4 + CD39 + Treg) obtained from three different donors. All T cells were co-incubated with TEX, DEX or PBS. In comparing effects of TEX vs. DEX, mean Ct values for each factor are evaluated over all other factors.*p < 0.05; **p < 0.01; ***p < 0.001. (MFI) of the CD69 protein (an activation marker) on the surface of these cells by flow cytometry. As shown in Fig. 7A, TEX significantly (p = 0.0005) down-regulated expression levels of the CD69 protein in activated CD4 + Tconv, suggesting that TEX interfered with T cell activation. Viewed in the context of the above-presented evidence for the elevated expression levels of genes encoding proteins involved in suppression such as COX 2 , CTLA-4, Fas, FasL or TGF-β in activated CD4 + Tconv co-incubated with TEX (see Fig. 5), we surmise that TEX selectively enhanced mRNA expression and its translation into inhibitory proteins which interfered with CD4 + T cell activation as evidenced by a decrease in CD69 protein levels. These data are consistent with our previous reports of TEX inducing immune suppression in activated T cells (13)(14)(15). Focusing on effects exerted by TEX on Treg, we examined expression of proteins involved in the adenosine pathway, which is used by Treg to operate suppression (28,29). The Anova analysis of our data indicated that exosomes induced significant change in expression levels of the genes involved in the adenosine pathway ( Table 1). The levels of mRNA encoding CD39 and CD73, CD26 and adenylate cyclase-7 were down-regulated upon co-incubation of resting Treg with TEX or DEX (Fig. 2). To determine whether these exosome-induced changes in mRNA expression levels translated into protein changes in Treg, we next co-incubated the same TEX with freshly-isolated resting CD4 + CD39 + Treg in the presence of exogenous ATP. We examined: (a) changes in expression of CD39 on the Treg surface and (b) adenosine production by these Treg. As shown in Fig. 
7A,B, co-incubation of Treg with TEX significantly increased expression levels of CD39 and adenosine production by these cells. It also increased expression levels of intracytoplasmic CD79 in these cells (data not shown). Our data suggest that TEX-mediated down-regulation of mRNA coding for adenosine pathway genes in Treg translates into a burst of enzymatic activity leading to immunosuppressive adenosine production and thus enhanced suppressor functions. . Effects of TEX on protein expression and functions of T cells. In (a) down-regulation of CD69 protein expression on the surface of responder CD4 + Tconv after co-incubation with TEX. Activated CD4 + Tconv were co-incubated with TEX (10 ug protein) produced by the PCI-13 cells or with PBS for 12 h. The CD69 expression levels on CD4 + Tconv were then determined by flow cytometry (MFI) and were converted into MESF units based on calibration curves established with fluorescent calibration beads. The bar graphs show data (mean values ± SD) from 3 independent experiments performed with CD4 + Tconv obtained from different normal donors. The asterisks indicate p values at p < 0.0005. In (b) changes in expression levels of CD39 protein on the surface of resting CD4 + CD39 + Treg co-incubated with TEX produced by the PCI-13 cell line or DEX. The exosomes were used at the concentration of 10 ng protein/ assay. Exogenous ATP was added as described in Methods. Flow cytometry (right) shows up-regulation of MFI for CD39 in a representative experiment, and the bar graph summarizes results of three experiments performed with Treg obtained from different donors. In (c), Production levels of 5′ AMP, adenosine and inosine by resting CD4 + CD39 + Treg co-incubated with TEX produced by the PCI-13 cell line. The data are from one of two experiments performed in the presence of exogenous ATP. The analyte levels were measured by mass spectrometry as described in Methods. Discussion Our earlier studies of human T cells co-incubated with TEX or exosomes isolated from plasma of patients with cancer showed that these nanovesicles down-regulated CD3ζ and JAK3 expression in primary activated T cells and mediated Fas/FasL-driven apoptosis of activated CD8 + T cells 3,[13][14][15]25 . TEX also promoted proliferation of CD4 + Tconv and their conversion into CD4 + CD25 high FOXP3 + CD39 + Treg 6,12 , which co-expressed IL-10 and TGF-β , CTLA-4, granzyme B/perforin and effectively mediated immune suppression [26][27][28][29] . In experiments reported by us and others, TEX were also shown to interfere with functions of NK cells and monocytes 3,[30][31][32][33] . These in vitro studies of suppressive effects of TEX on functions of human immune cells are supported by in vivo studies in mouse models, where TEX were shown to suppress anti-tumor immune functions and promote tumor progression 34,35 . In aggregate, these data suggest that TEX represent a mechanism used by tumors to escape from the host immune system. We suspected that TEX could serve as the vehicle responsible for inducing changes in mRNA expression levels in T cells. To study exosome-induced alterations in mRNA of responder T cells, we used a model system comprising isolated subsets of human primary T cells co-incubated with TEX which were exclusively derived from cultured tumor cells. Exosomes produced by cultured human dendritic cells (DEX) originated from normal, non-cancerous hematopoietic cells. 
This is an allogeneic model system, in which T cell responses could be biased in part by their alloreactivity with exosomes carrying MHC molecules. Also, it is an artificial system in which exosomes derived from cell lines rather than exosomes isolated from human body fluids are used. In our hands, the initial co-incubation experiments performed with plasma-derived exosomes from patients with cancer and normal donors (data not shown) gave inconsistent results, which appeared to be exosome-donor related, presumably because exosomes obtained from plasma are mixtures of vesicles originating from many different cells. Hence, we resorted to the in vitro model system for TEX and DEX, in which the source and characteristics of exosomes were well defined and uniform. By TEM and NanoSight, these extracellular vesicles fit with the definition adopted for exosomes 1,16 . A highly sensitive method was needed for reliable detection of changes in gene expression levels using mRNA extracted from a small number of primary T cells co-incubated with exosomes. This was especially important when working with CD4 + CD39 + Treg, which represent < 5% of human circulating CD4 + T cells 27 , and were available in limited quantities. The subset of CD4 + CD25 hi FOXP3 + Treg co-expressing CD39 is commonly present in the circulation of patients with cancer and is referred to as inducible (i) Treg 26,29 . We used a qRT-PCR method developed by Mitsuhashi et al. that was previously successfully applied to the analysis of human leukocyte functions 36,37 . As previously reported, an increase in the Ct values of less than two-fold was often statistically significant using this method, especially for abundantly expressed genes 36,37 . Using the model, we expected to gain evidence for a direct, target-cell specific transfer of molecular signals delivered by TEX that initially involves mRNA synthesis and/or translation and ultimately leads to functional dysfunction of immune cells such as occurs in cancer 9,10 . Exosomes are known to deliver miRNA species to cells 23,38 , and TEX derived from cultured glioblastoma cells have been reported to be able to modify the mRNA expression profile of the recipient fibroblasts 22 . In view of these reports, we expected that changes in mRNA expression levels would be selective, that they would be distinct in CD4 + Tconv vs. Treg and that DEX, serving as surrogates for exosomes derived from non-malignant cells, would induce different mRNA profile changes in T cells than TEX. In particular, we expected to show that TEX primarily induced alterations in expression of genes regulating immune suppression. Instead, we found that TEX and DEX similarly modulated mRNA expression levels, inducing decreases in resting and increases in activated T cells. Changes in expression levels of immunoregulatory genes such as COX 2 , IL-10, CD39, CD73, PDL-1 or CD26 were significant in T cells co-incubated with TEX or DEX. The transcriptional changes induced by exosomes were not restricted to any specific mRNA species but were evident in multiple genes regulating inhibitory, apoptotic, signaling/co-stimulatory or adenosine-associated pathways (Fig. 4B). In activated T cells, these changes were quantitatively somewhat smaller upon co-incubation with TEX than DEX. 
The multivariate analysis of the data generated with cells of the three different donors identified factors that significantly influenced mRNA gene expression levels in target cells exposed to exosomes as: (a) the presence/absence of exosomes; (b) the T cell activation level; and (c) the type of responding T cells. In contrast, the exosome source (TEX or DEX) was not a significant discriminating factor in the model. In addition to cellular origins of exosomes, their interactions with the target cell may be critical for information transfer. Depending on the nature of the target cell, exosomes may be readily or not so readily internalized 39 . While phagocytic cells rapidly take up exosomes, and in cultured human brain microvascular endothelial cells, green fluorescent protein (GFP)-labeled exosomes can be seen in the cytosol within hours of co-culture 22 , our results with PKH26-labeled TEX showed that T cells, even activated T cells, do not internalize TEX (Fig. 6). Therefore, we concluded that in T cells, exosomes deliver signals to receptors present on the cell surface, which ultimately result in alterations of the mRNA profile. In contrast to B cells and monocytes, which internalized exosomes and enabled transfer of miRNAs, Treg co-incubated with TEX even for 72 h did not internalize exosomes. In the absence of the cytosolic protein/ nucleic acid transfer, cell surface signals delivered by TEX to T cells were translated into alterations in mRNA expression levels, which clearly had functional consequences, as shown by down-regulation of the CD69 protein expression on the surface of activated CD4 + Tconv cells or increased adenosine production by resting Treg co-incubated with TEX (Fig. 7C). Different subsets of activated T cells seemed to respond differently to TEX, and the heat map in Fig. 4 shows that TEX induced quantitatively and qualitatively distinct effects in CD4 + Tconv than in CD4 + CD39 + Treg: the Δ Ct values for nearly all 27 genes examined were higher in activated Treg than in other T cells, especially in activated CD4 + Tconv. This finding suggests that activated Treg (an equivalent of induced Treg or pTreg in humans) may be more sensitive to TEX-mediated effects than other T cells. Also, the gene profile of activated CD4 + Tconv co-incubated with TEX indicates low expression levels of genes regulating immune suppression, e.g., COX2, CTLA-4, Fas, FasL, TGF-β (Fig. 5). Our data do not indicate whether TEX modulate mRNA synthesis or mRNA translation into proteins. However, given that low expression levels of these and other genes in activated CD4 + Tconv co-incubated with TEX correlates with the significantly lower MIF of the CD69 protein on the surface of these cells, we suggest that TEX inhibit activation of CD4 + Tconv by promoting translation of the genes encoding inhibitory proteins. TEX-mediated effects on Treg were distinct from those observed in CD4 + Tconv. In resting Treg, TEX induced higher CD39 expression and adenosine production, while downregulating mRNA expression levels of the genes regulating this immmunosuppressive pathway (Figs 2 and 7B,C). Considering that resting Treg need to be activated or induced to efficiently mediate suppression, TEX appear to be able to deliver such activating signals, leading to increased CD39 and CD73 expression and production of adenosine in resting Treg. 
In contrast, in activated Treg, where gene transcription and translation are likely to be efficient, co-incubation with TEX or DEX induced up-regulation in expression of the same immuno-suppressive genes. This observation suggests that TEX also exert distinct effects on resting vs. activated Treg. Given our previous functional data on TEX-mediated suppression in immune cells 3,[13][14][15] , we speculate that TEX-induced up-regulation of the inhibitory gene expression levels in activated Treg promotes their rapid translation into inhibitory proteins. Co-incubation with TEX increases levels of critical immunoinhibitory proteins, such as TGF-β , IL-10, COX-2 as well as CD39, CD73 and adenosine production. Our ex vivo studies of iTreg in the peripheral circulation and tumor sites of patients with HNSCC illustrated significant overexpression of CD39 and CD73 ectoenzymes in these cells 12,26 . Because plasma of these patients contains elevated levels of exosomes, including TEX, relative to NCs plasma 12,24 , it is tempting to associate this iTreg phenotype with TEX-mediated effects. Overall, our studies provide evidence for differential exosome-mediated alterations in gene expression levels in resting vs activated T cells and support the role of TEX in differential modulation of gene expression and T cell functions in CD4 + Tconv vs Treg. Peripheral blood mononuclear cells (PBMC). Buffy coats obtained from normal volunteers were purchased from the Central Blood Bank of Pittsburgh. Mononuclear cells were recovered by centrifugation on Ficoll-Hypaque gradients (GE Healthcare Bioscience), washed in AIM-V medium (Invitrogen, Grand Island, NY, USA) and immediately used for experiments. Isolation of the peripheral blood T-lymphocyte subsets. T cell subsets were isolated via an immunoaffinity-based capture procedure, using Miltenyi beads as previously described 26 . Negative selection to isolate CD4 + T cells was followed by the separation of CD4 + CD39 + and CD4 + CD39 neg T cells using anti-CD39 Ab-coated Miltenyi beads by AutoMACS. The purity of the isolated cells was determined by flow cytometry. The isolated T cell subsets were either directly used for experiments (resting T cells) or activated by incubation in the presence of anti-CD3/anti-CD28 antibody (Ab)-coated beads and IL-2 (150 U/ml) for 4 h or overnight, depending on the experiment. To confirm activation, cells were harvested, stained for CD69, and the frequency of CD69 + T cells as well as CD69 expression (MFI) on the cell surface were determined by flow cytometry. The MFI values were converted into MESF units, based on fluorescent intensity curves generated with calibration beads. Isolation of exosomes. Exosomes were derived from: (a) supernatants of the head and neck squamous cell carcinoma (HNSCC) cell line, PCI-13 maintained in a long-term culture 40 . This cell line served as a source of TEX; (b) supernatants of human dendritic cells (DC) cultured from monocytes isolated from PBMC by adherence to plastic and incubated in the presence of IL-4 and GM-CSF 41 for 4 days. These supernatants were used as a source of DC-derived exosomes (DEX). The DC cultured from plastic-adherent monocytes were > 90% CD40 + CD83 + CD86 + DR + by flow cytometry. Media used for cell cultures contained FCS which was ultracentrifuged at 100,000 for 3 h to deplete it of bovine exosomes. 
Exosomes isolated from other HNSCC cell lines (PCI-1, PCI-30) as well as exosomes isolated from plasma of patients with HNSCC and of normal donors as previously described 24 were also used in preliminary experiments to establish their effects on mRNA transcription and on inhibition of T cell functions (data not shown). Supernatants were routinely concentrated in a Vivacell prior to exosome isolation 42 . Exosomes were isolated as described by us previously 24 . Briefly, differential centrifugation (1.000 xg for 10 min at 4 °C and 10,000 xg for 30 min at 4 °C) was followed by ultrafiltration (0.22 μ m filter; Millipore, Billicera, MA, USA) and then size-exclusion chromatography on a A50 cm column (Bio-Rad Laboratories, Hercules, Ca, USA) packed with Sepharose 2B (Sigma-Aldrich, St. Louis, MO, USA). The exclusion volume fractions were collected, ultracentrifugated (100,000 xg for 2 hr at 4 °C), and pellets were resuspended in phosphate buffered saline (PBS). Protein concentrations of exosome fractions were determined using a BCA Protein Assay kit as recommended by the manufacturer (Pierce, Thermo Scientific, Rockford, lL-61105, USA). Characterization of isolated exosomes. Prior to co-incubation with T cells, isolated exosomes were evaluated for morphology by transmission electron microscopy (TEM), particle distribution and size in a NanoSight instrument and biological activity by flow cytometry to demonstrate their ability to down-regulate NKG2D expression in isolated human NK cells as previously described 24 . TEM of isolated exosomes was performed at the Center for Biologic Imaging at the University of Pittsburgh as previously described (24). Briefly, freshly-isolated exosomes were put on a copper grid coated with 0.125% Formvar in chloroform. The grids were stained with 1% (v/v) uranyl acetate in ddH 2 O, and the exosome samples were examined immediately. A JEM 1011 transmission electron microscope was used for imaging. Effects of TEX on CD39 protein expression levels and adenosine production in Treg. CD4 + CD39 + Treg were isolated from PBMC obtained from NCs, placed in wells of 96-well plates at the concentration of 10 6 cells /well and co-incubated with TEX (10 ng protein/well) in the presence of exogenous ATP (20 nM) for various time periods. Control wells contained TEX or Treg alone. Supernatants were collected and processed for mass spectrometry as previously described 12 . Cells were harvested and stained for expression of CD39 and CD73 proteins by flow cytometry as described above. Statistical analysis. The mRNA expression data were not normalized to GAPDH. Expression of this gene was variably altered by TEX, and GADPH was treated as any other gene. Paired analyses compared changes in mRNA expression levels in T cells co-incubated with or without exosomes. To display the data and illustrate mRNA expression levels or changes in mRNA expression levels, unsupervised heat maps were constructed. Clusters were identified by agglomerative hierarchical clustering with a complete linkage. Analysis of variance was conducted to test for effects of the following factors on mRNA expression levels: (a) the T cell phenotype (CD8 + , CD4 + , CD4 + CD39 + ); (b) the T cell activation status (activated, resting); and (c) the exosome source (tumor = TEX and DC = DEX). Initially, the interaction between activation and the source was evaluated and found to be not significant at p > 0.05. Thereafter, only additive effects were tested. 
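To make the statistical workflow described above concrete, the sketch below shows one way of building an unsupervised, complete-linkage clustered heat map of ΔCt values and running the three-way ANOVA (cell type, activation status, exosome source). The data-frame layout and file name are assumptions for illustration, not the authors' original code.

```python
# Hedged sketch: clustered heat map of dCt values and three-way ANOVA.
# Assumes a long-format table with one row per gene x experimental condition.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.formula.api import ols

# hypothetical columns: gene, cell_type, activation, source, delta_ct
long_df = pd.read_csv("delta_ct_long.csv")

# Unsupervised heat map: genes as rows, conditions as columns,
# agglomerative clustering with complete linkage as in the original analysis.
wide = long_df.pivot_table(index="gene",
                           columns=["cell_type", "activation", "source"],
                           values="delta_ct")
sns.clustermap(wide, method="complete", metric="euclidean", cmap="RdYlGn_r")
plt.show()

# The activation x source interaction can be screened first and, if not
# significant (p > 0.05), dropped in favour of additive effects only.
with_inter = ols("delta_ct ~ C(cell_type) + C(activation) * C(source)",
                 data=long_df).fit()
print(sm.stats.anova_lm(with_inter, typ=2))

additive = ols("delta_ct ~ C(cell_type) + C(activation) + C(source)",
               data=long_df).fit()
print(sm.stats.anova_lm(additive, typ=2))
```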
Fold differences in gene expression levels were calculated as 2^(ΔCt), where the mean ΔCt value for each gene in all tested T cell subsets was calculated as ΔCt = Ct(PBS) − Ct(TEX). Fluorescence intensity was expressed in MESF units, and the data are presented as mean values ± SD. P values < 0.05 were considered significant.
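The following minimal worked example applies the fold-change formula quoted above to made-up Ct values (chosen only for illustration); a positive ΔCt corresponds to higher expression after exposure to TEX, a negative ΔCt to down-regulation.

```python
# Illustrative Ct values only (not measured data):
# dCt = Ct(PBS) - Ct(TEX); fold change = 2**dCt.
ct_pbs = {"IL-10": 27.4, "COX-2": 25.1, "CD73": 22.0}
ct_tex = {"IL-10": 25.9, "COX-2": 23.8, "CD73": 23.1}

for gene in ct_pbs:
    delta_ct = ct_pbs[gene] - ct_tex[gene]
    fold_change = 2 ** delta_ct
    print(f"{gene}: dCt = {delta_ct:+.1f}, fold change = {fold_change:.2f}")
# IL-10 and COX-2 come out up-regulated (fold change > 1), CD73 down-regulated.
```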
Unused agricultural land in Russia - the significance and impact on the economy of agricultural production The problems of unused lands began to be dealt with almost immediately after the introduction of economic reforms. As a result of economic reforms, privatization in agriculture, the main agricultural enterprises - state farms, collective farms changed the form of ownership, and enterprises were also subdivided. As a result, a significant number of private enterprises, joint-stock companies of various types, agricultural cooperatives, peasant farms and private subsidiary farms appeared. Often, as a result of division and fragmentation, newly created enterprises found themselves without the appropriate material and technical base, and more often without financial resources. The next result of economic reforms was a spontaneous increase in the disparity of prices for agricultural and industrial products. As a result, it became impossible to carry out work on cultivation of agricultural crops on all available areas or to carry out work with violation of the technology of agricultural crops cultivation. As a result, the volume of acreage decreased, technologies and crop rotations were disrupted, and all this affected the financial condition of newly formed agricultural organizations. Another important reason that affects the withdrawal of agricultural land from turnover is the outflow of population from small settlements, they simply disappear. Land around these localities becomes “problematic” and is taken out of turnover. It should be noted that the next reason for the withdrawal of land from turnover is its low fertility and the presence of natural anomalies that require large material costs for the cultivation of agricultural crops. Therefore, the introduction of unused agricultural land into turnover provides for an increase in agricultural production, reducing unemployment in rural areas, and most importantly – ensuring state food security. Introduction This research is conducted under the theme "Methods of evaluating the effectiveness of involvement of unused agricultural land in agricultural use" proposed by the Informatization Department of the Ministry of Agriculture of Russia to higher education institutions under the Ministry of Agriculture of the Russian Federation, at the expense of means of the Federal budget under the order of the Ministry of Agriculture in 2021. The problem of unused agricultural land is very relevant. During the period of reforms (since 1990), about 131 million hectares of arable land were withdrawn from agricultural turnover, the area under grain decreased by 2.5 times, and under forage crops by 4.5 times. This problem has been reflected in the research of Russian researchers since the very beginning of reforms in the Russian economy, including in agriculture. It should be noted that in different periods, researchers considered the problem of the emergence of unused land from different points of view. 
In the initial period (until 2000), they considered from the point of view of the correctness and objectivity of the assessment of agricultural land for sale (Belova T.N., Uzun V.Ya, Arashukova V.P); in the period from 2000 to 2010, research was conducted in the direction of preparing recommendations for bringing the constituent documents that appeared during this time of various economic formations, in accordance with the requirements of the legislation of Russia (Zavorotin E.F., Nechaev V.N., Barsukova G.N., Chemerichko A.V.); since 2010, research has been aimed at studying the reasons for withdrawing agricultural land from turnover, justifying the need to enter land into agricultural turnover, research and development of technologies that ensure the introduction of unused land with minimal costs and effective use of technologies in the future (Nikiforova E.O., Bondarenko O.V., Khairbekov A.U., Mironova A.V., Liskin I.V., Afonina I.I.). In their research, the authors set the task to determine the main reasons for the appearance of unused or abandoned agricultural land, the impact of these reasons on the withdrawal of these lands from turnover and the need to enter unused land into turnover. Materials and methods The research on this topic is based on the materials of the Federal State Statistics Service on agriculture, socio-economic indicators of Russian regions, from statistical collections for 2002-2019, using research of leading Russian scientists. The study was performed using normative, statistical, expert assessments, program-target, economic and mathematical methods. Results and discussion. In order to increase the efficiency of agricultural production, increase its profitability, and ensure food security of the state, there is a need to involve unused agricultural land in agricultural turnover. For a comparative analysis of production volumes and availability of agricultural land, consider the data presented in table 1. The table was developed by the authors using statistical data [1]. Data in table 1 shows that the total cultivated area between 2000 and 2020 dropped to 5040 thousand ha if sown area in agricultural organizations during this period decreased by 21654 thousand ha, in personal subsidiary plots (PSP), peasant farms (PF), private households increased by 16614 thousand ha. But the main decrease in acreage was allowed in the period from 1990 to 2000 and amounted to 33035 thousand ha. In agricultural organizations, the decrease was 41096 thousand ha, in PSP, PF, personal farmsteads increased by 8061 thousand ha. [1] The presented material shows that there was a redistribution of acreage between agricultural organizations of collective ownership on the one hand and PSP, and PF on the other. However, since 1990, 38075 thousand ha of acreage have been withdrawn from agricultural turnover. The decrease in acreage in crop production directly affects the animal husbandry subsector. Thus, the acreage under forage crops has decreased by 14464 thousand ha over the past 20 years, i.e. almost twice. As a result, the number of cattle during this period decreased by 9393.8 thousand heads, including cows by 4778.4 thousand heads [1]. To justify the reasons for the withdrawal of agricultural land from agricultural turnover, consider the dynamics of changes in the crop area in the context of Federal districts of Russia, the data are presented in table 2. In different Federal districts, the situation with the preservation of the land fund is different. 
In the Central FD, the decrease over twenty years was 1354 thousand ha or 0. After analyzing the data from table 2 and based on our own research, we can conclude about the reasons for the withdrawal of agricultural land from turnover. 1. In our opinion, the main factor influencing the withdrawal of agricultural land from turnover is the bioclimatic potential (BCP). 2. The second, no less important factor is the economic downturn in the country's economy, especially in agriculture. 3. The third factor was the lack of appropriate types of agricultural machinery that can work effectively on small areas and the ability of private agricultural organizations to purchase this equipment. 4. Incorrect methodological approach to privatization in agriculture. The idea of unprofitability and low profitability of agricultural production in large areas was put forward. Let us consider in more detail the influence of bioclimatic potential on the efficiency of agricultural production, crop production, and on the dynamics of changes in acreage. A simple definition of the bioclimatic potential is the following: a complex indicator that characterizes the number of positive temperatures during the growing season, the amount of precipitation and moisture reserves in the soil, natural soil fertility, taking into account humus reserves, soil structure and mineral element reserves in the soil for a particular region. Serious research of the BCP was conducted by domestic and foreign scientists, such as V.Ya. Uzun [7,8] [14], et al. [15][16][17]. In our study, we will consider the impact of BCP on the dynamics of changes in acreage, on the yield of grain crops, depending on the bioclimatic potential of a region. For comparison, we will take two regions of the Southern Federal district with different indicators of BCP, as the Federal district that increased acreage since 2000, and two regions from the Central Federal district that allowed a decrease in acreage. In the first case, it is the Krasnodar territory and the Volgograd region, in the second -the Kursk region and the Kostroma region [18]. The results of the analysis are shown in Fig. 1. We will analyze the indicators using indexes. For the "basic" indicators, we will take acreage of the regions in 2000; grain yield in the Krasnodar territory in 2000. From the data presented in Fig. 1 Since 2010, there has been an increase in acreage at a rate of 0.4% per year. The yield, despite a high BCP, is significantly inferior to the grain yield in the Krasnodar territory, by 2.7 times. However, there is a steady trend towards increasing yields over the past eighteen years, with an annual yield increase of 8.4%. Further, in my research, I would like to deduce the dependence of changes in acreage on the bioclimatic potential. The results of the study are shown in Fig. 2. According to the data from Fig. 2 we can draw the first conclusion that the increase in acreage is observed only in Federal districts with BCP over 137 points (the exception is the Far Eastern FD). In Federal districts with a BCP below 137 points, a decrease in acreage is allowed, and the lower the BCP, the greater the decrease in acreage, both in % terms and in absolute values (Northwestern and Siberian FD). For a more accurate analysis of the dependence of changes in acreage on BCP, consider the dynamics of changes in acreage by region. 
For this purpose, the dynamics of changes in acreage in 63 regions of Russia that have different bioclimatic potential and are actively engaged in agricultural production are considered and analyzed. The results of the study are presented in table 3. of the total decrease in acreage for the specified period. Acreage increased in the Belgorod and Voronezh regions of the Central FD, in the regions of Southern and North Caucasus FD, with the exception of the Astrakhan region, the Kabardino-Balkarian Republic, the Karachayevo-Cherkessian Republic, the Republic of North Ossetia -Alania, also decreased the acreage in the Kaliningrad region of the Northwestern FD. 2. With BCP of 110-126 points, the increase in acreage was 32.9%, and the decrease in acreage was already 21.4% of the total decrease in acreage. This BCP range includes the regions of the Central FD, the Volga FD, and the Far Eastern FD. 3. With BCP of 100-110 points, there was only a decrease in acreage and accounted for 46.4% of the total decrease in acreage. Regions with this BCP indicator are in every Federal district except the Southern FD. For example, the following can be cited: Ivanovo region (BCP -100), a decrease of 196.8 thousand ha, the Republic of Bashkortostan (BCP -109.4), a decrease of 791.5 thousand ha [19]. 4. With BCP of 86-100 points, there was an increase in acreage in the Amur region (Far Eastern FD) [20] -19.8%, the decrease was 30.5%, mainly in the regions of the Ural and Siberian FD. Conclusions Land in regions with a low BCP (86-110 points) is withdrawn from agricultural turnover. The share of these regions in the total amount of unused land is about 80%. It should be noted that despite the decrease in acreage, including under grain and forage crops, the production of the main types of agricultural products is not reduced. For example, grain production increased 1.85 times over 19 years and reached 121200 thousand tons, while the area under grain crops decreased by 805 thousand ha during this period. Milk production increased by 1.3 times, with a decrease in the number of cows almost twice, with a decrease in the area under forage crops by 59.5%. This became possible as a result of an increase in grain yield by 1.7 times, and cow productivity by 2.7 times. The main criterion that determines the need to introduce abandoned land into agricultural circulation is to ensure food security in Russia. If we achieve the indicators that ensure the state food security and the effective management of agricultural production, we can develop the export of agricultural products. When introducing abandoned land into agricultural circulation, it is necessary to consider the costs of commissioning and the expected effect of agricultural land being put into turnover. Therefore, the means, timing and directions for introducing abandoned land and methods for evaluating the effectiveness of the land being introduced are important. It is equally important to take into account the agricultural machinery park used for this purpose, its structure, and the possibility of using this Park in agricultural production technologies when evaluating the efficiency of land use [21].
Deep Learning Approach with LSTM for Daily Streamflow Prediction in a Semi-Arid Area: A Case Study of Oum Er-Rbia River Basin, Morocco : Daily hydrological modelling is among the most challenging tasks in water resource management, particularly in terms of streamflow prediction in semi-arid areas. Various methods were applied in order to deal with this complex phenomenon Introduction Water resources are of great importance to ensure the world's needs, including for agriculture, industrial and domestic usage, as well as for other environmental systems.However, the availability of water resources is being limited in many countries around the world, especially in arid and semi-arid regions due to climate change, population increase, and irrigation expansion that affects socio-economic development and food security.In Southern Mediterranean regions, the degree of water scarcity and drought conditions may also increase the pressure on water resources [1,2].In addition, these areas are characterized by low precipitation with irregular spatiotemporally distribution and high evaporation.This controls the streamflow process that present a paramount component for understanding and monitoring the quality and quantity of the water supply [3,4]. Therefore, improving streamflow prediction in arid and semi-arid regions is a challenging task for sustainable water resources management and watershed planning, because it has provided valuable statistics to decision-makers for the assignment of accessible water for different purposes, particularly for the agriculture sector [5].This is the case in the Oum Er-Rbia river basin, which serves as one of the heartbeats of hydroelectric and irrigation networks in the kingdom [6].Successful and efficient water resources management require accurate and timely streamflow information.In this context, numerous methods have been used to estimate the streamflow at gauged or poorly gauged watershed involving empirical, physical, conceptual, and data-driven methods [7].Empirical models rely only on the information based on existing data, without taking into account the characteristics of hydrological processes [8].Physical and conceptual models may be two of the best hydrological models to simulate streamflow [9], but they need considerable parameters and require more effort to construct [10].Therefore, data-driven approaches, including machine learning and deep learning, have been revolutionary tools in the watershed planning process.They have largely improved streamflow simulation with no requirement of physical and underlying processes [11]. For machine learning, Support Vector Machine (SVM), regression trees and Artificial Neural Networks (ANNs) are the popular tools utilized to build prediction models, which have definitely improved their ability to solve regression issues [12,13].For streamflow simulation, several studies have been carried out to thoroughly evaluate the methods mentioned.For example, Hadi and Tombul [14] indicated that ANN performs better than SVM to predict streamflow on a daily scale with different physical characteristics.On the other hand, Parisouj et al. 
[15] revealed that machine learning models achieve favorable performance, especially the SVR, in a daily and monthly time step in different climatic zones.Besides, the reason behind the higher accuracy in each model is due to the input feature and training data, or to the model structure that can affect the selection of the accurate one.Indeed, traditional machine learning algorithms have a simple structure and less data requirement, but ANN and SVR are quite inefficient to capture the series' information in the input data, which is required for handling with the sequence variable [16]. To overcome the potential limitations of machine learning techniques, the use of deep learning, particularly for time series data, provides higher accuracy [17].Deep learning is a growing field with various studies that have been performed for time-series predictions.Recently, one of the famous models used in this field is the Recurrent Neural Networks (RNNs), considering the structure of RNN networks which has a solid aspect of sequence architecture that allowed information to preserve [18].However, because of its structure, this neural network's computation is slow and difficult to process long sequences, and it is incapable of dealing with the vanishing gradient challenge.Long-Short-Term-Memory (LSTM), a powerful RNN architecture, was developed to address vanishing gradient issues [19].Due to such capacity, LSTM has been applied by many researchers for streamflow prediction, as the streamflow information is associated with the last values over extended periods of time [20][21][22].Apaydin et al. [23] indicated that LSTM gives a better performance with supportive accuracy that make them useful for streamflow modeling, compared to ANN and simple RNN models that show an inferior reaction.Nonetheless, there is a difference between simulating streamflow within a daily and monthly time extent.The LSTM model is more applicable for daily prediction, while for monthly modeling ANN got the most accurate results.For example, Cheng et al. [24] found that, compared to ANN, the LSTM model displays a better results performance in daily prediction and less accuracy in a monthly scale because of the absence of an extensive monthly training dataset.Both approaches, ANN and LSTM have diverse scenarios, but they are similar and have advantages that make the long and short predictions more powerful predictive and effective, but with a priority to LSTM [25].The quantity of hydrological and meteorological data plays a critical role in predicting streamflow, as long as the improvement of the higher potential model is related to the higher aspect of the data [26].Several studies noted the effect of feeding the LSTM model with various meteorological data conditions on the performance of the model [27] through the streamflow process.For example, Choi et al. [28] successfully adopted the LSTM network to evaluate the composition of input variables on a daily scale.Moreover, due to the luck of management, the observed data may be disordered and insufficient for training the model, which affects the LSTM efficiency.However, the implantation of the LSTM model on a gauged or poorly gauged river basin seems reliable due to the training data that presents the backbone of the LSTM structure [29].Coi et al. 
[30] demonstrate the ability of the LSTM model to predict streamflow without hydrological observations.The findings revealed that the model is highly depending on the amount of available data.Therefore, recent methods focus on overcoming the problem by proposing different inputs.Kieran et al. [31] trained the LSTM model to predict streamflow using hydrological and meteorological satellite data as well as the antecedent observations of streamflow.Similarly, Rahimzad et al. [32] explored the capabilities of LSTM compared with different data-driven techniques based on historical streamflow and precipitation time series.The results revealed that the LSTM model is a robust network to distinguish sequential data series behaviors in streamflow modeling. Furthermore, the input data of LSTM requires a three-dimensional array as an input: input samples, sequence length, and input features.The relationship between the input features and the sequence length may have influences on model performance, as the third dimension represents the number of features in the input sequence considering the previous time steps as input variables [33].The impact of sequence length with the input data on the streamflow prediction performance needs to be developed. Although there are studies that aim to solve hydrologic problems in Morocco, such as in Berrchid city where machine learning models were used to forecast groundwater quality [34], but there are scarcely any studies that evaluate streamflow prediction using deep learning techniques.Thus, we found it to be a new debate area and interesting task to work on. The designed experiments in this study will focus on evaluating the reliability of the LSMT network to simulate daily streamflow in a semi-arid mountainous watershed in Morocco, using meteorological data and remotely sensed information.However, the importance of the LSTM model can take advantage of information among time-series data, and performs better in predicting streamflow variability that represent a tendency over time.Thus, in order to elucidate the significance of the LSTM model in streamflow prediction, we explore the capability of the model within the time splitting zone using different approaches, as well as the effect of sequence length selection in the model performance.Moreover, due to the lack of data, this study also evaluates the impact of antecedent values of streamflow by comparing the performance of the model with two different forms of inputs.In the first instance, we initially presented the study area and the data used.This is followed by describing the model architecture and the methodology used.The last section is about the discussion of the training validation and testing results of different approaches that experiment with the effect of sequence length and input features in the LSTM's performance of daily streamflow prediction. 
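Because the network expects a three-dimensional input of the form (samples, sequence length, features), the daily records have to be windowed before training. The short sketch below illustrates, under assumed array names and synthetic values, how such a 3D array could be built from daily rainfall, temperature, and snow-cover series; it is an illustration of the reshaping step only, not code from the study.

```python
import numpy as np

def make_sequences(features: np.ndarray, target: np.ndarray, seq_len: int):
    """Slide a window of length `seq_len` over the daily records and return
    (X, y), where X has shape (samples, seq_len, n_features) and y holds the
    streamflow of the day following each window."""
    X, y = [], []
    for t in range(seq_len, len(features)):
        X.append(features[t - seq_len:t])   # the previous `seq_len` days of inputs
        y.append(target[t])                 # next-day streamflow
    return np.asarray(X, dtype=np.float32), np.asarray(y, dtype=np.float32)

# Illustrative daily inputs: rainfall, temperature, snow cover area (columns).
rng = np.random.default_rng(0)
daily_inputs = rng.random((3652, 3))        # ~10 years of synthetic daily records
daily_flow = rng.random(3652)               # synthetic streamflow target

X, y = make_sequences(daily_inputs, daily_flow, seq_len=30)   # e.g., TS = 30 days
print(X.shape, y.shape)                     # (3622, 30, 3) (3622,)
```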
Case Study Oued El Abid is the largest affluent basin of the Oum Er-Rbia river, with an area of 7975 km 2 located in the center of Morocco between the meridians 6 • 15 W and 6 • 30 W, and the parallels 32 • N and 32 • 5 N.This basin is a mountainous area with a significant water resource potential, feeding the Bin El Ouidane dam to cover the agriculture activities [35] and refreshing the groundwater downstream of Tadla plain [36].This study area, as a typical South Mediterranean basin, is characterized by a semi-arid climate with an average of approximately 480 mm/year as well as strong spatiotemporal variations in precipitation.Western movements and orographic effects play a crucial role to generate rainfall.The rainy period of the year lasts for 6 months (November to April) and the dry period lasts 4 months (June to September) with a start of watering in October, a maximum in January, and a minimum in July.Depending on the high altitude of the Atlas Mountains and a yearly scale volume primarily accumulated during the spring season, which corresponds to the snowmelt, the flow gradually shifts from rain to snow.The variation of temperature is notably influenced by the high elevation involving snow occurrence.The temperature drops to −9 • C in the winter and rises to 41 • C in the summer [9].The Oued El Abid river is made up of two main sub-basins, the sub-basin of Ait Ouchene and that of Teleguide.Our study focuses on the Ait Ouchene watershed (Figure 1, Table 1). Data The simulation of streamflow requires timely datasets.In this study, daily hydroclimatic datasets (rainfall, streamflow, temperature, and snow cover area) from 2001 to 2010 were used: In situ observations of streamflow and rainfall were provided by the Oum Er-Rbia hydraulic Basin Agency (ABHOER) [37].The regional rainfall of the Ait Ouchene watershed was represented by the average from the gauges situated within the same subbasin.The variation of daily measured streamflow and daily rainfall values is shown below (Figure 2).Due to the lack of ground measurement of snow depth, remote sensing was the main solution to estimate snow occurrence, especially over large mountainous basins.The daily snow cover time series (SCA) at a spatial resolution of 500 m were available from the National Snow and Ice Data Center (NSIDC) using Terra/MODIS (Moderate-Resolution Imaging Spectroradiometer) satellite data, MOD10A1 version 6 [38,39].MODIS was chosen as a baseline for producing SCA, because it is reliable and provides a good streamflow simulation, according to Ouatiki et al., and it has been already studied and tested in many basins [9,32,33].Additionally, the lapse rate approach was used to generate the daily temperature data at a rate of 0.56 • C per 100 m of elevation.[40]. Long Short-Term Memory (LSTM) LSTM network is a particular variety of recurrent neural networks (RNNs) that was developed by Hochreiter et al. 
[19], and has been applied by many researchers due to the specific design that overcomes the long-term dependency problem faced by RNNs.[41].The structure of LSTM depends on three basic conditions: the cell state that defines the current long-term memory of the network, the output at the prior point known as the hidden state, and the input data in the current time step [42].Thus, the architecture of LSTM can control how the information in a sequence of data comes through three special gates: the forget gate, the input gate, and the output gate (Figure 3).The first step in the process is the forget gate (Equation ( 1)).The decision is taken through a sigmoid layer.Then, the input gate (Equation ( 2)) determines what value should be added to the cell state, taking into account the previous hidden state and the new input data.This step has two parts: the input gate layer that decides which values will update, and the tanh layer (Equation ( 3)).The previous cell state C t-1 is then updated into the new cell state by combining the two layers (Equation ( 4)).The output gate (Equation ( 5)), which determines the new hidden state, is the last phase.To decide which components of the cell state should be generated, it is important to run the sigmoid layer (Equation ( 6)).The mathematical formulas of the model structure are: Methodology The Python-based TensorFlow open-source software package and Keras were used to create the LSTM model for this study.The process used is illustrated in Figure 4, and is divided into four main steps: feature selection (a), data pre-processing (b), hyperparameters tuning (c), prediction and evaluation (d). Feature Selection In this study, we created two input scenarios to explore the sensitivity of the LSTM model in this region.First, rainfall (R), temperature (T), and snow cover area are (SCA) used as default inputs (scenario 1: LSTM).The second input scenario was generated by adding lagged data, conditions, and information on indicators providing the historical point of reference for the next steps.Thus, it will be used to assess the achievement of the effect and outcomes expressed in the model.The number of time lags of the streamflow was determined by using the Partial Autocorrelation Function (PACF) [43].Days 1, 2, and 3 were significant and had an impact on the streamflow at t = 1 day.Three lag days of rainfall, temperature, and SCA were considered to select the model features.However, to find the best subset of features, we used the Forward Feature Selection (FFS) algorithm [44], which evaluates each individual feature by incrementally adding the most relevant ones to the target variable (streamflow) [45].The subset of features that were found to be significantly correlated with the streamflow are presented in Table 2 (scenario 2: FFS-LSTM).This study identified additional concerns regarding the accuracy and reliability of the LSTM model.Accordingly, it is typical to have a mechanism to evaluate the overall performance of the model.Thus, splitting the input data into training (train LSTM) validation (evaluate LSTM) and testing (confirm the results) is the apparent and fast procedure to limit the model from overfitting and to compare its effectiveness in streamflow prediction.Although with time-series data, it is crucial to consider the back values that will be used for testing and training [24], hence we split the data using three approaches: • Approach 3: with limited data samples, k-fold cross-validation is the most widely used method to assess the model's 
performance. It divides the dataset into k equal-sized parts; one of the k parts is used as the testing set while the model is trained on the remaining k-1 folds [43]. The cross-validation parameter is the number of splits the dataset is divided into, typically between 2 and 10 depending on data availability. In this study, we tested different values of cross-validation (CV). The most appropriate value was CV = 5, with 80% of the data as the training set (7 years), 20% of the training data as the validation set, and 20% for testing (2 years) in each fold (Figure 5). In addition, we separated the target variable (streamflow) from the input variables (rainfall, snow cover area, and temperature). The last preprocessing step is data transformation, which plays a critical role in the performance of neural network models when the features are on a relatively similar scale and close to normally distributed. One of the most popular scaling methods is normalization, which rescales the input variables to a standard range between zero and one [46]; the same range is used for the output, matching the range of the activation function (tanh) on the output layer of the LSTM. The MinMaxScaler transformation (Equation (7)) subtracts the minimum value of each feature and divides by its range, both estimated from the original training data. The minimum and maximum values are estimated on the training set, and the resulting scale is then applied to the training, validation, and testing sets. Each feature is scaled individually so that it falls within the range of zero to one defined by the training set.

x' = (x − min(x)) / (max(x) − min(x))    (7)

where x' is the scaled value and x is the original value.

Hyper-Parameter Tuning

The configuration of neural networks remains difficult because there is no definitive procedure for developing the algorithm [47]. We therefore explored different configurations to select the parameter values that control the learning process and avoid overfitting [42,44]. In general, neural networks have numerous hyperparameters that are tuned to minimize the loss function. The LSTM network used in this study is composed of three layers: the input layer, the hidden layer, and the output layer. Moreover, we used a regularization method named dropout to reduce overfitting and improve model performance. For the batch size, we used 32 for the first scenario and 10 for the second scenario. The number of epochs was 250, with early stopping after 10 epochs without improvement on the validation set. The hyper-parameters selected for the model are summarized in Table 3 [48,49]. The LSTM model takes a 3D input (num_samples, num_timesteps, num_features) [50]. The sequence length was evaluated using five time steps: 2, 10, 20, 25, and 30 days (denoted TS2, TS10, TS20, TS25, TS30) of input data used to drive the LSTM network to predict the next day.
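A minimal Keras sketch of the network described above is given below for orientation. The batch sizes (32 and 10), the 250 epochs, and the early stopping after 10 epochs follow the text; the number of hidden units and the dropout rate are placeholders, since Table 3 is not reproduced here, and the synthetic arrays merely stand in for the scaled inputs. The gate equations (1)-(6), which did not survive extraction, correspond to the standard LSTM formulation implemented by tf.keras.layers.LSTM.

```python
import numpy as np
import tensorflow as tf

n_timesteps, n_features = 30, 3            # e.g., TS30 with rainfall, temperature, SCA
rng = np.random.default_rng(1)

# Synthetic stand-ins for the Min-Max-scaled training/validation arrays (Equation (7)).
X_train = rng.random((500, n_timesteps, n_features)).astype("float32")
y_train = rng.random(500).astype("float32")
X_val = rng.random((100, n_timesteps, n_features)).astype("float32")
y_val = rng.random(100).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_timesteps, n_features)),
    tf.keras.layers.LSTM(64),              # hidden-unit count is a placeholder (see Table 3)
    tf.keras.layers.Dropout(0.2),          # dropout rate is a placeholder (see Table 3)
    tf.keras.layers.Dense(1),              # next-day streamflow
])
model.compile(optimizer="adam", loss="mse")

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=10, restore_best_weights=True)

model.fit(X_train, y_train,
          validation_data=(X_val, y_val),
          epochs=250,                      # as in the text, with early stopping
          batch_size=32,                   # 32 for scenario 1, 10 for scenario 2
          callbacks=[early_stop],
          verbose=0)
```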
Model Evaluation Criteria

Various criteria are typically used to evaluate model performance in streamflow prediction. In deep learning, these metrics quantify the difference between the observed streamflow and the simulated output over the validation and testing periods. In our study, we evaluated the LSTM performance using the Root Mean Squared Error (RMSE, Equation (8)), the Mean Absolute Error (MAE, Equation (9)), the Kling-Gupta Efficiency (KGE, Equation (10)), and the coefficient of determination (R², Equation (11)):

RMSE = sqrt( (1/n) Σ (y_i − y'_i)² )    (8)

MAE = (1/n) Σ |y_i − y'_i|    (9)

KGE = 1 − sqrt( (r − 1)² + (α − 1)² + (β − 1)² )    (10)

R² = [ Σ (y_i − ȳ)(y'_i − ȳ') ]² / [ Σ (y_i − ȳ)² Σ (y'_i − ȳ')² ]    (11)

The root-mean-square error is among the most frequently used metrics in prediction and regression tasks. RMSE is the square root of the average squared difference between the true values and the predicted values. Here, y_i (m³/s) is the observed streamflow for each data point and y'_i (m³/s) is the predicted value. RMSE ranges from 0 to ∞, and a perfect prediction gives RMSE = 0 [51].

MAE is a popular metric whose error score is expressed in the same units as the predicted value; it is calculated as the average of the absolute differences. MAE does not weight errors of different magnitude differently; the score increases linearly with the error. MAE ranges from 0 to ∞, and MAE = 0 indicates a perfect prediction. |y_i − y'_i| is the absolute difference between the observed and predicted values [52].

The Kling-Gupta Efficiency (KGE) addresses certain weaknesses of the Nash-Sutcliffe Efficiency (NSE) and is increasingly used to calibrate and validate models. It was originally developed to compare predicted and observed time series and can be decomposed into the contributions of the mean, the variability, and the correlation to model performance. As with the NSE, an optimal score of KGE = 1 indicates a perfect match between simulations and observations [53]. Different researchers use positive KGE values as indicators of "good" model simulations, whereas negative KGE values are regarded as "poor"; KGE = 0 is implicitly used as the dividing line between the two. In KGE, r is the Pearson correlation coefficient between observed and simulated values, α is the ratio of the standard deviation of the simulations to that of the observations, and β is the ratio of the simulation mean to the observation mean [33,46,54].

The coefficient of determination represents how much of the observed variation is explained by the model and ranges from 0 to 1. A score of 0 indicates no association, whereas a value of 1 indicates that the model fully explains the observed variation [10].

Results and Discussion

Three approaches were used in this study to assess the LSTM model's effectiveness, adopting random (approach 1 and approach 2) and automatic (approach 3) data-splitting methods. The purpose of designing different datasets is to explore the impact of training series covering different time periods on the hydrological process, where year-to-year changes in hydroclimatic conditions cause significant variations in streamflow [55]. Moreover, the effect of input features and sequence length on model behavior was examined to verify model reliability. The statistical metrics of LSTM and FFS-LSTM at TS2, TS10, TS20, TS25, and TS30 using the three approaches, comparing training, validation, and testing, are shown in Tables 4-7.
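The four criteria can be computed directly from paired observed and simulated series. The following sketch implements them as defined in Equations (8)-(11); the KGE components (correlation, variability ratio, bias ratio) follow the formulation of [53], and the example values are arbitrary.

```python
import numpy as np

def rmse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def mae(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.mean(np.abs(obs - sim)))

def kge(obs, sim):
    """Kling-Gupta Efficiency: correlation (r), variability ratio (alpha)
    and bias ratio (beta); KGE = 1 for a perfect match."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = np.std(sim) / np.std(obs)
    beta = np.mean(sim) / np.mean(obs)
    return float(1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2))

def r_squared(obs, sim):
    """Coefficient of determination as the squared Pearson correlation (0 to 1)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.corrcoef(obs, sim)[0, 1] ** 2)

# Example with a short synthetic record (m3/s)
observed = [5.0, 7.2, 6.1, 9.8, 4.3]
predicted = [4.6, 7.9, 5.8, 9.1, 4.9]
print(rmse(observed, predicted), mae(observed, predicted),
      kge(observed, predicted), r_squared(observed, predicted))
```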
Evaluation of Model Performance Using Random Split The quantitative analysis of the model behavior (Table 4) using approach 1, illustrates that LSTM (scenario 1) achieved extremely high RMSE and MAE values, shallow values for R 2 , and negative KGEs.However, the performance of LSTM increases with the number of time steps in predicting results.Thus, the LSTM network hardly remembers the sequence using 2 days of data as input for predicting the next day's flow.This is mainly due to the memory challenge in watersheds involving the snow and, thus, the lag between the rainfall and streamflow peaks.The results produced under the first input scenario in the validation and testing periods demonstrate that the model is unable to simulate daily streamflow in this study region, where the higher values of R 2 were found at TS = 30 (0.75 in training, 0.46 in validation and 0.45 in testing), due to the memory data feature of LSTM that was insufficient to feed the model.In addition, the achievement of the model decreased during testing using approach 2 (Table 5), considering the start and the end of the hydrological year from 1 September 2001 to 31 August 2007 as the training samples, where the best statistical data found at TS = 25 days with R 2 = 0.71, 0.51 and 0.34 after training, validation, and testing, respectively.This is mainly due to the meteorological input data and the strong spatiotemporal variability of rainfall.When defining an LSTM network, the network assumes more samples and requires the number of time steps and features to be specified.A time step presents one point of observation in a sample, and a feature is one observation at a time step.Thus, adding lagged data during training could lead LSTM to catch how water is lagged and moved into the watershed, which is interesting for improving model performance.This notion is demonstrated in the second scenario (FFS-LSTM).The values of RMSE, MAE, KGE, and R 2 in learning, validation, and predicting sets indicated upstanding streamflow simulation capacity of the LSTM model at TS10 (approach 1) and TS20 (approach 2).Thus, there is a change in KGE, and R 2 distribution values of LSTM between different periods at TS2, TS25, and TS30.This finding indicates that the generalization of the LSTM model may considerably compromise the appearance of an extreme event, where the LSTM memory cell holds the previous streamflow values to predict the current streamflow.However, in the first approach, when the sequence length was 25 days the model tended to overfit in the testing phase, with a significant decrease in KGE.For both scenarios, the RMSE and MAE decrease with the time step.These results show that the model has the capability to catch the long-term streamflow components, as well as the reliability of LSTM using the second scenario with both approaches. 
Figures 6-9 shows the best hydrographs and scatterplots of the observed versus predicted daily streamflow during the testing phase.The green line represents the observed daily streamflow, and the blue and the purple lines represent the prediction results from LSTM and FFS-LSTM scenarios, respectively.Moreover, the time series plots in Figures 6 and 7 show the testing results of the first approach, while Figures 8 and 9 show the same results adopting the second approach.From the figures, it appears that the results from both scenarios using the first approach (70% of training data) had a significantly similar hydrographic form as the second approach (considering the hydrological year).This is mainly due to the period of the training and validation datasets that were almost identical. The performance of the model using the first input scenario LSTM almost simulates the low flow data.However, at some points it underestimated the flow occurrence which is vital for water supply planning and the preservation of a quantity of water for irrigation.Moreover, as the extreme peak volumes are essential to monitoring flood and disaster events, the second input scenario almost captured the peak flow events using approach 1 with a flow volume of 223 m 3 /s (Figure 7).The maximum volume caught using approach 2 was 241 m 3 /s (Figure 9).In the scatterplots (Figures 6b and 8b), the points between the simulated and observed streamflow show that the model has underestimated the actual streamflow, since most of the points were under the diagonal line 1:1; this should be attributed to the size of the time-series data.The randomly split approaches have improved their capacity to predict streamflow using the LSTM model with the second scenario, and fail using the first scenario.However, with FFS-LSTM, it is important to evaluate the training data period, since the model performance may depend on the input data.Therefore, we used an automatic splitting approach named cross-validation to evaluate the model performance in different training times. Evaluation of Model Performance with Automatically Split The results of the third approach are summarized in Tables 6 and 7.Only the scores of sequence lengths TS30 for scenario 1 and TS10 for scenario 2 are shown, comparing the splitting training period on the performance of the model.The outcomes using FFS-LSTM appear to be much more effective than those of LSTM (scenario 1).It produced higher R 2 , NSE, and KGE in iterations 2 and 5. 
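For readers who wish to reproduce the layout of approach 3, the sketch below builds five contiguous folds over the 2001-2010 daily index with scikit-learn's KFold (shuffling disabled so that each block stays chronological) and holds out 20% of each training block for validation. The exact fold boundaries used in the study may differ slightly; the dates printed here are only indicative.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import KFold

# Daily index spanning the study period (2001-2010); with contiguous 5-fold
# blocks, each ~2-year block is used once for testing (CV1 ... CV5).
dates = pd.date_range("2001-01-01", "2010-12-31", freq="D")
indices = np.arange(len(dates))

kf = KFold(n_splits=5, shuffle=False)
for fold, (train_idx, test_idx) in enumerate(kf.split(indices), start=1):
    # A further 20% of the training block is held out for validation.
    n_val = int(0.2 * len(train_idx))
    val_idx, fit_idx = train_idx[-n_val:], train_idx[:-n_val]
    print(f"CV{fold}: test {dates[test_idx[0]].date()} to {dates[test_idx[-1]].date()}, "
          f"{len(fit_idx)} training days, {len(val_idx)} validation days")
```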
Lower scores set in CV = 1 due to the learning data that was taken from the end of the series.Decreasing time steps with fewer features impairs the ability to carry informative signals through time, which makes the prediction for the test data less efficient and probably makes the prediction erroneous.The performance at 10 and 30 days in approach 3 with both scenarios increases with the number of iterations, due to the splitting of the learning period that has an influence on the memory of the LSTM network.The bold values of RMSE (Table 6) were compared with the results performance of different time steps (Figure 10a).As may be seen in Figure 10a, the values of RMSE decrease with the number of time steps at CV5, which shows the superiority of the long-term storage memory (LSTM) unit state.However, the performance values of FFS-LSTM (Figure 10b) were quite stable and not affected by the splitting time zone of the data.In addition, at TS = 25 the testing period from 25 January 2007-11 November 2008 has a high RMSE with 10.16 m 3 /s, due to the chronological data splitting that was a discrepancy at this time step.With the results performance using scenario 1, without adding lagged data as input, the model tends to overfit, thus at CV5 it was reliable.In addition, when using the lagged data, there is a variation of KGE values during the training, validation and testing at the first four folds.Thus, the high values of RMSE in the CV5 are due to the period of training the dataset and the testing data, where the five-fold of the cross-validation was tested with a year that has a high flow volume over 349 m 3 /s (Figure 2).During the testing phase at TS10 using FFS-LSTM at CV1, CV2, CV3, CV4 and CV5, the values of RMSE are 8.31 m 3 /s, 7.05 m 3 /s, 4.97 m 3 /s, 4.78 m 3 /s and 12.40 m 3 /s, respectively, with the maximum observed streamflow of 232.88 m 3 /s, 294.78 m 3 /s, 72.86 m 3 /s, 75.90 m 3 /s and 349.06 m 3 /s, respectively.The flow volume variation influences the changes in the RMSE outputs.Similar to the first scenario, the high RMSE is related to the flow regime. For better visualization of the model's performance, only the best hydrograph using the FFS-LSTM scenario at iteration 5 is shown in Figure 11.Compared to the previous approaches, the values of the prediction are almost the same due to the size of the learning set, which is nearly identical to approach 1 and approach 2. However, the outputs values of FFS-LSTM in approach 2 are slightly higher than that of approaches 1 and 3 in terms of peak streamflow values.Clearly, the model is not highly affected by the condition of the hydrological year in splitting as well as the splitting time.The prediction results of the FFS-LSTM model with the third approach overestimated the daily streamflow in the periods January 2009 and August 2010.Previous studies have reported the importance of the input sequences length on the storage capability of the basin, and the sensitivity of this hyperparameter over the overfitting predictions issues [30,33].In our study, the analysis of the sequence length over the three approaches has been improved to capture the dynamics of the daily streamflow prediction. 
Reliability of LSTM Model The upstanding results yielded by the FFS-LSTM scenario may be explained by the structure of LSTM, which has an intelligent architecture based on memory cells that retained valuable information over a longer period of time by serval memory cells that could filter and keep the data.The LSTM model's capacity to almost capture the peak of 241 m 3 /s was carried out into the prediction period.This is a powerful set that the LSTM model provides, with relatively high precision for streamflow in small volumes.Moreover, in our study, we also focused on the impact of the splitting time zone.The performance of the model has compared the random splitting over two approaches, using the end and the start of the hydrological year during the splitting that describes a time period of 12 months, and the chronological time splitting that was defined by the cross-validation using two scenarios.According to the results, almost all the low values were apprehended by the model using the three approaches.The overestimation of the model in the testing set was found mostly in the third approach (Figure 11), on account of the reliability of the LSTM model to the hydrological variables leading inputs (the learning data).Moreover, the achievement of the LSTM model in simulating streamflow using short-term datasets with the first scenario is less improved compared to the conceptual hydrological model's results, due to the data requirement by deep learning [9].In most of the studies that used the LSTM model as a setting in hydrological problems, the 3D inputs weren't a priority.In our study, we used the FFS method as a key solution to determine the optimal input combinations utilizing the optimum time step.Hence, in the process of developing the model, the historical input features play an essential role in the model achievement as well as the length of time steps.The results performed by FFS demonstrate the capability to choose the appropriate predictors when adjusting the sequence length, to avoid the issue of overfitting.In addition, the main problem that causes the overfitting is the data scarcity, with all the models tested used on a limited time series dataset.Hence, the LSTM model may not be sufficiently informed with the watershed hydrological processes that may have a difference between streamflow and rainfall.This discrepancy was described by Ávila et al. 
using hydrological models [56].It is worth highlighting that the LSTM model is capable of achieving good predictability performance with all approaches at TS10 and TS20.Based on these results, the selection of the appropriate 3D input combinations and time lag supports the LSTM model to be more reliable.The past histories of each variable with the time steps used in the model can affect streamflow prediction.It enables the model to capture the heading of the time-series dataset, demonstrates a powerful capacity of prediction, and empowers the memory process throughout the LSTM model.In this model, the structure used information about previous computations from specific previous steps to determine whether or not this instruction should be passed on to the next iteration.Since the LSTM model generates the data in numerous time steps, the input data are utilized to update a set of parameters in the internal memory cell states at each step during a training period.Memory cell states are only influenced throughout the prediction period by the input at a single time step and the states from the previous time step.However, machine learning methods such as the ANN model lack a chronological recall and presume that the model's inputs are independent of one another, making it impossible to detect temporal changes.As a result, the model's memory cells help the LSTM model better capture dataset trends and demonstrate its predictive power.However, the LSTM was unable to predict streamflow when using the default data as inputs.This demonstrates that the datasets were not enough to feed the model to capture the streamflow. Conclusions Accurate streamflow prediction has always been one of the primary concerns in watershed management.In this work, we studied the flexibility of the data-driven LSTM model on the streamflow prediction over a semi-arid region.Comprehensively, the LSTM model was tested based on three input conditions.It has been concluded that, the hydrological year (approach 2) and the time splitting zone (approach 3) does not significantly affect the performance of the model, where the accurate time step number is related to the selection of input feature.On the other hand the model shows upstanding performance in recording the streamflow time series using the Forward Feature Selection technique, compared with the default data as input features where the model shows a bad reaction.The FFS method of streamflow decomposition is a meticulous process that significantly improved prediction accuracy using the LSTM model. The outcomes of the analyses used in this study, illustrate the major issues connected with hydrological modeling studies, particularly the high connection between the LSTM design and the significant impact of input condition circumstances.However, in some of our findings, the model showed overfitting prediction issues due to the scarcity of useful information on ground data, which present the limitations of our study. In conclusion, the streamflow experiments carried out by LSTM model, learning from meteorological data and satellite data of the studied watershed were impressive.Yet, it is necessary to investigate the stability of the LSTM model, which would be our priority in future studies. Figure 1 . Figure 1.The geographical setting of the study area. Figure 3 . 
The architecture of Long-Short-Term Memory (LSTM), where σ is the sigmoid function, tanh the hyperbolic tangent, C_t−1 the previous cell state, h_t−1 the previous hidden state, x_t the input data, C_t the new cell state and h_t the new hidden state. The adding and scaling of information are represented by the vector operations (+) and (X), respectively.

Figure 4. Flowchart of the modeling process for the streamflow prediction model.

• Approach 1: splitting the data into 70% training, 15% validation, and 15% testing [5,25]. The learning period was set from 1 September 2001 to 14 December 2007, the validation period from 15 December 2007 to 24 April 2009, and the testing period from 25 April 2009 to 31 August 2010.
• Approach 2: splitting the data according to the hydrological year, which starts in September and ends in August. Six years for training (1 September 2001-31 August 2007), one year and six months for validation (1 September 2007-28 February 2009), and one year and six months for testing (1 March 2009-31 August 2010).

Figure 6. The hydrograph of observed and predicted daily streamflow of the LSTM scenario during testing (a) using approach 1 at TS = 30, along with the corresponding scatter plot (b).

Figure 7. The hydrograph of observed and predicted daily streamflow of the FFS-LSTM scenario during testing (a) using approach 1 at TS = 10, along with the corresponding scatter plot (b).

Figure 8. The hydrograph of observed and predicted daily streamflow of the LSTM scenario during testing (a) using approach 2 at TS = 25, along with the corresponding scatter plot (b).

Figure 9. The hydrograph of observed and predicted daily streamflow of the FFS-LSTM scenario during testing (a) using approach 2 at TS = 20, with the corresponding scatter plot (b).

Figure 10. Comparisons of model predictions for different time steps and different data splittings during the testing phase using approach 3: (a) LSTM, (b) FFS-LSTM.

Figure 11. The hydrograph of observed and predicted daily streamflow of the FFS-LSTM scenario during testing (a) using approach 3 (CV = 5) at TS = 10, with the corresponding scatter plot (b).

Table 3. Hyper-parameters used for training the LSTM network.

Table 4. Performance of the model scenarios for daily streamflow simulation using approach 1 (70% train; 15% valid; 15% test) with a variety of sequence lengths (number of time steps).

Table 5. Performance of the model scenarios for daily streamflow simulation using approach 2 (considering the hydrological year) with a variety of sequence lengths (number of time steps).

Table 6. Performance of the LSTM scenario for daily streamflow simulation using approach 3 (cross-validation) at TS = 30.
v3-fos-license
2020-08-08T13:07:38.436Z
2020-08-07T00:00:00.000
221035330
{ "extfieldsofstudy": [ "Medicine", "Biology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphys.2020.00908/pdf", "pdf_hash": "d9af9f0e10c4d5905561ec5ba61ffe45c27f67a6", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42248", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "sha1": "d9af9f0e10c4d5905561ec5ba61ffe45c27f67a6", "year": 2020 }
pes2o/s2orc
Methanolic Extract of Winter Cherry Causes Morpho-Histological and Immunological Ailments in Mulberry Pyralid Glyphodes pyloalis The effect of Withania somnifera a medicinal plant seed extract was tested against lesser mulberry pyralid, a potential pest of mulberry. The mulberry leaves were used for silk production in rural areas of northern Iran. The extract was administered orally by leaf dipping method in two lower (5%W/V) and higher (15%W/V) dosages to third instar larvae (<24 h) for biological assays and to fifth instar larvae (<24 h) for Physiological studies. The results showed formation of larvoids (Ls), larval-pupal intermediates (LPIs), pupoids (Ps) and pupal-adult intermediates (PAIs). The results showed increased larval duration by 1.7 and 2 folds in 5 and 15% treatment, respectively. Fecundity of resultant adults was decreased by 1.2 and 1.3 in 5 and 15% treatment, respectively. Except approximate digestibility (AD) and consumption index (CI) all other feeding indices showed reduction. The feeding deterrence was prominent at 15% (87%) and 5% showing 48% deterrence. Our enzymatic and non-enzymatic assessments upon treatment showed reduction in key components, except detoxifying enzymes. However, the activity of an important enzyme involved in cuticle hardening and immunity called phenoloxidase was reduced. We also investigated the histology of midgut for further analysis and found drastic changes in main cellular elements. Immunological changes following treatment was noticeable in reduced Total Hemocyte Count but surprisingly increased Differential Hemocyte Count. However, the hemocytes structure was extremely damaged. The reduced number of eggs in treated but survived adults indicated reduced ovaries, with vacuolization both in trophocytes and oocytes. The key chemical compounds showed reductions particularly at 15%. The present results are concomitant with few earlier studies on this medicinal plant and deserve further studies particularly in deriving key chemicals that alter metamorphosis similar to insect growth regulators. INTRODUCTION The chemical plant protection agent usage has turned into an environmental disaster reverting the biological sustainability and ecological order. Thus, chemical pesticides are considered a threat to useful fauna, contaminates agricultural products and human health (Nicolopoulou-Stamati et al., 2016;Carvalho, 2017). That is why scientists, particularity environmentalists are in favor of non-chemical methods, that are safer for human, animals and the whole environment (Liao et al., 2017;Kunbhar et al., 2018). The chemical pesticides need to be substituted by natural ones, and in this context plants could play a major role (Isman, 2000;Govindarajan et al., 2016). Plant extracts and essential oils have a wide range of actions including repellents, attractants, anti-feedants, being toxic to larvae and eggs or retard insect growth by disrupting hormonal balance (Sarwar and Salman, 2015;El-Sheikh et al., 2016;Silva et al., 2018). Winter cherry, W. somnifera L (Solanales: Solanaceae) is a local plant to east Mediterranean and south Asia (Parwar and Tarafdar, 2006) and has been used in traditional medicine (Bhattacharya et al., 2001). In Iran, this plant is better known locally as Panirbaad, grown only in Sistan and Baluchistan province, Khash (28 • 13 16 N 61 • 12 57 E) and Saravan (27 • 22 15 N 62 • 20 03 E) cities (Keykha et al., 2017). 
Antioxidant, anti-tumor, anti-inflammatory, anti-depressant, anti-anxiety, controlling blood sugar and this also effective on neural transmitters (Alam et al., 2012). Roghani et al. (2006) reported that this plant is useful for Parkinson disease. The major constituents that provide the winter cherry with the privilege in disease treatments, include withanine, withasomnine, somniferine, withaferins, withanolides, stigmasterol, and sitoinosides (Dar et al., 2015). Among the withanolides in this plant, withaferins A causes its inhibitory effect on cells and tumors is of great value in the pharmaceutical industry (Rai et al., 2016). Maheswari et al. (2020). Reported that Titanium dioxide (TiO 2 ) nanoparticles modified with W. somnifera and Eclipta prostrate root extract had more anti-cancer activity than other biologically modified samples. Studies have shown that the W. somnifera seed contains fatty acids such as linoleic acid, oleic acid, palmitic acids, stearic acid, 11,14,17-eicosatrienoic acid, and nervonic acid, which can have a significant effect on the treatment of psoriasis-like skin etiologies (Balkrishna et al., 2020). Glyphodes pyloalis Walker (Lep: Pyralidae) feed only on mulberry and it causes severe damage to the foliage. The larvae of this pest upon rolling the leaves eat the parenchyma, leaving only the veins. The insects in addition to direct loss are also responsible for indirect loss through transmission of viruses that are pathogenic to silk worm (Watanabe et al., 1988;Matsuyama et al., 1991). The winter cherry uses the withanoloid in the seeds as an anti-feeding and repellent against insects (Glotter, 1991). In addition psoralen and isopsoralen present in this seed plant act as anti-feedant and insecticide (Panwar et al., 2009). Various concentrations of winter cherry extract have resulted into mortality of adult rice weevil (Suvanthini et al., 2012). The growth inhibitory of seeds and roots of this plant has been reported on certain polyphagous pests, leading to inhibition of pupal and adult formation that effects of seeds were more severe than the roots of W. somnifera (Gaur and Kumar, 2017). The same authors also reported increased larval, pupal and adult duration after incorporation of root extract. Morphological and various growth disorders have been recorded in Spodoptra litura Fab (Lep: Noctuidae) (Gaur and Kumar, 2019).The prepupal treatment by root and seed extracts on S. litura and Peicalliaa ricini Fab (Lep: Arcttidae) showed that the seed extract exhibited mortality but not the root extract (Gaur and Kumar, 2020). Winter cherry extracts (aqueous suspension, ether and water) of roots, stems, leaves and fruits have been used against Callosobruchus chinensis L (Coleoptera: Chrysomelidae) adults where a 63.33% mortality have been observed using 10% ether extracts of its roots (Gupta and Srivastava, 2008). There have been certain work on mulberry pyralid using plant essential oils or extracts. The notable works which were investigated in our laboratory are as follows; the extract of medicinal plant known as sweet wormwood has shown deterrence, growth inhibition and affecting digestion. The energy reserves were also reduced compared to the control (Khosravi et al., 2011). Similarly, treatment of G. pyloalis larvae by a commercial product of neem (CIR-23, 925/96, 0.03% azadirachtin, India) showed significant anti-feedant activity, reduced nutritional indices. 
The effect was also profound in digestive enzymatic and energy reserves of this pest (Khosravi and Jalali Sendi, 2013). The mortality and the sublethal effect of Thymus vulgaris L. and Origanum vulgare L. essential on G. pyloalis Walker has been reported with the effects on some important enzymes (Yazdani et al., 2014). Similar activities were also found after treatment of G. pyloalis larvae with lavander essential oil (Yazdani et al., 2013). In order to have an understanding of insect behavior and physiology in response to their respective hosts, a basic knowledge of insect feeding indices are a prerequisite (Najar-Rodriguez et al., 2010). Plants produce a set of chemicals based on their needs; those of nutritional values and the second group is the chemicals that are considered as secondary metabolites meant for the purpose of defense against invaders (War et al., 2013). During the course of evolution, plants have adopted themselves with chemicals they produce (Kabir et al., 2013;Isman and Grieneisen, 2014;Rajapakse et al., 2016). As far as insect herbivory is concerned, the plant may act as anti-feedants, repellents, toxicants, growth regulators and may even alter their immunological strategies (Murillo et al., 2014;Selin-Rani et al., 2016). There are several reports on anti-feedants by plant products against insects (Ragesh et al., 2016;Ali et al., 2017;De Santana Souza et al., 2018;Fite et al., 2018;Kaur et al., 2019). Plant products also act as repellents (Abtew et al., 2015;Camara et al., 2015;Nasr et al., 2015;Niroumand et al., 2016) growth regulatory such as malformed larvae, pupae and adult (Gnanamani and Dhanasekaran, 2013;Jeyasankar et al., 2014;Vasantha-Srinivasan et al., 2016). Insects defend themselves by two methods against foreign bodies, a general method (i.e., cuticle and chemicals involve in defending insects) and the second one involving cellular and humeral responses (Beckage, 2011;Dubovskiy et al., 2016;Rahimi et al., 2019). The plant products are capable of indulging in both of them and thus weakening the insects and make them susceptible to various diseases and parasites (Veyrat et al., 2016;Gasmi et al., 2019). There has been rapid growth in botanicals research in the past 20 years, however commercialization of new botanical are lagging behind. The pyrethrum and neem (azadirachtin) are still the standard class botanical pesticides that are in use in many regions of the world. Botanical products may have the greatest impact in developing countries especially tropical regions where the results of available and chemical insecticides use expensive and are a treat to consumers (Isman, 2019). Therefore, the research for safer and cheaper methods of control via available resource with minimal side effects is most welcome in developing countries. Insect Culture The mulberry pyralid larval stages were handpicked the mulberry plantations in Rasht district (37 • 17 N, 49 • 35 E) northern Iran. The larvae were reared on tender mulberry leaves (shin ichinoise variety) in a rearing chamber (set at 24 ± 1 • C, 75 ± 5% relative humidity and 16:8 light:dark) in clear plastic jars (18 × 15 × 7 cm) with muslin cloth devised on the lid for aeration. The paired adults were released in pairing boxes. In order to feed the adults, a piece of cotton saturated with 10% honey solution was provided and for adults to lay eggs, mulberry leaf was provided. The leaves and the cotton wool soaked honey solution were changed daily. Methanolic Extraction of W. 
somnifera Seeds Fresh fruits of winter cherry were procured from Saravan district (27 • 22 15 N 62 • 20 03 E) Sistan and Baluchistan province south east of Iran. The seeds were dried at room temperature initially and then in the oven at 50 • C for 48 h. They were grounded by an electric grinder. Dried seed powder of winter cherry (30 g) was added to 300 ml of methanol 85% (Merck, Germany) and then stirred for 1 h on a stirrer. The resultant solution was transferred to 4 • C for 48 h and then stirred for additional hour. Then solution was filtered through Whatman filter paper (No.4) and rotary evaporated and a black residue yielded. The residue was dissolved in 10 ml methanol and used as stock (Warthen et al., 1984). Dilutions were made with distilled water using 0.1% Triton X-100 (Darmstadt Germany). Development Based on pretrial, two concentrations (i.e., 5 and 15%) were chosen and then used for bioassays. Circular leaf discs (6 cm in width) were drenched in related concentrations for 10 s. Control received methanol treated leaf discs in the same manner. For each treatment and control third instar larvae <24 h were used. This test was performed in 3 replications and each replication with 10 larvae. They were let to feed on treated food for 24 h after that, fresh leaves were provided to them. The insects were maintained at 25 ± 2 • C, 65 ± 5% RH, and 14:10 light: dark in an incubator. The duration of various stages was monitored and recorded until all control adults died. The number of eggs from treated and control emerged adults were counted daily. Nutrition To study nutritional values for treated and control insects the method of Waldbauer (1968) were adopted. For this purpose <24 h fifth instar larvae were maintained on treated and control diets for 3 days. The formulae were used; Approximate digestibility (AD) = 100 (E-F)/F, Efficiency of conversion of ingested food (ECI) = 100P/E, Efficiency of digested food (ECD) = 100 P/ (E-F), Consumption index (CI) = E/TA and Relative growth rate (RGR) = P/TA. A is the average of dry weight of larvae during the experiment, E is the dry weight of consumed food, F is the dry weight of produced feces, P the dry weight of the biomass of larvae and T is the duration of the experiment (4 days). Anti-feedant Bioassay This method was adopted from Isman et al. (1990). In this method which was based on choice tests, The 10 fifth instar larvae were fed for 24 h on leaf disks control and treated. After the end of experiment the leaf discs were removed and were analyzed by device (leaf-area-meter A3 Light box UK). The feeding deterrence index was calculated with the formula as follows: FDI = (C-T)/(C+T) × 100. Here, C is the amount of leaf eaten in untreated (control) leaves and T is the amount of leaf eaten by of insects on treated leaves (Isman et al., 1990). Biochemical Assays In order to estimate the amount of energy reserves like triglyceride, protein and glycogen, the whole body of treated fifth instar larvae and controls were first homogenized in an eppendorf vial with the help of a hand homogenizer in universal buffer (i.e., 50 mM sodium phosphate-borate at pH 7.1) and the supernatant was freezed at -20 • C until use. The procedure of Lowry et al. (1951) was implemented to measure the total protein. To 100 µL of which reagent, 20 µL of supernatant was added, and then the incubation was done for 30 min at 25 • C. The absorbance was recorded at 545 nm. The amount of triglyceride was measured using the assay kit of Pars Azmoon, Tehran, Iran. 
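The nutritional indices and the feeding-deterrence index defined above reduce to simple arithmetic on the measured dry weights and leaf areas. The sketch below computes them; the input numbers are placeholders rather than experimental data, and AD is written in the standard Waldbauer form 100(E − F)/E, since the divisor F in the extracted text appears to be a transcription slip.

```python
def nutritional_indices(E, F, P, A, T):
    """Waldbauer (1968) indices from dry weights:
    E = food ingested, F = feces produced, P = larval biomass gained,
    A = mean larval dry weight during the assay, T = assay duration (days)."""
    return {
        "AD (%)": 100 * (E - F) / E,      # approximate digestibility
        "ECI (%)": 100 * P / E,           # efficiency of conversion of ingested food
        "ECD (%)": 100 * P / (E - F),     # efficiency of conversion of digested food
        "CI": E / (T * A),                # consumption index
        "RGR": P / (T * A),               # relative growth rate
    }

def feeding_deterrence_index(C, T):
    """FDI (%) from leaf area eaten on control (C) and treated (T) discs."""
    return 100 * (C - T) / (C + T)

# Placeholder values (mg dry weight, cm2 leaf area) for illustration only.
print(nutritional_indices(E=120.0, F=45.0, P=18.0, A=22.0, T=4))
print(feeding_deterrence_index(C=9.4, T=1.2))
```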
The solution included a buffer (50 mM, pH 7.2), the chlorophenol (C 6 H 5 ClO 4 mM), 2 mM adenosine triphosphate (C 10 H 16 N 5 O 13 P 3 ), 15 mM Mg 2+ , 0.4 kU/l glycerokinase, 2 kU/l peroxidase, 2 kU/l lipoprotein lipase, 0.5 mM 4-aminoantipyrine, and 0.5 kU/L glycerol-3-phosphateoxidase. Ten microliter of the sample was incubated with 10 µL of distilled water and 70 µL of reagent for 20 min at 25 • C. The formulae see below was used to estimate triglyceride amount: mg/dL = optical density of the sample/optical density of standard × 0.01126. The reading was done at 545 nm (Fossati and Prencipe, 1982) in an ELISA reader (Awareness, United States). For measuring the glycogen, the entire larvae were drenched into 1 mL of 30% KOH. The sample tubes covered with aluminum foil and boiled for 30 min. The tubes were first vortexed and then placed on ice cubes. A 2 mL 95% ethanol was used in order to separate the glycogen from the solution. Samples were vortexed again and left on ice bag for 30 min. The centrifugation was performed at 13,000 g for 30 min. The pellets of glycogen thus formed were collected and mixed in distilled water (1 mL) and then vortexed. Samples of standard for glycogen were prepared in ascending order (0, 25, 50, 75, and 100 mg/mL) and then mixed with phenol (5%). The samples were incubated on ice bath for 30 min. Finally, the absorbance was read at 490 nm (Chun and Yin, 1998). The activity of α-amylase was estimated based on the method of Bernfeld (1955). The starch (1%) was used as substrate. The enzyme (10 µL) was incubated with Tris-HCl buffer (50 µL of 20mM at pH 7.1) and 20 µL of starch (1%) at 30 • C for 30 min. After addition of 100 µL Dinitrosalicylic acid (DNS) it was heated in water tab set at boiling point for 1 minute and then read at 540 nm. The method of Garcia-Carreno and Haard (1993) was used for determination of general proteases using 1% azocasein as a substrate for their activity. For this purpose, supernatant (10 µL) and buffer (15 µL) and 50 µL of substrate were incubated for 3 h at 37 • C in an oven. In order to stop the reaction 150 µL of 10% trichloroacetic acid was added. The blanks were prepared by addition of trichloroacetic acid to the substrate. The liquids were then kept in a refrigerator (4 • C) for 30 min, and then centrifugation was done at 13,000 g. Later, mixture of supernatant and NaOH, 100 µL each were transferred to ELISA plates and read at 440 nm. The activity of lipase enzyme was measured according to the method of Tsujita et al. (1989). Using 18 µL of the substrate p-nitrophenyl butyrate (50 mM) and then mixing it with midgut extract (10 µL) to which 172 µL of universal buffer solution (1 M) (pH 7) was included and then incubated at 37 • C and read at 405 nm. For estimation of α-glucosidase and β-glucosidase the method used by Silva and Terra (1995) was followed where 15 µL of the enzyme solution was incubated with 30 µL of p-nitrophenyl-α-glucopyranoside (5mM) a substrate for α-glucosidaseand p-nitrophenyl-β-glucopyranoside (5mM) as substrate for β-glucosidase, respectively. Then, to each of the solutions 50 µL of universal buffer (50 mM sodium phosphateborate pH 7.1) added and allowed to react for 10 min at 37 • C. Followed by observation at 405 nm. The general esterases activity followed the methods of Van-Asperen (1962). One whole gut was first homogenized in 1000 µL of 0.1 mM phosphate buffer (pH 7) which included Triton x-100 (0.01%), and solution was centrifuged at 10,000 g for 10 min at 4 • C. 
The microtubes were replaced with new microtubes phosphate buffer was added to each and observation was done at 630 nm. In order to determine glutathione s-transferase (GST) activity, the procedure of Habing et al. (1974) was incorporated using 1-chloro-2, 4-dinitrobenzene (CDNB) (20 mM) as the substrate. The homogenized larva with 20 µL distilled water was centrifuged at 12,500 g for 10 min at 4 • C. Fifteen microliter of supernatant and 135 µL phosphate buffer (pH 7) with 50 µL of CDNB were mixed with 100 µL of GST. Change of absorbance at 340 nm was recorded for 1 min in 9 s intervals at 27 • C. Phenoloxidase Activity Assay For measuring phenoloxidase activity, hemolymph and ice-cold sterile phosphate buffer saline (PBS) was used in a ratio of 10-90 µL, respectively. The L-DOPA (3, 4-dihydroxyphenylalanine) (10 mM, Sigma-Aldrich Co., United States) was used as the substrate for assaying this enzyme following the procedure of Catalán et al. (2012), with some modifications. Centrifugation of the samples was performed at 5,000 g (4 • C and 5 min). Then 50 µL of solution was mixed with 150 µL of the amino acid L-DOPA. The activity was calculated by division of absorbance with the amount of protein in hemolymph. However, the protein content was measured following the method of Lowry et al. (1951). The specific activity of phenoloxidase was recorded at 490 nm during the reaction. Histology of Larval Midgut and Adult Ovary The digestive system of G. pyloalis fifth instar larvae after being fed on treated leaves from third instar onwards were dissected out under a stereomicroscope (Olympus Japan) in isotonic ringer saline. The dissected digestive system were immediately fixed in Buin's fluid for 24 h. Then washed first in tap water and then distilled water. They were processed for dehydration in ethanol grades (30, 50, 70, 90% and then absolute), and the paraffin was used for embedding. The sections were cut at 5 µm thickness by a rotary microtome (Model 2030; Leica, Germany). Routine staining by hematoxylin and eosin (Merck) was used. Photos in control vs. treatments were taken whenever necessary under a light microscope (M1000 light microscope; Leica) equipped with an EOS 600D digital camera (Canon, Japan). The ovary of 2 days old adults were dissected in the similar way and processed as described for gut. However, the sections were cut longitudinally. Total and Differential Hemocyte Count (THC and DHC) After 48 h, the hemolymph of fifth instar larvae (in two concentrations used and controls) were collected from the first abdominal pro leg. For THC a Neubauer hemocytometer (HBG, Germany) was used. For this purpose, the larval hemolymph (10 µL) was mixed with 290 µL of anti-coagulant solution (0.017 M EDTA, 0.041 M Citric acid, 0.098 M NaOH, 0.186 M NaCl, pH 4.5) (Amaral et al., 2010). The DHC was counted by immersing the larvae in a hot distilled water (60 • C) for 5 min, after drying with blotting paper, the first abdominal pro leg was excised and a drop of hemolymph was released on to a clean slide and a smear was made using another slide. The air-dried smears were stained with 1:10 diluted stock Giemsa (Merck, Germany) for 14 min, then were washed in distilled water. The smears were dipped for 5 s in saturated lithium carbonate (LiCO 3 ) for differentiation of cytoplasm and nucleus and then washed again in distilled water for a few minutes. They were dried at room temperature and then permanent slide was prepared in Canada balsam (Merck, Germany). 
The cells were identified based on the morphological characteristics observed under a microscope (Leica light-microscope) (Rosenberger and Jones, 1960). Two hundred cells were randomly counted from four corners and a central part of each slide (Wu et al., 2016). Totally, 800 cells of four larvae were counted and the percentage of each cell type was estimated. The number of cells in controls was also simultaneously recorded. Phase Contrast Microscopy The hemolymph was collected from each incised larval proleg and was immediately mixed with anti-coagulant (0.186 M NaCl, 0.098 M NaOH, 0.041 M citric acid, and 0.017 M EDTA, pH 4.5). Five microliter of the solution was placed over a glass slide making a thin film using cover glass. The various hemocytes were identified by phase contrast microscopy, and photos were taken using in built camera microscope. Statistical Analysis All the data in relation to larva, pupal and adult duration were analyzed by one-way ANOVA (SAS Institute, 1997). Similarly, the data collected from non-enzymatic and enzymatic assays were also analyzed in the same way. All the means were separated using Tukey's multiple comparison test (p < 0.05). Enzymatic Assay The results demonstrated that, treatment of larvae with W. somnifera seed extract caused a significant decrease in α-amylase activity after 48 h (F = 49.44; df = 2, 8; P = 0.0002). Similarly, the activity of αand β-glycosidase was reduced significantly with 15% treatment after 48 and 72 h (0.046 ± 0.0026 and 0.029 ± 0.0052, respectively). At 5% treatment there was no change after 48 h but a reduction was followed 72 h later (0.061 ± 0.0102 and 0.066 ± 0.0053). The activity of lipase was also inhibited by 5 and 15% about 1.5 and 7 fold, respectively. The same trend was also followed in general proteases (Table 3). Detoxifying Enzymes The detoxifying enzymes including glutathione-S-transferase, α-naphtyl acetate and β-naphtyl acetate substrates were analyzed after treatment with 5 and 15% concentrations of W. somnifera seed extract which are depicted in Table 4. The overall results showed increase in these parameters (Table 4). However, the phenoloxidae activity decreased at 5 and 15% treatment after 24 and 48 h ( Table 4). Energy Reserves The amount of energy reserves like (protein, triglyceride) were decreased significantly in both concentrations after 48 and 72 h. While at 5% concentration the amount of glycogen decreased 1.5fold and that of triglyceride 1.8-fold (Table 5). Midgut The principal midgut cells in control is intact with large columnar cells and a prominent nucleus. The other prominent cells of midgut include goblet cells and regenerative cells. All the muscle layers are intact including longitudinal and circular ones. Peritrophic membrane and brush borders are also significantly clear (Figures 1a,b). In treatment 5% no clear differences between cells were observed and the midgut epithelium seemed to protrude inside gut epithelium giving it a dislodging surface. Nucleus were extruded from rupturing cells and densely stained in comparison to controls (Figures 1c,d). While in the treatment 15% a rupture of the cell membrane and some signs of necrosis in both nuclei and cytoplasm of the epithelial cells were observed. Large vacuoles were present in the epithelial layer of treated larvae. The principal cells were seen separating from their basement membrane. The overcrowding in the cellular structure making them not to be recognized into their counter parts in controls (Figures 1e,f). 
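The comparisons reported in this and the following sections follow the one-way ANOVA and Tukey test described under Statistical Analysis, which were run in SAS. As an orientation only, an equivalent workflow can be sketched in Python with scipy and statsmodels; the group values below are invented for illustration and are not the experimental measurements.

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical larval-duration values (days) for the control and the two extract
# concentrations; these numbers are illustrative, not the experimental data.
data = pd.DataFrame({
    "group": ["control"] * 5 + ["T5"] * 5 + ["T15"] * 5,
    "duration": [9.8, 10.1, 9.6, 10.0, 9.9,
                 16.5, 17.0, 16.2, 16.8, 17.1,
                 19.4, 20.1, 19.8, 20.0, 19.6],
})

groups = [g["duration"].values for _, g in data.groupby("group")]
f_stat, p_val = stats.f_oneway(*groups)          # one-way ANOVA
print(f"F = {f_stat:.2f}, P = {p_val:.4f}")

# Tukey's multiple comparison test at alpha = 0.05
print(pairwise_tukeyhsd(data["duration"], data["group"], alpha=0.05))
```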
DHC
After staining with Giemsa, and under phase contrast microscopy, five main cell types were identified: prohemocytes (Pr), plasmatocytes (Pl), granulocytes (Gr), spherulocytes (Sph), and oenocytoids (Oe), together with a subtype of plasmatocyte referred to as vermicytes. Treatment with 5 and 15% W. somnifera extract brought about changes in the numbers of Pls and Grs, which are the main immunity-related cells (Figure 3). After 24 h of treatment with 5 and 15% of the extract, the number of Pls increased significantly (F = 44.68; df = 2, 8; P < 0.0002). After 48 h there was no difference between the control and the 5% treatment, but in the 15% treatment the number of Pls increased (F = 54.04; df = 2, 8; P < 0.0001). The number of Grs increased after 24 h in both the 5 and 15% treatments. The increase was significant in both treatments compared with the controls, but there was no difference between the treatments (F = 153.80; df = 2, 8; P < 0.0001). After 48 h, the increase in the number of Grs was highly significant at the higher dosage compared to the control and the other treatment, and the 5% treatment was also significantly higher than the control but lower than the 15% treatment (F = 212.89; df = 2, 8; P < 0.0001).

Morphology of Hemocytes
As can be observed in Figures 4-8, drastic changes were seen in the morphology of the studied hemocytes after treatment of fifth instar larvae with W. somnifera methanolic extract. The pleomorphic Pls in controls were largely vermicytes, both under phase contrast microscopy and after Giemsa staining, with typical long cytoplasmic extensions and a prominent nucleus (Figures 4a,b). In the treatments, the following changes were seen: the cells showed numerous extrusions, giving them an irregular shape and making them difficult to identify (5% treatment, Figure 4c), and showed vacuolizations not normally observed in untreated cells (Figure 4d). The changes caused by the 15% treatment were notable in the loss of cytoplasmic extensions, vacuolization and gradual degeneration (Figures 4e,f). The Grs in controls were typically filled with abundant granules and had a prominent central nucleus (Figures 5a,b). In the treatments, however, the cells were degranulated, with extruding material giving them a bulging appearance in both treatments (Figures 5c-f). Normal spherulocytes appear circular, with a small but prominent nucleus and plenty of regular spherules (Figures 6a,b). After treatment the cells lost their integrity and became irregularly shaped, making identification difficult (Figures 6c-f). The oenocytoids are rare but large cells with plenty of inclusions; the nucleus is typically small compared with the cytoplasm and is located near the cell boundary (Figures 7a,b). In treated larvae these cells appeared vacuolated, with no nuclear boundary, and were irregularly shaped (Figures 7c,d; 5% treatment) or showed an extruded or disintegrating nucleus (Figures 7e,f). The prohemocytes are small cells whose prominent nucleus fills the cytoplasm (Figures 8a,b); after treatment with the extract the cells were vacuolated and the nuclei were smaller than in controls, or extruding bodies appeared at the cell surface, showing signs of disintegration (Figures 8c-f).

Morphology, Histology and Energy Reserves of Ovaries
The dissected ovaries of treated insects were about 2- and 3-fold smaller than those of controls at the 5 and 15% treatments, respectively.
The number of oocytes was lower than in controls (Figures 9a-c in control, Figures 9d,g in the 5 and 15% treatments). The histology of dissected ovaries of treated insects showed no sign of complete oocyte formation in the treatments compared to the control (Figures 9b,c). The changes were mostly seen as disintegration of epithelial cells without cell boundaries in the treatments and vacuolization of both trophocytes and oocytes (Figures 9e,f,h,i), with no sign of vitellin or chorion formation and with obvious oosorption seen in the gross morphology. The amounts of total protein, glycogen and triglyceride were reduced by the 15% treatment in comparison to the control and the low-dosage treatment (Figure 10).

FIGURE 2 | Mean percentage of plasmatocyte (Pl) and granulocyte (Gr) counts after treatment of fifth instar Glyphodes pyloalis larvae with 5% (T5) and 15% (T15) Withania somnifera methanolic extract compared to control (C). Mean ± SE; the same letters above bars indicate no significant difference (p = 0.05), according to Tukey's test.

FIGURE 3 | Total hemocyte count (THC) following treatment with 5% (T5) and 15% (T15) Withania somnifera methanolic extract compared to control (C) in fifth instar larvae of Glyphodes pyloalis after 24 and 48 h. Mean ± SE; the same letters above bars indicate no significant difference (p = 0.05), according to Tukey's test.

DISCUSSION
The extract of various parts of W. somnifera has been reported to act as a potent insect growth regulator (Kumar, 2017, 2018; Gaur and Kumar, 2019). The current study focused on the different effects of the methanolic seed extract of this plant on an important pest of mulberry. This pest is also suspected of transmitting viral infection to the silkworm. Treatment with this extract at the two dosages selected in pretrials caused extensive growth inhibition. The growth-inhibitory effect was prominent in the formation of larval-pupal-adult intermediates. In extreme cases we also noticed a third instar larva that could not shed its old cuticle. These morphological abnormalities might be related to chemicals in the extract that have already been reported, namely withanolides and withaferins (Trivedi et al., 2017; Gaur and Kumar, 2020). We therefore looked for possible reasons underlying these outcomes of treatment, and some aspects of nutrition in treated larvae were considered. The results indicated an adverse impact of the Withania extract on almost all measured nutritional indices. These adverse effects led to the various abnormalities, which are well documented for various plant extracts including Withania (Senthil-Nathan, 2006; Shekari et al., 2008; Hasheminia et al., 2011; Khosravi et al., 2011). However, our results were interestingly different from other studies in giving negative ECD, ECI, and RGR, which is indicative of extreme anti-feedant activity. The feeding deterrence of 48 and 87% recorded for the 5 and 15% treatments, respectively, clearly indicated this anti-feedant activity (Qi et al., 2003; Luo et al., 2005; Junhirun et al., 2018). This anti-feedant effect is due to the presence of withanolides such as withanolide A and withanolide D in this plant, so the insect is unable to efficiently convert food (Budhiraja et al., 2000).

FIGURE 4 | Effect of Withania somnifera seed extract on the morphology of plasmatocytes (Pls) in Glyphodes pyloalis in controls (a,b) and in treatments (c,d, 5%; e,f, 15%) (Bar = 10 µm).
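The negative ECD, ECI, and RGR values discussed above are easier to interpret with the index definitions in hand. The formulas below are the conventional Waldbauer-type nutritional indices and a standard feeding deterrence index; they are given here as an assumed reference, since the exact formulas used by the authors appear in their methods rather than in this excerpt.

```latex
% Conventional nutritional indices (Waldbauer-type), stated as an assumed reference.
% B = biomass gained, I = food ingested, F = feces produced, A = mean larval weight, D = duration.
\begin{align*}
\mathrm{ECI} &= \frac{B}{I}\times 100
  &&\text{(efficiency of conversion of ingested food)}\\
\mathrm{ECD} &= \frac{B}{I-F}\times 100
  &&\text{(efficiency of conversion of digested food)}\\
\mathrm{RGR} &= \frac{B}{A\,D}
  &&\text{(relative growth rate)}\\
\mathrm{FDI} &= \frac{C-T}{C+T}\times 100
  &&\text{(feeding deterrence; } C, T = \text{consumption on control, treated leaves)}
\end{align*}
% Negative ECI, ECD, and RGR simply mean the larvae lost weight (B < 0) while still feeding,
% which is consistent with the strong anti-feedant effect reported in the text.
```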
Digestion is a process of interaction between the enzymes produced by the midgut cells and the nutrients that those same cells absorb. The activities of the selected enzymes, including α-amylase, α- and β-glycosidase, lipase and general proteases, were affected and reduced, particularly 72 h post-treatment, which supports our earlier observation that the reduced nutritional indices follow the lower activity of these enzymes (Rharrabe et al., 2008; Khosravi et al., 2010; Hasheminia et al., 2011); other reports also describe inhibition of important enzymes in their respective studies (Senthil-Nathan, 2013; Amin et al., 2019). George et al. (2018) reported the presence of a lectin in Withania leaves which showed insecticidal and growth-inhibitory activity and could also damage the secretory cells of the midgut. Lectins, or agglutinins, which bind specific carbohydrates, are present in certain plant tissues as a defensive strategy against herbivores (Michiels et al., 2010). A fractionated lectin from Polygonum persicaria exhibited inhibitory activity against certain enzymes in Helicoverpa armigera, leading to the death of this insect (Rahimi et al., 2018). Macromolecules such as proteins play a major role in organisms for development, growth and other vital activities (Chapman, 2013). In this study, the reduced amount of protein could be attributed to the insect's inability to synthesize it, or to breakdown of the synthesized protein to compensate for the detoxification process (Vijayaraghavan et al., 2010). Lipids and glycogen are the storage reserves of insects (Chapman, 2013), and many insects depend for their adult reproductive processes solely on the nutrients collected during their immature stages. Thus, reduction of these key components is reflected in at least two stages: first in the developmental stages, showing various deformities or intermediates due to the lack of energy reserves, and second in the adult stage, where longevity and egg production are severely affected. There are a number of reports in the literature of shortages of these critical macromolecules after the application of chemical stresses, including plant products (Smirle et al., 1996; Etebari et al., 2007; Rharrabe et al., 2008; Shekari et al., 2008; Khosravi et al., 2010; Zibaee, 2011). Detoxification is a phenomenon for reducing the toxic effects of exogenous compounds that may be received by any organism (Amin et al., 2019). Two important groups of detoxifying enzymes are the esterases and the glutathione S-transferases. Both were increased, particularly 72 h post-treatment, most likely to counteract the toxic side effects of the Withania extract. This result is similar to other reports of plant products used against insects (Kumrungsee et al., 2014; Murfadunnisa et al., 2019). Phenoloxidase (PO) is an enzyme that links the cellular and humoral immunity of insects (Chapman, 2013). Usually, when the insect is attacked by foreign bodies such as fungal hyphae, nematodes or parasitoid eggs, this enzyme is released by the oenocytoids to melanize the invaders and isolate them before they damage the insect's tissues (Huang et al., 2002; Yu et al., 2003; Ling et al., 2005; Arakane et al., 2009; Beckage, 2011). Several reports indicate decreased PO activity caused by IGRs (Zibaee et al., 2012; Mirhaghparast and Zibaee, 2013; Rahimi et al., 2013). W. somnifera has the characteristics of an IGR, which is particularly evident in its ability to cause incomplete metamorphosis, leading to various intermediate forms.
Therefore, the decrease in PO could be related to the IGR-like behavior of this extract. The middle midgut is the main site of digestion and absorption in many insect orders. In our study, the damage to the midgut cells responsible for secreting digestive enzymes was severe, which clearly supports the inhibition of enzyme activity reported and discussed above. The damage to midgut cells appears to arise in two ways: first through binding of the extract (lectin) to carbohydrates, and second through direct damage to the tissues, so that the enzymes are reduced or not produced at all. Damage to midgut secretory cells by a lectin in the leaves of W. somnifera has been reported recently (George et al., 2018), which corresponds with the present study. Reports by several workers who have tested other plant extracts or even essential oils against the gut of various insect pests are also indicative of similar findings (Murfadunnisa et al., 2019). Cellular immunity is considered an important part of the insect immune system that enables the insect to get rid of intruders (Nation et al., 2008). This ability may take at least three forms: phagocytosis of small intruders, and nodule formation or encapsulation of larger intruders. Plant extracts exert their effects on cellular immunity by changing the THC and DHC, and there are numerous reports of both reduced and increased cell numbers (Hassan et al., 2013). The reduced THC in the present study corresponds to the report of Sendi and Salehi (2010) on Papilio demoleus L. (Lep.: Papilionidae) treated with the IGR methoprene. Similarly, Rahimi et al. (2013) and Khosravi et al. (2014) also showed decreased cell numbers in their respective insects upon IGR treatment. On the effects of plant extracts or essential oils on THC, there is plenty of available literature (Sharma et al., 2003; Ghasemi et al., 2014; Shaurub et al., 2014; Dhivya et al., 2018; Sadeghi et al., 2019). Although the DHC showed an increasing trend, the corresponding micrographs clearly indicate severe damage exerted by the plant extract. Even though the cells are still recognizable as their respective types, they should not be considered true, active hemocytes. An increase in DHC, particularly of the immunocytes (Grs and Pls), has been reported by Shaurub and Sabbour (2017) with Melia azedarach (Meliaceae) fruit extract on Agrotis ipsilon Hufnagel (Lep.: Noctuidae) and by others (Sharma et al., 2008). The increase in DHC is somewhat confusing, since observation of live cells (phase contrast) or of permanent Giemsa-stained slides showed damaged, but not yet degenerated, cells after 24-48 h. Our observations of damaged cells correspond to the reports of other workers in their respective studies (Altuntas et al., 2012; Sadeghi et al., 2019).

FIGURE 10 | The amounts of protein, triglyceride and glycogen in ovaries of 2-day-old adult Glyphodes pyloalis in controls (C), the 5% treatment (T5) and the 15% treatment (T15). Mean ± SE; the same letters above bars indicate no significant difference (p = 0.05), according to Tukey's test.

Reproduction in insects has long been a target for pest control, and chemicals that can suppress insect reproduction have therefore attracted scientists' attention for a long time. Hence, this aspect of pest control via biorational chemicals has been the core of several studies (Sahayaraj and Alakiaraj, 2006; Riddiford, 2012; Ribeiro et al., 2015; Lau et al., 2018; Abdelgaleil et al., 2019; Couto et al., 2019). In the present study, the extract used also suppressed G.
pyloalis fecundity, which was complemented by histological and biochemical investigation of the adult ovaries resulting from treated larvae. Costa et al. (2004) and Milano et al. (2010) reported suppression of fecundity and egg viability by essential oils. Engelman (1998) believes that even ovariole number is affected by insufficient feeding at younger stages and by exposure to secondary metabolites during ovarial development. Birah et al. (2010) and Alves et al. (2014), working on clove and long pepper extracts, observed a juvenoid-like effect that altered the reproduction of S. litura and Spodoptera frugiperda Smith (Lep.: Noctuidae), respectively. A reduction in the number of eggs and their viability by clove oil has also been reported by Cruz et al. (2016) in S. frugiperda. Our results are in agreement with these investigations: the reduced number of eggs laid and the decreased amounts of important compounds, including protein, triglyceride and glycogen, correspond to the histological defects in the ovaries of treated insects.

CONCLUSION
Our results on the effect of W. somnifera seed extract on G. pyloalis larvae clearly support earlier findings on the effects of this plant extract on various insects. We also found that the extract was not only toxic but also prolonged later life stages, thus showing effects similar to exogenous growth hormones; this is presumed from the intermediate forms produced in later stages, which led to insect mortality. This study also throws some light on the mode of action of this toxic substance, evident in the reduced activity of the main digestive enzymes, the reduction of macromolecules, and the severe damage to the cellular structure of the midgut. Another aspect considered in the mode of action of this extract was its immunological effect: severe morphological damage was evident in the hemocytes, making the insects susceptible to disease. We also noticed that those insects reaching the adult stage were unable to produce eggs. The histology of the ovary clearly depicted the loss of yolk, loosening of the trophocytes and formation of vacuoles in the trophocytes and oocytes. Vitellin and chorion were not formed in the terminal oocytes of treated insects, in line with their reduced energy reserves. Therefore, the Withania extract has the potential to be considered for further studies and is a candidate for non-chemical insect pest control.

DATA AVAILABILITY STATEMENT
The datasets generated for this study are available on request to the corresponding author.

AUTHOR CONTRIBUTIONS
JS and ZA conceived, designed, and performed the experiments and wrote the first draft of the manuscript. AK-M and AZ helped in the analysis and edited the manuscript. All authors approved the final draft.

FUNDING
JS expresses his deep gratitude to the Jahade Keshavarzi Organization of Guilan for financial support.
v3-fos-license
2014-05-28T07:37:16.000Z
2013-12-06T00:00:00.000
56008033
{ "extfieldsofstudy": [ "Physics" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://link.springer.com/content/pdf/10.1140/epjc/s10052-014-2912-5.pdf", "pdf_hash": "8abc1c27f79ad5815568064f81fac1657873c962", "pdf_src": "Arxiv", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42249", "s2fieldsofstudy": [ "Physics" ], "sha1": "8abc1c27f79ad5815568064f81fac1657873c962", "year": 2013 }
pes2o/s2orc
Thoughts on entropic gravity in the Parikh-Wilczek tunneling model of Hawking radiation

In this letter, we use the Parikh-Wilczek tunneling model of Hawking radiation to illustrate that a reformulation of Verlinde's entropic gravity is needed to derive Newton's law for a temperature-varying screen, as demanded by the conservation of energy. Furthermore, the entropy stored in the holographic screen is shown to be additive and its temperature dependence can be obtained.

Verlinde's entropic gravity and holographic screen
To explain why gravity is so different from the other three forces in nature, Verlinde proposed that gravity could be regarded as an entropic force caused by changes in the information associated with the positions of material bodies [1]. His idea can be illustrated by a particle with mass m approaching some hypothesized holographic screen. The screen bounds the emerged part of space containing the particle, and stores data that describe the part of space that has not yet emerged, as well as some part of the emerged space. The change in entropy on the screen, denoted ∆S, is assumed to be linear in the particle's displacement ∆x as follows:

∆S = 2πk_B (mc/ℏ) ∆x.    (1)

If one further assumes that the screen is thermalized at the Unruh temperature set up by the particle's acceleration, then Newton's second law can be derived from the entropic force relation:

F ∆x = T ∆S.    (2)

One opposition to this proposal is given by [2], using the measurement of quantum states of ultra-cold neutrons under the Earth's gravity. According to Kobakhidze's argument, a pure-state neutron would have evolved into a mixed state thanks to ∆S > 0 in (1). This criticism, however, is questioned in [3], and one resolution was suggested in [4] by abandoning an implicit assumption in [2] that the entropy on the holographic screen is additive. Instead, unitarity could still be restored even with ∆S > 0 if bits on the screen were entangled in some delicate way. Nevertheless, it remains unclear how the entropy is entangled on the holographic screen. The other restriction of (2) is that a holographic screen at thermal equilibrium corresponds to a particle moving with uniform acceleration. A generalization of the entropic force relation is needed to describe a particle with generic motion. Similar to the generalization of Newton's second law to include varying mass, it is natural to generalize entropic gravity to include a varying screen temperature. With that being said, we consider an adiabatic process such that the temperature of the holographic screen is slowly varying but still well defined while a massive particle moves relative to the screen. The generalized entropic force formula then becomes

F ∆r = T ∆S + S ∆T,    (3)

for varying screen temperature T and screen entropy S, while the particle is displaced a distance ∆r from the screen at a fixed location. In the following, we will illustrate that Verlinde's screen could be realized in the tunneling model of Hawking radiation if a reformulation of entropic gravity is adopted to compensate for the effect of varying temperature.
Following a similar argument as in [2] but for a temperature-varying screen, we show that the entropy can still be additive on the screen and that its temperature dependence can be derived.

Entropic gravity with temperature-varying screen
The original treatment of Hawking radiation by Hawking considers perturbations in the fixed background of a Schwarzschild black hole. The thermal spectrum brought up controversial debates over the information loss paradox. Parikh and Wilczek instead considered radiation as an outgoing tunneling particle, for which the conservation of energy is enforced [5]. It was later confirmed that in their model information is also conserved, by computing the mutual information for two successive radiations [6]. If gravity has an entropic origin, it is desirable to describe Hawking radiation (as a tunneling process), at least for Schwarzschild black holes, as some form of entropy change on a holographic screen. We recall that the back reaction of the radiated particle on the black hole makes it a system away from thermal equilibrium, that is, one with a varying or not well-defined temperature. As a result, it is proper to apply the general formula (3) rather than (2) to the tunneling model. For a screen which stores the same amount of information as the black hole entropy and possesses the same temperature as the black hole, the change of the right-hand side before and after radiation can be computed; it corresponds to the quantum of mass (energy) ω that is discarded from the black hole, up to a factor 1/2.¹ The left-hand side computes the work required to pull a point-like mass (energy) away from the black hole. According to Newton's law of gravity, this work is consistent with the right-hand side. Therefore, the gravitational force can again be identified with an entropic force, even for a nonequilibrium system.

Holographic screen in the Parikh-Wilczek tunneling model of Hawking radiation
Inspired by the reformulation of entropic gravity in the previous discussion, we are motivated to study a holographic screen at an arbitrary position, but admitting a varying temperature in reaction to the displacement of a radiated particle. For symmetry reasons, a spherical screen enclosing the black hole seems a good choice. We denote the information stored on a holographic screen associated with a black hole of mass M as S_M(r). The screen is located at some distance r from the center of the black hole. We may want to have r > 2GM to avoid the ambiguity of a screen inside the black hole. Before proceeding with the general formula (3), we would like to first point out some difficulties that appear in the original formulation of entropic gravity (2) in the tunneling model of black hole radiation. If we assume the entropy stored on the screen follows the simple coarse-grained relation shown in Figure 1, following the neutron-earth system in [2],

S_M(r + ∆r) = S_ω(r + ∆r) + S_{M−ω}(r),    (6)

where ∆r is the distance between the radiated mass and the screen. For an infinitesimal displacement, we have S_M(r + ∆r) ≃ S_M(r) + ∆S_M. If entropy on the screen were additive, that is,

S_M(r) = S_ω(r) + S_{M−ω}(r),    (7)

one obtains ∆S_M ≃ S_ω(r + ∆r) − S_ω(r). Since ∆S_M ∝ ∆r for its entropic origin, we come to the same conclusion that the translation operator is no longer Hermitian, disobeying quantum mechanics. In [4], a generalized entropy formula by Tsallis [8] was invoked to restore the unitarity of translation, where an additional entanglement term was essential.
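As a quick consistency check of the factor-1/2 statement above, one can evaluate both sides of (3) explicitly for a Schwarzschild black hole. The steps below use the standard Bekenstein-Hawking entropy and Hawking temperature in natural units (G = c = ℏ = k_B = 1); this is a sketch of the arithmetic implied by the text, not a reproduction of the paper's own displayed equations.

```latex
% Consistency check in natural units, assuming the standard Schwarzschild relations.
\[
  S_{BH} = 4\pi M^{2}, \qquad T_{H} = \frac{1}{8\pi M}
  \;\;\Rightarrow\;\; T_{H} S_{BH} = \frac{M}{2}.
\]
% Right-hand side of (3): the change of TS when the black hole mass drops from M to M - \omega,
\[
  \bigl|\Delta (TS)\bigr| = \frac{M}{2} - \frac{M-\omega}{2} = \frac{\omega}{2}.
\]
% Left-hand side: the Newtonian work to pull a quantum of energy \omega from the horizon to infinity,
\[
  W = \int_{2M}^{\infty} \frac{M\,\omega}{r^{2}}\, dr = \frac{\omega}{2},
\]
% so both sides equal \omega/2 in magnitude, matching the "up to a factor 1/2" statement in the text.
```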
In our radiation-black hole system, it is known that the Parikh-Wilczek model predicts a non-thermal radiation spectrum, and the correlation between two emissions ω_1, ω_2 was computed to be 8πω_1ω_2, known as the mutual information [6]. This entanglement is necessary for the conservation of entropy. Therefore, it is possible to correct the additivity condition (7) to include a term S_ent > 0 encoding the entanglement between black hole and radiation,

S_M(r) = S_ω(r) + S_{M−ω}(r) + S_ent(r),    (8)

such that entropy is conserved before and after radiation.² Notice that the relative sign of the last term is different from the usual entanglement between two subsystems. This difference in fact makes the situation worse, since now ∆S_M ≃ S_ω(r + ∆r) − S_ω(r) − S_ent(r). For a translation to be unitary, say S_ω(r + ∆r) = S_ω(r), one has ∆S_M < 0, implying a violation of the second law of thermodynamics, which reflects the fact that a black hole cannot evaporate classically. If the Hawking radiation were thermal, one might expect the entropy increase during the thermal process to be large enough to compensate for the entropy reduction due to the splitting of the radiated particle from the black hole. However, in the Parikh-Wilczek tunneling model, one has no such additional source of entropy since the process is nonthermal. Now we turn to the application of the general entropic formula (3). Because the temperature may vary for screens at different locations, we assume the coarse-grained relation (6) is modified as

T(r + ∆r) S_M(r + ∆r) = T(r + ∆r) S_ω(r + ∆r) + T(r) S_{M−ω}(r).    (9)

If we assume a linear relation between screen temperature and small displacement, that is, T(r + ∆r) ≃ T(r) + ∆T, one obtains the corresponding expression for ∆S_M. For the translation of the radiated particle to be unitary, S_ω(r) = S_ω is a constant. The vanishing of ∆S_M for ∆r = 0 suggests S_ent(r) = 0. According to (9), the entropy on the screen is then additive. The temperature dependence can also be obtained by integration:

T(r) [S_M(r) − S_ω] = T(r_0) [S_M(r_0) − S_ω],    (12)

for some reference position r_0. The movement of the radiated particle with respect to the screen does not change the corresponding entropy S_ω; instead, the particle feels different temperatures from screens at different locations. If one recalls the scenario of a massive particle tunneling out of the black hole horizon, the radiated particle experiences less gravitational pull as it moves away from the black hole, as if the holographic screen feels colder to it. The screen temperature reflects the acceleration or deceleration, as in the Unruh effect.

Discussion
In this letter, with the explicit example of the Parikh-Wilczek tunneling model of Hawking radiation, we illustrate that Verlinde's holographic screen could violate the unitarity of quantum mechanics if it were additive in entropy. This sickness cannot simply be cured by the assumption that the entropy on the screen is entangled. Motivated by a general formula of entropic gravity, one shows that the entropy on the screen can still be additive if the relation between screen entropy and temperature is given by (12). Some comments are in order. Firstly, the Parikh-Wilczek model is a semiclassical model which could receive further quantum corrections. For instance, a one-loop correction to the surface gravity was considered in [9]. As a result, the Bekenstein-Hawking area law receives a logarithmic correction, denoted S_α(M) = 4πM² − 4πα ln(1 + M²/α), and the Hawking temperature is modified as T_H = (M² + α)/(8πM³) for a large black hole, where the coefficient α is related to the trace anomaly.
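The quoted forms of S_α(M) and T_H are consistent with each other if the temperature is read off thermodynamically. The short check below assumes the standard identification T = (∂S/∂M)^{-1}; it is an illustrative verification, not a derivation taken from [9].

```latex
% Check, assuming T \equiv (\partial S_\alpha / \partial M)^{-1}:
\[
  S_\alpha(M) = 4\pi M^{2} - 4\pi\alpha \ln\!\Bigl(1+\frac{M^{2}}{\alpha}\Bigr)
  \;\Longrightarrow\;
  \frac{\partial S_\alpha}{\partial M}
  = 8\pi M - \frac{8\pi M}{1+M^{2}/\alpha}
  = \frac{8\pi M^{3}}{M^{2}+\alpha},
\]
\[
  T = \Bigl(\frac{\partial S_\alpha}{\partial M}\Bigr)^{-1}
    = \frac{M^{2}+\alpha}{8\pi M^{3}}
    \;\longrightarrow\; \frac{1}{8\pi M} \quad (M^{2}\gg\alpha),
\]
% recovering the uncorrected Hawking temperature for a large black hole.
```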
One can show that the relation of generalized entropic force (3) is no longer satisfied once these corrections are included. Nevertheless, this might be fixed from two different viewpoints. One way is to modify Newton's gravitational force by a quantum correction up to O(α²), such that ∫F_α(r)dr agrees with the above result. The other way is to rederive the Hawking temperature from the proper definition of T_α(M), instead of using the form obtained in the small-ω limit. It seems unlikely that the loop-corrected F_α(r) and T_α(M) have analytic forms, but at least in principle one can obtain an order-by-order expansion for small α. Once the relation (3) is fixed, our conclusion for the temperature-varying screen will still be valid, at least perturbatively in α. Secondly, although Verlinde's adoption of the entropic force was meant to tame the wild gravity beast, it is natural to ask how to incorporate other forces into his entropic formalism. This generalization has been considered for the electromagnetic force, by the inclusion of a chemical potential in the first law of thermodynamics [10]. In particular, for the tunneling model of Reissner-Nordström black hole radiation [11], one expects the relation (3) to be generalized so that the gauge potential and the total black hole charge are identified as the chemical potential and the number density on the screen, respectively. On the left-hand side, one has to include the electromagnetic force for a consistent result. If there exists a holographic screen for charged black holes, it must also carry degrees of freedom for the charges. The black hole censorship constraint Q ≤ M would translate into an upper bound for the charge density on the screen. The breakdown of the screen for excess charge would correspond to a possible naked singularity or closed time-like curve. The recently found charge-mass-ratio bound for emission from an RN black hole [12] seems to support this picture by stating that the part of the screen which corresponds to the radiated particle also has an upper bound on its charge density.
v3-fos-license
2018-04-03T06:13:46.494Z
2016-12-07T00:00:00.000
10953472
{ "extfieldsofstudy": [ "Medicine", "Chemistry" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/srep38754.pdf", "pdf_hash": "6bf1fd85c54b377e6636d8bf978f78cf207386e1", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42251", "s2fieldsofstudy": [ "Biology", "Medicine", "Engineering" ], "sha1": "187f53db975114e82d15d4944596284338c4cdda", "year": 2016 }
pes2o/s2orc
Progressive Muscle Cell Delivery as a Solution for Volumetric Muscle Defect Repair

Reconstructing functional volumetric tissue in vivo following implantation remains a critical challenge facing cell-based approaches. Several pre-vascularization approaches have been developed to increase cell viability following implantation. Structural and functional restoration was achieved in a preclinical rodent tissue defect; however, the approach used in this model fails to repair larger (>mm) defects as observed in a clinical setting. We propose an effective cell delivery system utilizing appropriate vascularization at the site of cell implantation that results in volumetric and functional tissue reconstruction. Our method of multiple cell injections in a progressive manner yielded improved cell survival and formed volumetric muscle tissues in an ectopic muscle site. In addition, this strategy supported the reconstruction of functional skeletal muscle tissue in a rodent volumetric muscle loss injury model. Results from our study suggest that our method may be used to repair volumetric tissue defects by overcoming diffusion limitations and facilitating adequate vascularization.

Cell-based therapies in tissue engineering (TE) and regenerative medicine (RM) provide promise to restore the normal functions of damaged and injured tissues and organs 1 . Such strategies include cell transplantation and implantation of engineered tissue constructs, where efficient cell survival following implantation is a critical factor for success. Cell-based strategies have been used successfully in preclinical and clinical trials to treat defects in avascular tissues, such as cartilage and cornea, which do not necessitate a blood supply to maintain cellular viability and function under hypoxic conditions [2][3][4] . Small injuries in vascularized tissues, corresponding to a few microns, can be repaired using cell-based approaches because the implanted cells remain viable due to direct transport of oxygen and nutrients within 200 μm from host vasculatures 5-9 , as well as diffusion from adjacent host blood vessels. Skin regeneration has been achieved using cell-based therapy 10,11 ; however, efficient treatment of defects larger than millimeter or centimeter scale in vascularized tissues and organs such as heart, liver, and skeletal muscle remains challenging.
In most cases, repair of larger tissue defects requires implantation of large, volumetric engineered tissue constructs or implantation of high-dose cells [12][13][14] to restore normal functions. Under such conditions, oxygen transport to all of the implanted cells is difficult. In particular, cells located in the center of thick tissues (a few millimeter scales) with low oxygen concentration will become necrotic leading to failure of tissue grafts. To improve the cellular viability within large-sized defects, efficient nutrient and oxygen supply are necessary; 1,15,16 therefore, strategies need to be developed for volumetric tissue repair to improve vascularization, which will have a positive impact on cell survival. To date, several strategies have been developed to accelerate vascularization of engineered tissues. The conventional method used in early studies promoted vascularization for survival of the implanted cells through stimulation of in vivo microenvironments at the time of implantation. To stimulate vascular environments, pro-angiogenic factors such as vascular endothelial growth factors and fibroblast growth factors were incorporated with engineered tissue constructs, followed by the implantation 17 . In other cases, exogenous endothelial stem or progenitor cells were co-seeded with tissue-specific cells before implantation 18,19 . Although incorporation of such vascularization cues resulted in improved vascularization in vivo, formation of new blood vessels within the implant site was too slow to support the majority of implanted cells 7,16,19,20 . Pre-fabrication of vascular networks within engineered tissue constructs during in vitro cell culture of the seeded scaffolds provides an alternative strategy for the repair of a volumetric muscle defect. Morphological characterization has revealed that in vitro pre-vascularized tissues contained well-organized vascular structures and could accelerate vascularization time by providing adequate blood supply to the seeded cells. Unfortunately, host-implant anastomosis of in vitro pre-vascularized tissues usually occurs within several days after implantation; [21][22][23] thus, integration of reconstructed tissue with the host was inefficient. An in vivo pre-vascularization strategy has been developed to fabricate large-sized, vascularized implantable constructs. By implanting the cell-seeded scaffold into the highly vascularized site, vascular tissues could be obtained in vivo and transferred to the target site [24][25][26][27] . In another study, the polysurgery approach was proposed to produce thick, viable myocardial tissues at an ectopic site 28 . This work shows that repeated cell-sheet transplantation at time intervals of 1-2 days can generate vascularized cardiomyocyte sheets in vivo. While those strategies is a promising approach in terms of addressing volumetric tissue defects, several issues such as delayed perfusion, numerous surgical interventions, and inefficient cell grafting within the vascularized explanted tissue must be addressed before clinical use 15 . Therefore, none of the conventional vascularization strategies is appropriate for volumetric tissue repair. Towards this end, we proposed a novel and simple cell delivery method that enables reconstruction of viable, large tissues in vivo for restoration of volumetric tissue injury through an efficient vascularization strategy. 
As described above, conventional cell-based approaches for volumetric tissue repair are limited due to inefficient blood supply for implanted cells. Therefore, we hypothesized that multiple injections of a high dose of cells in a progressive manner would maintain cellular viability through the vascularization process when compared to single injection of the same number of cells for implantation. We utilized the normal vascularization process that occurs during the natural regeneration process (Fig. 1). To show the feasibility of restoring functional volumetric tissues in the defect site, multiple, progressive delivery of cells was performed using ectopic cell transplantation in a subcutaneous site. Appropriate cell delivery parameters such as cell density, cell injection volume, and time interval between injections were tested. The efficiency of volumetric tissue formation was compared with single injection of the same number of cells that were used for multiple injections. Furthermore, this cell delivery technique using C2C12 cells and human muscle progenitor cells (hMPCs) was applied to a rodent volumetric muscle loss (VML) model; moreover, histological and functional recovery was evaluated to determine the possibility for applications to treat critical-size muscle defects. Results Ectopic muscle construction by multiple and progressive cell injection. To investigate the feasibility of restoring volumetric muscle tissues by multiple cell injections in a progressive manner, C2C12 cells were subcutaneously injected in athymic mice, and the volume of the newly formed tissues was measured at pre-determined injection points for comparison. Multiple-cell injections with one week interval between each injection resulted in an increased volume of the implants (see Supplementary Fig. S2a). Quantitatively, increased number of cell injections (up to 8 injections) correlated with an increased implant volume (Fig. 2a). Particularly, 6-8 cell injections demonstrated a statically significant difference (ANOVA and post hoc Tukey Test) when compared with 2-4 cell injections (*P < 0.0001 and † P < 0.005, respectively). The volumes of implants in all multiple, progressive cell injected groups showed a significant increase compared to the progressive gel only-injected groups (ANOVA, ‡ P < 0.001). In hematoxylin and eosin (H&E) and masson's trichrome (MT) staining images (see Supplementary Fig. S2b), increase in the volume of reconstructed tissue formation was notable between 2 and 4 cell injections, but no significant size difference was observed beyond 4 cell injections. In each group, the MT images demonstrated that the newly formed tissue structures were skeletal muscle fibers as confirmed by red staining within the implant. The efficiency of progressive cell delivery in the reconstruction of volumetric tissue was determined by comparing to the single injection of cells. In this comparison, total volume (1.2 ml) and cell number was set to be equal between multiple (4 × 0.3 ml per injection) and single injection (1.2 ml for one injection); furthermore, 4 progressive cell injections were selected as more than 4 injections would yield an unimplantable volume for single injection to the animal. The volume of the reconstructed tissue of the progressive cell injection group showed approximately 4-fold increase as compared to that of the single injection of cells with a statistical difference (Student's t-test, n = 4, *P < 0.0001) (Fig. 2b). 
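For clarity, the dose matching between the two arms works out as follows; this is just the arithmetic implied by the stated cell density and volumes (10 × 10 6 cells per ml, 4 × 0.3 ml versus 1 × 1.2 ml), written out as a small sketch with assumed variable names.

```python
# Sketch of the dose-matching arithmetic described in the text (values from the study design).
CELL_DENSITY_PER_ML = 10e6          # selected concentration, cells per ml

def total_cells(volume_ml_per_injection, n_injections):
    return CELL_DENSITY_PER_ML * volume_ml_per_injection * n_injections

progressive = total_cells(0.3, 4)   # 4 progressive injections of 0.3 ml each
single      = total_cells(1.2, 1)   # one bolus injection of 1.2 ml

assert progressive == single        # same total dose: 1.2e7 cells per arm
print(f"total cells per arm: {progressive:,.0f}")
```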
Interestingly, the results of green fluorescent protein (GFP) immunostaining revealed that the progressively injected C2C12 cells contributed to the formation of muscle fibers in vivo, as confirmed by GFP + muscle fiber-like structures within the reconstructed tissue (Fig. 2c), whereas only a few scattered GFP + C2C12 cells were found in the implant after single injection. This result demonstrates that multiple, progressive cell injections are more effective than single injection for the formation of a volumetric muscle-like structure at a muscle ectopic site.

Figure 2. The efficacy of progressive cell delivery was compared with single cell delivery in terms of implanted cell survival and volumetric tissue construction. The total injected volume of the 4 progressive cells-in-gel injections was the same as that of the single injection. The implant volume (mm 2 ) of the single and 4-progressive injections is compared in (b). Student's t-test (n = 4); *P < 0.0001 versus single cells-in-gel injection. Representative H&E images of single and progressive cells-in-gel injections are shown in the upper row of (c). Scale bars, 1 mm. Implants were stained for GFP (green) to identify injected cells; representative images are shown in the lower row of (c). The implanted site of the single cells-in-gel injection is demarcated by a white dashed line. Scale bars, 200 μm.

Efficient muscle cell survival, muscle tissue formation, and vascularization by progressively injected cells. To examine the muscle tissue formation and maturation of progressively injected C2C12 cells, myosin heavy chain (MHC) immunostaining was performed on the reconstructed tissue. Throughout the reconstructed tissue, viable MHC + muscle fiber-like structures and cells were clearly visualized, as indicated by the dotted lines (Fig. 3b, first row). In the area of the early 1 st injection (inner site of the implant), a number of MHC + multi-nucleated muscle fibers were observed, while a few MHC + cells with pre-matured structures were found in the area of the last 4 th injection (outer site). This observable difference in muscle maturation demonstrates that multiple cell injections performed in a progressive manner allow muscle cell survival, muscle formation, and maturation of the injected cells in an ectopic site. Timely vascularization of the injected C2C12 cells is a critical factor for re-creating volumetric muscle tissue, as it provides an adequate blood supply to the grafted cells. We hypothesized that each injection of cells in a progressive manner would promote efficient vascularization with the host. To test the hypothesis, double staining for GFP and von Willebrand factor (vWF) was performed to determine whether the injected GFP + C2C12 cells are localized with vWF + blood vessels for improved cell survival. The double fluorescent imaging showed that GFP + C2C12 cells are present throughout the injected area and are localized within 100-200 μm of vWF + blood vessels (Fig. 3b, second row). This finding suggests that progressively injected cells remained viable along with the newly formed vasculature in the injected area up to 4 weeks after injection.

Vascularization and neuronal ingrowth by progressive cell injections. To compare vascularization outcomes between the progressive and single cell injections, vWF/α-smooth muscle actin (α-SMA) immunostaining was performed in each group (Fig. 3a,b).
Quantitatively, new blood vessel formation and vascular maturation were assessed by counting the number of vWF + vessels (per field) and calculating the area of vWF + vessels (μm 2 per field), the maturation index (%) and the percentage of different vessel sizes (Fig. 3c, n = 4, 9-12 fields per sample). The number and area of vWF + vessels (/field) in the progressive injection samples significantly increased, as compared to those of the single cell injection group (Fig. 3c-i,c-ii, Student's t-test, *P < 0.05). However, the level of vascular maturation showed no significant difference between the two groups (Fig. 3c-iii, Student's t-test, P > 0.05). The majority of blood vessels observed in the single cell injection group were less than 200 μm in diameter, whereas a higher percentage of larger blood vessels greater than 500 μm in diameter was present in the progressive cell injection group (Fig. 3c-iv, Student's t-test, *P < 0.05). These results indicate that multiple, progressive cell injection is an effective cell delivery method for achieving a vascularized volumetric muscle structure. To examine whether vascularization and maturation during progressive cell injections occur in a normal and physiological manner, blood vessel formation in the progressively injected areas (the 1 st , 2 nd -3 rd , and 4 th injected areas) was quantified using vWF/α-SMA immunostaining images (Fig. 3d, n = 4, 3-5 fields per area in each sample).

Figure 3 (continued). High-magnification images of the first, second-third, and fourth injected areas are shown in the second, third and fourth columns, respectively. Scale bars, 50 μm (x400 magnification). (c,d) In vivo vascularization of single versus progressive injection (c) and of each injected area in the progressive injection (d) was evaluated from vWF/α-SMA staining images (x400 magnification) in terms of the number of vWF + vessels (per field, i), area of vWF + vessels (μm 2 per field, ii), maturation index (%, iii) and percentage of each vessel size class (iv). Maturation index (%) = α-SMA + vessels/total vessels × 100. (c) Student's t-test (n = 4, 3-4 fields per sample), *P < 0.05 versus single injection. (d) ANOVA, Tukey test (n = 4, 3-5 fields per area in each sample). *P < 0.05 versus the 1 st injection, † P < 0.05 versus the 2 nd -3 rd injection.

Large and matured vessels (vWF + α-SMA + ) were observed in the areas surrounding both the 1 st injection site and the 2 nd -3 rd injections (Fig. 3b, third row). More immature vessels (vWF + α-SMA − ) were visualized in the area of the 4 th (last) injection than at any of the other sites. The site of the 4 th injection showed the highest number of vWF + blood vessels and the lowest area of vWF + vessels, with a significant difference (Fig. 3d-i,d-ii, ANOVA, *P < 0.05 with the 1 st injection, † P < 0.05 with the 2 nd injection, Tukey test), which demonstrates more newly formed small-sized capillaries at this injection site when compared with the earlier injected sites. In terms of the degree of blood vessel maturation, the 1 st injection site (inner site) showed the highest maturation index compared with the other three sites (Fig. 3d-iii, ANOVA, *P < 0.05 with the 1 st injection, † P < 0.05 with the 2 nd injection, Tukey test). These quantitative results correlate with the percentage of blood vessels as a function of blood vessel size (Fig. 3d-iv). This finding is consistent with the normal angiogenesis described in another study 29 . In contrast, the inner area of implants constructed by single cell injection showed the lowest area of vWF + vessels, and there were no significant differences in the number, maturation or percentage of blood vessels among the inner, middle and outer areas (see Supplementary Fig. S3). In addition, neuronal ingrowth, which is a critical cellular event in functional muscle regeneration, appeared to occur normally. The progressive cell injection facilitated recruitment of Neurofilament (NF) + peripheral nerves at each injection site (Fig. 3b, lowest row). These results showed that multiple cell injections in a progressive manner maintained the viability of the delivered cells within the muscle ectopic site while single injection did not; furthermore, the vascularization and neuronal ingrowth that were observed facilitated the volumetric skeletal muscle construct in vivo.

Improved structural and functional recovery of critical defect in TA muscle by progressive cell injection. Encouraged by the promising outcomes of the ectopic implantation study, we applied the progressive cell delivery method to restore muscle function of the injured skeletal muscle, using a critical-size volumetric muscle defect to test the efficiency of the progressive cell delivery method. As a volumetric muscle defect model, 30% of the original tibialis anterior (TA) muscle mass was excised; with this level of muscle loss the TA muscle cannot fully recover for several months without any treatment. Therapeutic efficacy of progressive cell injections was evaluated by anatomical and functional analysis (n = 4 per group). Grossly, the TA muscle of the progressive injection group, harvested at 4 weeks after the first injection, showed better TA anatomy than the no treatment (defect only), gel only, or single injection groups (see Supplementary Fig. S4). As observed in the gross images, single injection of C2C12 recovered the defected TA muscle with a 1.3-fold increase in TA muscle mass compared with the no treatment and gel only injection groups (Fig. 4a). The ratio of treated-to-contralateral TA muscle mass of the progressive C2C12 injection group showed a 1.3-fold increase compared to that of the single C2C12 injection group, with a significant difference (ANOVA and Tukey test, § P < 0.017) (Fig. 4a). Interestingly, the increased muscle mass correlates with improved muscle function. Tetanic muscle force of the progressive C2C12 injection group showed a 2-fold increase, as compared to that of the single C2C12 injection group (ANOVA and Tukey test, § P < 0.016). The muscle function improvement demonstrated by the progressive C2C12 injections was approximately 42% of normal TA muscle. The multiple progressive injections of hMPCs showed a similar pattern in terms of muscle function improvement. When compared with the single hMPC injection, the progressive hMPC injections facilitated 1.5- and 2.6-fold increases in muscle mass and function, respectively (Student's t-test, *P < 0.05) (Fig. 4c). Progressively injected hMPCs in the TA muscle showed 39% restoration of muscle function, as compared with normal TA muscle. The improved functional outcome for both cell types was evidenced by histological analysis. Based on the H&E images, the TA muscle thicknesses of the progressive C2C12 and hMPC injection groups were 1.6- and 1.2-fold higher than those of the single injections, with a statistical difference (ANOVA and Tukey test, § P < 0.001 and Student's t-test, *P < 0.003, respectively) (Fig. 4b,d).
To evaluate the level of fibrosis in the muscle tissue, collagen I/MHC immunostaining was performed. The percentage of collagen I + area in the inner TA and outer TA was quantified (n = 4, 3 random fields per area in each sample) (see Supplementary Fig. S5). In all groups, severe fibrosis was observed in the outer area of the injection sites when compared with the inner TA. However, the progressive injection group, regardless of the cell type, showed reduced levels of fibrosis compared with the single cell injection group in both the inner TA and outer TA (ANOVA and Tukey test, P < 0.05). In particular, the degree of fibrosis of the inner area in the progressive C2C12 injection group was comparable to normal TA muscle, with no statistical difference (ANOVA and Tukey test, *P > 0.997). Overall, multiple cell injections in a progressive manner constitute an effective cell delivery method in terms of anatomical and functional improvement of a volumetric muscle defect, when compared with no treatment, cell delivery vehicle only, and single delivery of the same cell number.

Muscle fiber formation of progressively injected cells and vascularization and neuronal ingrowth. To examine whether each progressive cell injection can efficiently deliver the infused cells to the injured muscle, the fluorescently labeled cells were tracked in the tissue sections. In the inner site of the TA muscle, labeled cells (indicated by white arrows) were organized with the host muscle tissue. The outer site of the TA muscle showed different cell localization compared with the inner site. In the progressive group, a number of injected, fluorescently labeled C2C12 cells were found to contribute to muscle formation with the host muscle tissues (white arrows). More interestingly, the number of GFP + MHC + fluorescently labeled cells was higher in the progressive injection group than in the single injection group. In addition, a few fluorescently labeled GFP + C2C12 cells at the surface of the TA muscle were not involved in muscle formation, as indicated by white arrowheads. hMPC injection showed a similar pattern of cell distribution to C2C12 cells (Fig. 5b). Progressive injection of hMPCs resulted in efficient distribution throughout the TA muscle, as confirmed by double staining for MHC and human leukocyte antigen (HLA) (Fig. 5b, lower row). More HLA + MHC + chimeric muscle fibers were clearly found in both the inner and outer TA sites (white arrows) with progressive injection but not with single injection. Interestingly, some of the chimeric muscle fibers (white arrows) in the outer area of the TA muscle in the progressive group are notable for the clear involvement of the injected hMPCs (white arrows in outer TA). Differentiation of injected C2C12 cells and hMPCs into myofibers or myotubes was quantified by calculating the percentage of fluorescently labeled Myogenin + C2C12 cells and of Myogenin + human nuclear antigen (HNA) + cells in the C2C12-injected and hMPC-injected TA muscles, respectively (n = 3-4, 3-5 fields per sample). In both the C2C12-injected and hMPC-injected TA muscles, the percentage of fluorescently labeled Myogenin + cells and Myogenin + HNA + cells was higher with progressive injections than with single injection, with a statistical difference (Student's t-test, P < 0.05) (42.74% ± 10.63% and 19.18% ± 6.04% in progressive C2C12 and hMPC injection, respectively) (see Supplementary Figs S6 and S7). Notably, fluorescently labeled C2C12 cells were clearly seen to fuse with host muscle tissues to form muscle fibers, some of which were Myogenin + .
Proliferation of injected cells in the TA muscles was evaluated using fluorescent staining for proliferating cell nuclear antigen (PCNA). The percentage of proliferating C2C12 cells (fluorescently labeled PCNA + ) and hMPCs (PCNA + HNA + cells) in the progressive injection group was 33.31% ± 15.56% and 31.99% ± 15.84%, respectively, which is higher than in the single injection group (n = 3-4, 3-5 fields per sample, Student's t-test, P < 0.05) (see Supplementary Figs S6 and S7). To examine the vascularization and neuronal ingrowth associated with the injected hMPCs, vWF/α-SMA and NF/AChR/MHC immunostaining were performed, and double- and triple-positive staining was visualized, respectively. Quantitative results indicated that only progressive hMPC injections showed increased vascularization, in terms of the number and area of vWF + vessels (/field) (n = 3-4, 6-10 fields per sample, ANOVA and Tukey test, P < 0.05) (Fig. 6b,c), while the degree of vascularization was not statistically different among the no treatment, gel only and single hMPC injection groups (ANOVA and Tukey test, P > 0.05). Meanwhile, the degree of vascularization between the inner region and outer region in each group was not statistically different in terms of the number and area of vWF + vessels (/field), maturation index (%) or percentage of different vessel sizes (data not shown, Student's t-test, P > 0.05). Neuronal ingrowth into the TA muscle was confirmed as NF + /AChR + /MHC + staining in both progressive and single injection, and the level of neuronal ingrowth was not significantly different between the injection methods (see Supplementary Fig. S8).

Figure 5. Identification of injected cells and their myotube or myofiber formation in the injured muscles. (a) Representative staining images of single-C2C12 and progressive-C2C12 injected TA muscles. For both single and progressive injection, GFP + C2C12 cells were injected and identified by staining for GFP (green). Myotubes or myofibers were stained for MHC (grey). GFP + C2C12 cells were labeled with DiI (red) in the single-C2C12 injection. In the progressive-C2C12 injection, the first and fourth injected GFP + C2C12 cells were labeled with DiI (red). Yellow arrows, GFP + /DiI + cells. White arrows, MHC + /GFP + /DiI + cells. White arrowheads, MHC − /GFP + /DiI + cells. (b) Representative staining images of single-hMPC and progressive-hMPC injected TA muscles. Injected hMPCs were identified by HLA staining (red) and myotubes or myofibers were stained for MHC (green). White arrows, MHC + /HLA + cells. (a,b) Scale bars, 500 μm in the left column and 50 μm in the middle and right columns.

Discussion
With increasing interest in the translation of cell-based therapies from preclinical to clinical applications, appropriate treatment of large volumetric tissue defects relies on efficient cell survival following implantation 15,30 . While several cell-based approaches using TE and RM techniques have facilitated successful outcomes in terms of recovery of avascular tissue function under effective tissue regeneration conditions 2-4 , reconstruction of volumetric and highly vascularized tissues or organs on a large scale (>mm to cm) in vivo remains challenging. Decreased cell survival of the implanted cells due to an insufficient blood supply has limited the efficient integration of such large constructs with the host vascularization 1,15,16 .
To address this issue, several pre-vascularization approaches have been developed and many have demonstrated successful reconstruction of vascularized tissues in vitro and in vivo 15,31 . The time-consuming vascularization process, however, must be overcome to allow for volumetric repair 15,30 . Currently, no efficient method is available to treat volumetric tissue defects; therefore, we developed an effective cell delivery method to construct a large muscle tissue through an efficient vascularization process in vivo. Our histological and immunohistochemical results demonstrate that muscle cell delivery by multiple cell injections in a progressive manner facilitated large-scale (> mm) muscle tissue formation in an ectopic mouse model, and the reconstructed tissue was well integrated with the host vascular system and showed neuronal ingrowth. When this technique was applied to a skeletal muscle injury model with a critical volumetric defect, the progressive cell injection demonstrated enhanced muscle tissue reconstruction and improved functional recovery when compared with single injection. From these results, we suggest that our multiple, progressive cell delivery system is a promising and effective method to reconstruct volumetric muscle and thereby improve muscle function in vivo. The idea of 'multiple and progressive cell delivery' for volumetric tissue reconstruction arose from a phenomenon frequently observed in cell-based approaches 32 : decreased survival of transplanted cells or implanted engineered tissue constructs due to insufficient delivery of oxygen and nutrients during the time-consuming process of vascularization. Large defects (e.g. mm-cm in size) are particularly affected by decreased cell viability when vascularized tissues need to be repaired. In this case, a high number of cells is usually required for treatment, and the requirement for oxygen and nutrients increases with the increased cell number 1 . To address this critical issue, we hypothesized that multiple cell injections performed in a progressive manner would effectively enhance volumetric tissue formation in vivo (Fig. 1). This delivery method is primarily designed to improve the viability of the delivered cells at each injection, allowing efficient vascularization to surround the cells at the delivery site. The cells delivered in the first injection, at an appropriate cell number and injection volume, will obtain sufficient oxygen from the host vessels at the injection site. Following the first injection, the host vasculature will surround the injection site within a few days and provide vascular beds for subsequent cell injections. The period of vascularization following each subsequent injection results in the formation of a suitable angiogenic environment for the injected cells. Repeating the cell injection process allows a larger and thicker tissue with structural and functional properties to be reconstructed in vivo (Fig. 1). To test the hypothesis, we utilized an ectopic implantation model through subcutaneous injection in athymic mice. The subcutaneous injection model was chosen because it provides a highly vascularized, non-myogenic site that is easily accessible for cell injection.
It was particularly important to evaluate muscle regeneration within a non-myogenic environment to evaluate whether the volumetric muscle formation in vivo occurred as a result of the injected muscle cells following the progressive cell injection strategy and without any contribution of host muscle tissue. Using this model, we first attempted to optimize cell density for injection. Subcutaneous injection of three different cell concentrations showed dramatically different results in terms of cell survival, where highest cell density (30 × 10 6 cells per ml) indicated higher necrosis with an exponential increase (see Supplementary Fig. S1), thus, indicating that a higher cell density within the implanted area exceeded the amount of oxygen and nutrients that could be supplied by the host. It is plausible that high cell concentration increased oxygen consumption by cells, while diffusion of oxygen decreased in denser tissues. Therefore, cell concentration of 10 × 10 6 cells per ml was selected for this study to prevent necrosis and improve tissue formation. In addition to the selection of an appropriate cell density for the cell injection, the time interval between each cell injection was an important parameter to be considered. As a proof-of-concept study, we chose an interval of one week between each of the multiple cell injections based on the time for normal vascularization to occur in the body 29 . Generally, the new vascularization surrounding the cell injection site will occur within one week and new blood vessel formation under maturation will occur at 2-3 weeks. We attempted to perform four series of cell injections in a progressive manner in the ectopic implantation study (Fig. 3); therefore, we expected that the 1 st -3 rd injection, which occurred at least two weeks before harvesting the reconstruct tissue formation, would form the appropriate vascular maturation while the 4 th (final) injection site would show newly formed capillaries. The results of immunostaining and quantification analysis confirm that one week between each injection established an appropriate time interval to produce efficient vascularization along each cell injection (Fig. 3b,d). Our results are consistent with the outcomes from another study that established a vascular chamber in vivo using an arterio-venous loop (AV loop) and optimized the timing of cell implantation to determine efficient cell survival 33 . Their results showed that angiogenic activity peaked between 7-10 days after insertion of the AV loop and suggested that as further vascularization led to an increased survival among the implanted cells. This study revealed that delayed cell implantation (at day 7) into a site with well-established vessels could improve cell survival 33 . Another parameter for successful cell injection strategy involves an accurately controlled injection volume to reduce cell necrosis after cell transplantation. Generally, the implanted cells can obtain an adequate oxygen and nutrient supply within 200 μ m from adjacent vasculatures as well as through diffusion at a distance of 0.2-0.3 cm 1,34 . While cell density was optimized at 10 × 10 6 cells per ml for injection (see Supplementary Fig. S1), the higher cell injection volume in the single injection resulted in significantly lower cell survival compared to progressive injection (Fig. 2c). Thus, injection volume can affect cell survival. 
Since the 1.2 ml volume of the single injection corresponds to 1.2 cm3 (> 1 × 1 × 1 cm), the cells localized to the center of the fibrin gel will encounter a hypoxic condition with limited oxygen diffusion from the host vessels 1,15,16,20 , and the result will be increased cell necrosis. Eventually, the increased cell death will lead to failure of the volumetric tissue construction (Fig. 2c, single injection). Meanwhile, the volume of each injection in the progressive injection model was 0.3 ml, equivalent to 0.3 cm3, which can reconstruct an implant of approximately 300 mm3 in volume through diffusion of oxygen from the surrounding blood vessels. The injected cells obtain an adequate blood supply through diffusion and have greater survival. Therefore, the injection volume should be controlled to ensure cell survival and subsequent tissue reconstruction within the implant site (Fig. 3). Optimization of several parameters, such as cell density, the time interval between cell injections, and the volume of each cell injection, allowed us to demonstrate that a progressive cell injection model can facilitate volumetric muscle tissue construction through efficient vascularization events in an ectopic site (Fig. 3). We show that the levels of vascularization in the progressive cell injection group were significantly higher than those of the single injection group in the ectopic implantation study. Notably, the vascular formation pattern in the core region was significantly different. While the progressive cell injections resulted in the formation of larger (> 500 μm) and mature blood vessels in the core region, the area of blood vessels present in the single injection group was significantly lower, by a 300-fold difference. Interestingly, the majority (90%) of the blood vessels found in the single cell injection group consisted of small capillaries (< 200 μm) (Fig. 3 and Supplementary Fig. S3). As such, it is speculated that the progressive cell injection strategy facilitates the formation of volumetric viable tissue by establishing vascular networks throughout the tissue construct, even in the core region, thus overcoming the problems resulting from the conventional single cell injection method, such as a necrotic tissue core due to diffusion limitation. Encouraged by these promising outcomes, this novel cell delivery system was applied to treat a critical-sized muscle tissue defect. As the first target for reconstruction of damaged tissues or organs, the strategy of progressive cell injections was applied to skeletal muscle tissue injuries, particularly VML, which is caused by traumatic or surgical loss. VML is a challenging clinical problem for military, civilian, and sports medicine since skeletal muscle is a relatively large, thick tissue and VML often involves damage to other tissues or organs such as skin, bone, and internal organs 35 . As an animal model for VML, we utilized a rat TA muscle defect model developed by Wu et al. 36 , with modification. This is a standardized rodent model of VML injury generated by excising ~20% of the middle of the TA muscle, in which muscle weight and tetanic muscle force are not recovered even after 6 months 36,37 . In this study, we introduced a larger defect size (~30% excision of the TA muscle) to produce a critical-sized muscle defect animal model.
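As a back-of-envelope illustration of the volume argument above, one can compare the depth of the implant core with the cited 0.2-0.3 cm oxygen diffusion distance. The spherical-bolus geometry and the printed interpretation below are simplifying assumptions for illustration, not measurements from the study.

```python
import math

# Oxygen reaches cells roughly 0.2-0.3 cm from host vessels by diffusion (values cited above).
DIFFUSION_LIMIT_CM = (0.2, 0.3)

def core_depth_cm(volume_ml: float) -> float:
    """Depth of the implant core below its surface, modelling the gel bolus as a sphere
    (1 ml = 1 cm^3). This is the distance oxygen must diffuse from surrounding host vessels."""
    return (3.0 * volume_ml / (4.0 * math.pi)) ** (1.0 / 3.0)

single = core_depth_cm(1.2)       # one 1.2 ml bolus
progressive = core_depth_cm(0.3)  # each 0.3 ml increment, before host vessels invade it
print(f"single 1.2 ml bolus: core ≈ {single:.2f} cm deep (limit {DIFFUSION_LIMIT_CM} cm)")
print(f"0.3 ml increment:   core ≈ {progressive:.2f} cm deep before vascularization")
# For progressive injections, the one-week interval lets host vessels grow into each
# increment, so later injections sit on an already vascularized bed and the effective
# diffusion distance is much shorter than this naive spherical estimate.
```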
The anatomical and functional analysis demonstrated that our progressive cell injection model could significantly increase muscle mass and thickness, reduce fibrosis and partial restoration of muscle function in the TA defect when compared to that of a single injection (Fig. 4); moreover, recovery was confirmed by myotubes or myofibers formation by progressively injected cells (Fig. 5 and Supplementary Figs S6 and S7) as well as vascularization (Fig. 6) and neuronal ingrowth (see Supplementary Fig. S8). Multiple cell injections in a progressive manner showed better cell engraftment over the single injection method, as evidenced by the presence of numerous GFP + or DiI + cells and HLA + or HNA + cells in the progressive cell injection group, as compared with those in the single cell injection group. In addition, higher proliferation of the engrafted cells (30% of engrafted hMPCs) was observed in the progressive injection group than in the single cell injection group. Based on these observations, it is speculated that if the same number of cells are injected, the strategy of progressive cell injections would increase cell engraftment and proliferation, as compared to the single injection method. Moreover, the myogenic capacity of engrafted cells would be increased by the progressive injection strategy. The percentage of differentiating cells in the progressive-C2C12 and hMPCs injection was 43.74% and 19.18%, respectively, at 1 week after the final injection. While the current study showed the significant potential of the progressive cell injection strategy, several limitations remain to be solved before the application could be translated. Since volumetric tissue repair and functional recovery was evaluated only up to 4 weeks in a VML injury model, reliability of the therapeutic effects need to be demonstrated over a long-term. In terms of defect size, in this study, we used a TA injury model of approximately 30% muscle mass defect. Although the extent of defect is significant in a rodent model, it is unclear whether this defect size reflects the clinical conditions presented in humans. Further investigations using a larger animal model with critical defect size and mass 38 should be performed to determine the effectiveness of the progressive cell injections strategy. Muscle function is closely related to innervation and anti-fibrosis; therefore, specific factors that facilitate efficient innervation and reduction in fibrosis should be considered to improve muscle function. Such factors include agrin 39 or suramin 40 to accelerate innervation or reduce fibrosis, respectively. This manuscript describes a proof-of-concept study showing that multiple cell injections result in enhanced cell survival than single injection, which contributes to improved muscle function structurally and physically. To prove the hypothesis, we developed a multiple injection protocol that is performed in a progressive manner with several cell injection parameters. Although we have obtained positive outcomes, in terms of muscle recovery using a pre-clinical animal model, translation of this technology into the clinical settings requires further optimization and refinement, as well as validation in a clinically relevant animal model. For example, translation of this technique to clinical applications needs modification of the cell injection parameters, depending on the target tissue and defect size including injection volume, injection time interval and cell concentration. 
Selection of a cell delivery vehicle should also be considered for clinical translation. In this study, fibrin gel was used as the cell delivery vehicle since it has been widely used in various clinical applications. Because a hydrogel system such as fibrin gel usually displays weak mechanical properties, other biocompatible materials with enhanced mechanical strength should be identified in order to maintain the implant volume for a longer period of time. In conclusion, our study provides a novel cell delivery strategy utilizing an appropriate and efficient in vivo vascularization process to overcome the reduced cell survival that limits current cell-based therapies. The concept of "multiple and progressive cell injections" was supported by demonstrating that progressive cell injections resulted in improved cell survival through normal and efficient angiogenic events surrounding the implant and led to reconstruction of volumetric muscle tissues in vivo. In addition, this novel strategy was applied to a critical-size muscle defect to show restoration of muscle mass and function in a VML animal model. Therefore, multiple cell injections in a progressive manner present a promising strategy for volumetric tissue repair in TE and RM.
Methods
Cell culture and materials preparation. C2C12 mouse myoblasts (ATCC, Manassas, VA) were transduced with GFP to prepare GFP+-C2C12 cells with a method developed previously 41 . GFP+-C2C12 cells were cultured in DMEM/high glucose (Thermo Scientific Inc., Waltham, MA) supplemented with 10% fetal bovine serum (FBS, Gibco, Carlsbad, CA) and 1% penicillin/streptomycin (PS, Thermo Scientific) at 37 °C with 5% CO2. During cell culture, the GFP expression of the C2C12 cells was confirmed by fluorescence imaging. As another cell source for this study, hMPCs were used after isolation and expansion. hMPCs were isolated from human muscle biopsies as previously described 42 and expanded in a growth medium composed of DMEM/high glucose, 20% FBS, 2% chicken embryo extract (Gemini Bio-Products, West Sacramento, CA) and 1% PS. Cells were expanded up to passage 4 for the cell injection studies. As a vehicle for the cell injection, a fibrin gel system was used. To form the fibrin gel, 40 mg ml−1 fibrinogen solution and 40 U ml−1 thrombin solution (Sigma, St. Louis, MO) were prepared by dissolving fibrinogen from bovine plasma (Sigma) in 0.9% sodium chloride saline solution, and bovine thrombin (Sigma) in 25 mM CaCl2 in saline solution, respectively. For cell injection, the muscle cells were suspended in the fibrinogen solution and adjusted to a cell concentration of 20 × 10 6 cells per ml to yield a final cell concentration of 10 × 10 6 cells per ml in the fibrin gel after mixing with the thrombin solution at a 1:1 ratio.
Ectopic cell injection. All animal procedures were performed in accordance with a protocol approved by the Institutional Animal Care and Use Committee at Wake Forest University School of Medicine. Male athymic mice (6-8 weeks old, 24 mice in total) were obtained from Charles River Laboratory (Wilmington, MA). Anesthesia was induced using 3% isoflurane before surgical procedures. Under aseptic conditions, subcutaneous injections into the dorsal to dorso-lateral region were performed. For progressive cell injections, C2C12 cells in fibrin gel were delivered into the left dorsal region of the mice for 2, 4, 6, or 8 injections (n = 3-4). Injections were performed every 7 days at the same site as the former injection.
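The gel preparation above fixes a simple relationship between the pre-mix and final cell concentrations: because the cell-laden fibrinogen is combined 1:1 with thrombin, the suspension must be prepared at twice the target density. A minimal sketch of that dilution arithmetic follows; the 0.3 ml and 1.2 ml example volumes correspond to the progressive increments and the single bolus used in this study, while the helper function itself is only an illustration.

```python
def fibrin_cell_prep(final_gel_volume_ml: float,
                     final_cell_conc_per_ml: float = 10e6,
                     mix_ratio: float = 0.5) -> dict:
    """Volumes and cell numbers for a fibrin-gel cell injection.

    The cell suspension in fibrinogen is mixed 1:1 (mix_ratio = 0.5) with thrombin,
    so the pre-mix cell concentration must be final_cell_conc / mix_ratio.
    """
    fibrinogen_ml = final_gel_volume_ml * mix_ratio
    thrombin_ml = final_gel_volume_ml * (1.0 - mix_ratio)
    premix_conc = final_cell_conc_per_ml / mix_ratio          # 20e6 cells/ml here
    total_cells = final_cell_conc_per_ml * final_gel_volume_ml
    return {
        "fibrinogen volume (ml)": fibrinogen_ml,
        "thrombin volume (ml)": thrombin_ml,
        "cell conc. in fibrinogen (cells/ml)": premix_conc,
        "total cells needed": total_cells,
    }

# Each 0.3 ml progressive increment (150 µl fibrinogen + 150 µl thrombin) carries 3e6 cells;
# the 1.2 ml single bolus carries the same total as four increments (12e6 cells).
print(fibrin_cell_prep(0.3))
print(fibrin_cell_prep(1.2))
```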
As control, the same volume of gel without cells was injected into the contralateral region of each animal in the same manner. For each injection, 150 μ l volume of fibrinogen solution with or without cells was injected using a 26-gauge needle. An equal volume of thrombin solution was immediately injected at the same site, where the injected cells will be placed within the fibrin gel. Animals were euthanized 1 week after the final injection. To determine efficiency of progressive cell injection, the volume of implant after 4 injections was evaluated and the measured volume was compared with that by single injection of cells. For the single injection, 600 μ l of fibrinogen solution with cells and 600 μ l of thrombin solution were injected, of which number of cells is equal to the total number of 4 injections. The single or progressive injection group was euthanized 4 weeks after injection or 1 week after the 4 th injection, respectively. Implant volume was measured by water displacement method, then implant was evaluated by histological and immunohistological analysis 43 . VML injury model and cells injection. The VML injury model was created in nude rats (male, 12-14 weeks old, Charles River Laboratory) 36 . Under anesthesia, the fascia was separated from the TA muscle, and then approximately 30% of middle third TA muscle was excised. The excised TA muscle weight was estimated by following using the equation: y (g) = 0.0017 × body weight (g) -0.0716. In addition to the TA muscle excision, extensor digitorum longus (EDL) and extensor hallucis longus (EHL) muscles were removed to exclude compensatory hypertrophy during muscle regeneration following TA excision. The remaining TA muscle was covered with fascia and skin was closed using sutures and surgical glue. Fibrin gels with or without cells were injected into the defect sites. In this study, 7 groups were investigated (n = 4 per group, total 28 rats); (1) normal (age-matched control), (2) no treatment (defect only), (3) multiple injection-gel only, (4) single injection-C2C12, (5), progressive injection-C2C12, (6) single injection-hMPC, (7) progressive injection-hMPC. For a single injection, 300 μ l of fibrinogen solution with cells was delivered into the defect sites with a 26-gauge needle followed immediately by an additional injection of 300 μ l of thrombin solution at the same injection site to form fibrin gel (total injection volume = 600 μ l). The volume of 600 μ l filled the defect in the TA muscle. For multiple and progressive injections, 4 cell injections were performed every week. The first injection was performed with a total volume of 300 μ l and subsequent three injections were done with a volume of 100 μ l per injection. To track the injected cells within the TA muscle, C2C12 cells were labeled with DiI (Vybrant ® Multicolor Cell-Labeling Kit, Thermo Scientific, Inc.) for the 1 st and 4 th injections and co-labeling of DiI and GFP was used to identify the injected C2C12 cells within the TA defect. In vivo functional analysis of TA muscle. To examine restoration of muscle function, tetanic force of TA muscle was measured at 4 weeks after surgery (1 week after 4 th injection in the multiple and progressive injection). Anterior crural muscle in vivo mechanical properties were analyzed with the dual-mode muscle lever system (Aurora Scientific, Inc., Mod, 305b, Aurora, Canada) 36 . The foot to be measured was attached to a foot plate and knee and ankle were positioned at 90-degree angle. 
Tetanic analysis was performed by stimulating the peroneal nerve using a Grass stimulator (S88) at 100 Hz with a pulse-width of 0.1 msec and 10 V. Muscle force (N Kg −1 ) was calculated by peak isometric torque per body weight (n = 4 per group). After the functional assessment and harvesting of TA muscle, the retrieved TA muscle tissue was weighed and processed for histological analysis. The percentage of muscle mass was calculated by the ratio of the weight of injured TA muscle to that of contralateral TA muscle (n = 4 per group). Histological and immunofluorescent analysis. The harvested TA muscles were freshly frozen in liquid nitrogen immediately for cryo-embedding or fixed with 4% paraformaldehyde for paraffin embedding. For histological evaluations, H&E staining and MT staining were performed on the tissue sections. To evaluate TA muscle thickness of each group, three different regions in the middle of the TA muscles in the H&E images were chosen and the thickness was measured (n = 4 of each per group) in blinded fashion. For immunostaining, the cryosections (7 μ m) were fixed with 4% paraformaldehyde. Paraffin sections (5 μ m) were deparaffinized and processed for antigen retrieval with the heat-induced process using sodium citrate buffer. Tissue sections were incubated with methanol at − 20 °C for 10 minutes, acetone at room temperature for 7 minutes or 0.2% Triton X-100 for 30 minutes at room temperature for permeabilization, and then blocked using a serum-free blocking agent (X090930-1; Dako, Carpentaria, CA) for 1 h at room temperature. All antibodies were diluted with antibody diluent (S302283-1; Dako), and the blocked sections were incubated with primary antibodies at room temperature for 1 h or incubated at 4 °C for overnight. Secondary antibodies such as Alexa 488-conjugated anti-mouse or anti-rabbit antibody (A11017; A11070; 1:200 dilution; Invitrogen, Eugene, OR), Texas Red-conjugated anti-mouse, anti-rabbit, or anti-rat antibody (TI-2000; TI-1000; TI-9400; 1:200 dilution; Vector Labs, Burlingame, CA), or Cy5-conjugated anti-mouse or anti-rabbit antibody (A10524; A10523; 1:200 dilution; Invitrogen) were treated at room temperature for 40 min. Tissue sections were then mounted with VECTASHIELD Mounting Media with DAPI (H-1200; Vector Labs) and analyzed by fluorescent imaging using an upright (LEICA) and confocal microscope (Olympus).
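Two of the quantitative readouts in the methods above are simple formulas: the mass of TA muscle to excise for the ~30% defect follows the regression y (g) = 0.0017 × body weight (g) − 0.0716, and the functional and mass outcomes are ratios (peak tetanic response normalized to body weight, and injured TA weight as a percentage of the contralateral TA). The sketch below only restates these calculations; the example body weights, torque value and the unit moment arm are assumptions for illustration, not values from the study.

```python
def estimated_ta_weight_g(body_weight_g: float) -> float:
    """TA muscle weight estimated from body weight with the regression cited above:
    y (g) = 0.0017 * body weight (g) - 0.0716."""
    return 0.0017 * body_weight_g - 0.0716

def excision_target_g(body_weight_g: float, fraction: float = 0.30) -> float:
    """Mass to excise for the ~30% volumetric muscle loss defect."""
    return fraction * estimated_ta_weight_g(body_weight_g)

def force_per_body_weight(peak_torque_n_mm: float, body_weight_g: float,
                          moment_arm_mm: float = 1.0) -> float:
    """Peak isometric response normalized to body weight (N kg-1).
    The moment arm used to convert torque to force is an assumption here and
    should come from the lever-system geometry."""
    return (peak_torque_n_mm / moment_arm_mm) / (body_weight_g / 1000.0)

def muscle_mass_percent(injured_ta_g: float, contralateral_ta_g: float) -> float:
    """Injured TA weight as a percentage of the contralateral TA weight."""
    return 100.0 * injured_ta_g / contralateral_ta_g

# Illustrative numbers only (not data from the study).
bw = 400.0  # body weight in grams
print(f"estimated TA: {estimated_ta_weight_g(bw):.3f} g, excise ~30%: {excision_target_g(bw):.3f} g")
print(f"normalized force: {force_per_body_weight(2.4, bw):.2f} N/kg")
print(f"muscle mass: {muscle_mass_percent(0.45, 0.61):.1f} % of contralateral")
```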
Lumpy Skin Disease: Global and Turkish Perspectives
Lumpy skin disease (LSD) is an economically important infection, since the presence of the disease affects cattle health and the export of cattle products. It is caused by a capripoxvirus and shows characteristic skin lesions in infected cattle. The disease was first reported in Zambia in 1929. It then spread through Africa and the Middle East and recently to European countries such as Greece and Bulgaria. The first Turkish outbreak of LSD was reported in 2013 in Kahramanmaras, Turkey. Since then, many cattle have been affected and the disease has spread to farms located in different parts of Turkey. After the first outbreak, rapid diagnostic methods have been used in order to identify disease outbreaks. Control and eradication programs have been applied by the Ministry of Food, Agriculture and Livestock of the Republic of Turkey, including a contingency plan, culling and compulsory vaccination.
Etiological agent
Lumpy skin disease virus (LSDV) is a double-stranded DNA virus in the capripoxvirus genus of the subfamily Chordopoxvirinae within the family Poxviridae. It is an enveloped virus with a genome of about 150 kbp coding for 147 genes. The genome is conserved and shares 97% similarity with the goatpox and sheeppox viruses. Cross-reactions between poxvirus species have been well established [6]. Therefore, these viruses cannot be distinguished by serological tests; they can only be differentiated by molecular analyses. Results of the molecular analyses have indicated that LSDV is closer to goatpox than to sheeppox virus. LSDV is considered to comprise a single serotype, the Neethling virus, which is used as the reference strain [6]. LSDV is very resistant to environmental conditions. It may remain stable for 18 and 35 days in dried hides and skin lesions, respectively. It is sensitive to sunlight and lipid solvents. It can be inactivated by heating at 55 °C for 2 hours or at 65 °C for 30 minutes [7]. In addition to the characteristic skin lesions seen in cattle infected with LSDV, abortions, infertility, emaciation, decreased milk and meat production, and severe damage to hides have been observed, causing significant economic losses as well as trade restrictions resulting from notification of the disease [9].
Diagnosis
The clinical appearance of the skin lesions raises suspicion of LSD. However, laboratory analyses are necessary for a definitive diagnosis. Electron microscopy, virus isolation, and serological and molecular tests have been used for the laboratory analysis of LSDV [10]. The main problem for LSD diagnosis is differentiating infected from vaccinated animals, as well as detecting subclinically infected animals. Serological tests are not sufficient to do that [11]. New diagnostic tests are necessary to differentiate vaccinated animals from infected ones and to detect subclinically infected animals. LSDV can grow in cell cultures and embryonated chicken eggs. Lamb testis cells and bovine epidermal cells are generally used to culture LSDV, with cytopathic effects appearing 7 days after inoculation with the skin and blood of infected animals. It can also be adapted to VERO cells [12,13]. For serological analyses, agar gel immunodiffusion, ELISA, the indirect fluorescent antibody test (IFAT), virus neutralization (VN) and western immunoblotting are used [13]. Antibodies to LSDV can be detected by the VN test 21 days after infection [9]. Antibodies to LSDV were detected by ELISA in 56% and 11.1% of clinically infected and fevered cows, respectively [10].
The disadvantage of serological tests is that vaccinated animals cannot be differentiated from infected animals, nor can LSDV be distinguished from other poxviruses [9][10][11]. PCR and real-time PCR have been used for molecular detection of LSDV in samples taken from skin lesions, blood, saliva, milk and semen [10,11,[14][15][16][17][18]. Viral DNA can be detected in the skin lesions by PCR for 42 days [9] and 92 days [14] after experimental infection. Primers targeting the G-protein-coupled chemokine receptor (GPCR), rpo30, p32 and ORF 132 genes are frequently used in PCR [13,14]. Probe-based real-time PCR has also been developed. A real-time PCR has been assessed in order to detect and differentiate capripoxviruses [11]. Recently, a real-time PCR was developed to differentiate vaccine and field strains [19].
Epidemiology
LSD is a disease of cattle (Bos taurus and Bos indicus) and buffaloes. However, it has been shown that wild ruminants such as giraffe, gazelle and antelope are susceptible to LSDV infection [2]. Animal-to-animal transmission by close contact is minimal. Arthropod vectors play a major role in the transmission and spread of LSDV. Transmission by the Aedes aegypti mosquito for LSDV [20] and by Stomoxys calcitrans for SPPV has been reported [21]. Horn flies, horse flies and midges have also been reported to transmit the virus. Novel evidence on the role of hard ticks has been found [22]. The virus can be transmitted by intradermal and intravenous injection. Therefore, iatrogenic transmission through injections and other applications occurs, and humans play an important role in spreading the virus [2]. Infected animals may harbour the virus in the skin lesions for up to 39 days post infection [14]. LSDV has been found in cutaneous lesions, saliva, nasal discharge, milk, semen, muscles, and hides. Although there is no report of LSDV being transmitted through contact with body fluids, these secretions play an important role in spreading the virus to the environment [2,9]. Dogs, cats and wild carnivores may play a particular role in spreading the virus by carrying and eating dead animals (personal comment); this point needs to be investigated. Transport of subclinically and clinically affected animals to other parts of the country is another risk for spreading the virus to different places [3]. Lumpy skin disease was first seen in Zambia in 1929 [2]. It then occurred in Botswana and Zimbabwe between 1943 and 1945, indicating the infectious behavior of the disease. In 1949, in South Africa, about 8,000,000 cattle were affected by LSD [2,3,23,24]. After 1956, LSDV spread across most of the African continent as well as Madagascar and remained in Africa until 1986 [23]. It then spread to Egypt in 1989, where LSD cases occurred in nearly the whole country and 1499 deaths were reported [3,10,23] [2,3,10]. After these outbreaks and the Syrian conflict, the LSD virus spread to Middle East countries and Turkey. The first case of LSD in Turkey was seen in Kahramanmaras in August 2013 [5]. LSD cases have now been reported in Iran, Azerbaijan, Georgia and Balkan countries such as Greece, Bulgaria and Macedonia, posing a high risk to bordering countries in Europe [2,3,25,26] (Figure 3). LSD outbreaks in Turkey started in August 2013 [5] in Kahramanmaras, after the outbreaks seen in Lebanon, Jordan and Israel in 2012 [1][2][3]. The disease then spread quickly to neighboring localities, and 18 outbreaks were reported (OIE data) [24,27,28] (Figure 4). In 2014, the number of outbreaks was 784 [29,30]. In 2015, LSD spread all over the country, with 510 outbreaks (OIE data) [4,31].
In 2016, the numbers of outbreaks and cases decreased after the control measures taken by the Ministry of Food, Agriculture and Livestock of the Republic of Turkey (OIE data) [4,31]. According to the data submitted to the OIE, the highest morbidity and mortality rates in Turkey were 68% and 30%, respectively [5,29]. The lethality rate was about 45%. The morbidity and mortality rates were higher in 2013 but decreased after the disease became notifiable and after vaccination started in 2014. The highest numbers of cases were seen in the vector season, between July and November [5,29].
Control and Eradication
Three control measures have been applied, and scientific data indicate that these measures are effective in controlling LSD [31]. These are:
I. Removal of infection by disinfection, vaccination, vector control, detection of subclinically infected animals, carcass disposal and culling of infected animals.
II. Movement control by quarantine, animal movement control, zoning and control of imported animals and their products.
III. Networking and information by raising awareness and education, epidemiological investigations, rapid diagnosis and rapid notification.
Among these control measures, removal of infection (I) plays the major role in controlling LSD [31]. Vaccination seems to be the most effective way to control LSD at present, together with disinfection, vector control, carcass disposal, culling of infected animals, quarantine, animal movement control and rapid notification. In addition, there is a need to produce new vaccines which are effective and safe.
Control of LSD in Turkey
Because of the antigenic similarities between the capripoxviruses, sheep poxvirus has been used to immunize cattle in order to control LSDV infections [32-34]. Compulsory vaccination of cattle started in Turkey in 2014 in high-risk areas close to where the first outbreaks occurred. In this vaccination regime, 1 dose of sheep poxvirus vaccine was administered per animal. This vaccine was prepared from the local sheep poxvirus after 65 passages. The vaccine contains at least 10^2.5 TCID50 of virus per dose and is still currently in use against LSD in Turkey. In 2014, 1,590,757 cattle were vaccinated in the East and South East Anatolia regions [35]. Animal movement and trade were also restricted in those areas. Biosecurity measures, quarantine (30 days), culling and animal movement restriction were put into force [4]. In 2015, all cattle countrywide were planned to be vaccinated. In this vaccination regime, a triple amount of sheep poxvirus vaccine was administered per animal. Host-specific vaccines should also be developed and used to control LSD outbreaks. In addition, from the experience in Turkey, the timing of vaccination (before the vector season starts) and farmer support affect the success of vaccination.
Conclusion
LSD is a widespread disease and a global threat for the cattle industry. Vaccination seems to be the most effective way to control LSD, in combination with biosecurity, vector control, quarantine and animal movement control. However, spread of the vaccine strain to the environment and the occurrence of possible mutations in the vaccine strain should be considered. There is also a continuous need to produce vaccines from local LSDV strains and to determine the optimal vaccine dose. The vaccine should be effective and safe.
Experiences and suggestions
i. Vaccination is the most important preventive measure along with biosecurity.
ii. A triple amount of attenuated sheep poxvirus vaccine seems to work, but higher doses need to be evaluated.
iii. The rate of vaccination coverage (up to 100%) and the timing of vaccination (before vector activity starts) are important.
iv. A DIVA vaccine is urgently needed.
v. Clinical and field surveys must be performed periodically.
vi. Early diagnosis, quarantine and culling are other control measures which should be applied urgently.
vii. Control of animal movement is very important, especially for subclinically infected animals and animals in the incubation period.
viii. Vector control should be applied in the field and even in aircraft and ships.
ix. New reservoirs and animals (dogs, cats and wild carnivores) possibly playing a role in the spread of the virus need to be investigated.
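Assuming the lethality figure cited above is the usual case-fatality ratio (deaths expressed relative to affected animals), it follows directly from the reported Turkish morbidity and mortality rates; the short check below only illustrates that arithmetic.

```python
def case_fatality_percent(mortality_percent: float, morbidity_percent: float) -> float:
    """Deaths as a percentage of affected animals (mortality relative to morbidity)."""
    return 100.0 * mortality_percent / morbidity_percent

# Peak figures reported to the OIE for Turkey: morbidity 68%, mortality 30%.
print(f"lethality ≈ {case_fatality_percent(30, 68):.0f}%")  # ≈ 44%, consistent with the ~45% cited
```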
Humoral and Cellular Effects of Stress-An Extensive Model System The influence of stress on the immune system of the common carp (Cyprinus carpio) was studied by measuring leukocytes levels using flow cytometry and mRNA immune components by real time qPCR. Acute and chronic oxidative stresses were generated by different regimes of exposure of carp to environmental air. In acute stress, induced by single air exposure, the pro-inflammatory cytokines (IL1β, IL6 and TNFα) and the down-regulatory ones (IL10 and TGFβ) showed significant simultaneous elevations (515, 147, 373, 300 and 198% respectively). Following chronic stress (multiple air exposures) however, a drastic decline of 80%, in macrophages/monocytes, B-cells likes and plasma-cells like, occurred in peripheral blood. No statistical changes in IL6 and TNFα, as well as in IgM and C3s mRNA levels could be shown during this experiment. CD4 mRNA decreased up to 6% in the 2nd week of chronic stress and elevated only to 55% at the 3rd week Vs a temporal decline of up to 22% in CD8a mRNA at the 2nd week. The regulatory cytokines (IL10, FoxP3 and TGFβ) as well as the pro-inflammatory ones (IL1β and IL17) decreased significantly up to 0.06, 0.2, 5, 6 and 4% respectively, at the second week before being restored to normal at the 3rd week. Moreover, a persistent decrease, up to null levels, in the cytokines IFNγ2b, IL12b and IL8 was also revealed. These downregulations were suggested as a result of the impaired Th1 and/or cytotoxic cell function and, to a certain degree, the leukocytes mobilization. The above findings show that in contrast to the detrimental effects of chronic stress, in which cells and functions of acquired immunity were partially or completely impaired, the acute stress was found rather beneficial and in line with the known ephemeral “fight and flight” response. Stressors were reported to exaggerate adverse effects like sensitivity to illness, autoimmunity, shrinking of the thymus and spleen or other lymphatic organs, changes in the number and distribution of white blood cells, or appearance of bleeding or ulcers (Harper and Wolf, 2009). Stress increases immunosuppressive pathways and increases proinflammatory cytokines (Tort et al., 1996;Douxfils et al., 2011;Milla et al., 2010;Talbot et al., 2009;Petrovsky, 2001). These stress effects impact both the innate and adaptive immune system (Øverli et al., 2006;Mommsen et al., 1999), mainly following considerable decrease in lymphocyte numbers (Engelsma et al., 2003), plasma IgM concentration (Nagae et al., 1994), a selective suppression in phagocytosis and complement activities in head kidney and blood. As a consequence, an increase in susceptibility to infection occurs in teleost fish (Pickering, 1984;Law et al., 2001;Small and Bilodeau, 2005;Mauri et al., 2011). The effect of stress depends on the duration and intensity of the stressor. Mild and/or acute stressors enhance immune responses while, severe or long-term stressors can be immunosuppressive (Demers and Bayne, 1997;Sunyer and Tort, 1995;Harris and Bird, 2000;Raberg et al., 1998). In handling acute stress, an increase in C3, lysozymes (Demers and Bayne, 1997;Sunyer and Tort, 1995) and leukocytes (Maule and Schreck, 1991) were reported in head kidney. In chronic stress however, there is a decrease in C3 and lysozyme levels (Sunyer and Tort, 1995) as well as in immune cell numbers and functions (Verburg-Van Kemenade et al., 2009). 
It was assumed that this dual response depends on the intensity and duration of the stressor and that these processes are controlled by different hormonal and neuronal paths (Tort, 2011;Nardocci et al., 2014). The stress mechanism has been mostly studied in higher vertebrates and much less in fish. In mammals, immune and inflammatory responses are followed by the activation of the stress hormones that systemically inhibit the T-helper-1(Th1) pro-inflammatory responses but potentiate a Th2 shift which is followed by downregulation of some cytokines involved in cellular immunity (TNF-α, IFN-γ, IL-2, IL-12) as well as production of cytokines belonging to other Th-cell subsets (IL-4, IL-10, IL-13, TGF-β) (Elenkov and Chrousos, 1999). Furthermore, it has been shown that stress induces changes in cell numbers and in their traffic patterns. Substantial differences in the leukocyte distribution in different body compartments have been observed in carp (Wojtaszek et al., 2002). It was stated by these authors that such a situation may lead to ineffective immune protection due to decreased leukocyte recruitment at the affected sites. The activation of leukocytes is related to the activation of the sympathetic nervous system and to the release of catecholamines (Tort, 2011;Dhabhar, 2002). Blood cells, including both erythrocytes and leukocytes, are mobilized as part of the acute stress response. The changes in blood leukocyte numbers are characterized by a significant reduction in the numbers and percentages of lymphocytes and monocytes and by an increase in the numbers and percentages of neutrophils (Dhabhar, 2002). Several studies in fish, support stress mechanism as reported in mammals (Wojtaszek et al., 2002;Dhabhar, 2002;Cortés et al., 2013), while the participating immune cells and humoral processes are still vague. Therefore, in the present work we describe the influence of air exposure acute and chronic stresses (Melamed et al., 1999;Dror et al., 2006) on different immune components of spleen, blood, kidney and head kidney in the common carp. Hence, we studied the participation of most of the known components in the fish immune system in acute and chronic stresses by examining changes in the levels of: (1) Immune cell groups of small and large lymphocytes, Polymorphonuclear (PMN) cells and monocytes/macrophages during stress treatments; (2) CD4 and CD8a cells which represent the majority of cells involved in immune processes (Todaa et al., 2011;Annunziato and Romagnani, 2009;Wan and Flavell, 2009); (3) IgM and the complement C3s (a fish variant of mammal C3) which are considered as significant agents of the innate immunity (Nakao et al., 2000;Brattgjerd and Evensen, 1996;Kaattari and Irwin, 1985); (4) The pro-inflammatory cytokines IL1b, IL6 and TNFa (Secombes and Fletcher, 1992); (5) the inflammatory cytokines related to Th1 cells (IFNγ2b and IL12b) and Th17 cells (IL17) (Du et al., 2014;Zou et al., 2005;Wang et al., 2014); (6) IL10, TGFβ and FoxP3 (Wei et al., 2013;Wang et al., 2010;Kohli et al., 2003) regulatory cells cytokines; (7) The chemoattractant CXCL8 that acts similarly to the mammalian IL8 in mobilizing macrophages/neutrophils/leukocytes to the target area (Van der Aa et al., 2012). Animals Common carp (150±30 gr.) were obtained from a local fish farm (Mishmar Hasharon, Israel). The fish were acclimatized to laboratory conditions for at least one month before experiments. 
Fish were maintained in containers (105×105×80 cm) with air bobbling and recirculating fresh water at 24±2°C, in a 12 h. light/12 h. Dark cycle and fed a commercial diet once a day. Two weeks before the experiment, the fish were kept into net cages (75×28×48 cm), 2 fish in each one. The cages were maintained in water tanks (350×300×100 cm), equipped with a biological filter and continuous flow of water and air. Acute Stress A group of 8 fish was exposed for 10 min. to the air and then immersed for 30 min. in water, after three cycles of exposure/immersion, the fish were left for 24 h in the water (Melamed et al., 1999;Dror et al., 2006) and then anaesthetized by immersion in 0.01% benzocaine/water. Their spleens were collected into liquid Nitrogen for RNA extraction. In order to minimize handling stress, all stress treatments were done into the net cages i.e., the net with the fish was exposed to the air and immersed into the water. Chronic Stress The fish were similarly treated as in the acute stress group, but the exposures to the air and immersions, as above, were repeated three times a week for three weeks. Twenty four hours following the last air exposure, performed at the 9th, 16th and 23th days, groups of 8 fish each were anaesthetized by immersion in a 0.01% benzocaine solution and their spleens were collected into liquid Nitrogen for RNA extraction. Gene Expression Quantification Total RNA was extracted from each spleen using 1ml TRI reagent according to the manufacturer's instructions (Geneall Biotechnology, Seoul, Korea). RNA quantification was carried out using a NanoDrop ND-2000c spectrophotometer (Thermo Scientific). Total RNA quality was monitored by running samples on a 1.3% agarose gel. Adequate samples were used for complementary DNA (cDNA) synthesis which was carried out with the FastQuant RT Kit (with gDNase) (Tiangen, Beijing, China) and served as a negative control for quantitative PCR (qPCR). Part of the cDNA was used for a standard curve in each qPCR experiment and the rest of the material was diluted to 100 ng µL −1 . qPCR amplification was carried out in 20 µL reaction volume containing 5 µL of diluted cDNA (500 ng) used as a template for qPCR cytokine quantification, 10 µL FastFire qPCR PreMix (Syber Green) (Tiangen, Beijing, China) and 5 µL primer, resulting in a final concentration of 0.1 µM. All immune component samples were run in triplicate while standards (standard curve of each cytokine and of the RNA negative control following gDNase) in duplicate in the CFX96 (Bio Rad) following the manufacturer's conditions, as follows: Initial denaturation for 1 min, 95°C, followed by 40 cycles of 5 sec denaturation at 95°C and 15 sec for annealing/extension at 59°C to 62°C (Table 1). The melting curve in each experiment was used to examine qPCR and primer quality. Results of qPCR experiment were accepted if: (1) There was no contamination of dimmers or other material; (2) the efficiency of the qPCR reaction was 90 to 109%, (3) the R line of the reaction was 0.98 to 1. PCR Qualification Amplification was performed in a 20 µL of a reaction volume containing 10 µL GoTaq Green Master Mix (Promega, Madison, WI, USA), 5 µL primer (in a final concentration of 0.1 µM) and 5 µL diluted cDNA (500 ng). This solution was used as a template to synthesize immune components in each stress treatment. Samples were run in the UNO II (Biometric) as follows: Initial denaturation for 5 min at 95°C, followed by 30 cycles of 30 sec denaturation in 95°C, 30 sec annealing at 60°C and 30 sec. 
extension at 72°C, ending with 72°C for 10 min. Samples were loaded on an 1.3% agarose gel and visualized by a MiniLumi Imaging System (DNR Bio Imaging Systems). Primer Design Primers were designed by the NCBI tool and purchased from Integrated DNA Technologies, Leuven, Belgium (IDT). Each primer was analyzed by an IDT Oligo Analyzer. Running conditions of each primer were analyzed and only those which showed negligible dimmer, high PCR efficiency (90-109%) and R≥0.98. were used (Table 1). Data Analysis All experiments were analyzed by the CFX96 (Bio-Rad) software. Ratio production of immune components between stress conditions to control was expressed as fold changes. Cq was normalized to gene reference 40S rRNA and analyzed according to the Pfaffl and Livak method (Pfaffl, 2001;Livak and Schmittgen, 2008) by correcting the efficiency of each primer at stress relative to control. Cell Separation About 1 mL blood was removed from the caudal vein of each fish by a heparinized syringe and diluted in 9 mL Dulbecco's Modified Eagle Medium (DMEM) solution (Biological Industries, Israel). The spleens, kidneys and head kidneys of six fish from each treatment were removed following anesthesia from the groups of control, unstressed, acutely stressed and chronically stressed fish after one, two and three weeks. Organs were minced through a net with a 10 mL syringe piston into DMEM solution. Leukocytes were separated on Ficoll-Paque TM plus (GE Healthcare). After three washes, cells were used for Flow Cytometry (FACS) and for May-Grunwald/Gimsa/right staining and identification. FACS Cells were incubated in PBS solution containing monoclonal mouse anti carp IgG (produced in our lab), 0.1% sodium azide and 2% Bovine Serum Albumin (BSA) (Sigma) for 30 min at 4°C, were washed twice and incubated in PBS with FITC-goat anti mouse IgG (Sigma) for 30 min at 4°C, were washed twice and kept in a PBS solution containing 0.1% sodium azide, 2% BSA and 0.6% paraformaldehyde, at 4°C. Cell analysis was performed on a flow cytometer, FACSCalibur (Becton Dickinson) equipped with a 488 nm cooled argon-ion laser. Green fluorescence was collected through a 520-530 nm bandpass filter. About 30,000 cells within the gated region were identified. Results were analyzed by the FlowJo software (FlowJo, LLC, Ashland, Or, USA). Cell Staining Slides were stained as follows: (1) Fixed for 3 min in methanol and dried by air; (2) Immersed for 20 min in diluted May-Grunwald solution (Sigma) (1:1 in methanol), then were washed for 1 min in phosphate buffer pH 6.3, 0.01 M (PB) and dried by air; (3) Immersed for 30 min in diluted Giemsa stain (Sigma) 1:3 in PB and then were washed for 6 min in PB and dried by air; (4) Immersed in Wright stain (Sigma) 200 mg/40 mL methanol for 20 min and then washed for 15 min in PB. Cells were observed and counted by axioimager.Z1 microscope (Zeiss). Follow-up of Blood Leukocyte Profile in Stressed Fish We used a group of 4 fish to follow changes in their individual peripheral blood leukocyte profiles throughout the stress treatments. Therefore, blood control samples (1 mL each) were taken from the caudal vein of each fish, with heparinized syringe, before stress treatments. Two weeks later, the fish were treated for acute stress, as detailed above and 24 h later, blood samples were taken, as above, from each fish. Two weeks later, the fish were treated for chronic stress during 3 weeks, as detailed above and blood samples were taken at the end of each week. 
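The efficiency-corrected relative quantification described under Data Analysis above (Pfaffl method, with Cq values normalized to the 40S rRNA reference) reduces to a single ratio. The sketch below uses illustrative efficiencies and Cq values rather than the study's actual measurements.

```python
def pfaffl_ratio(e_target: float, cq_target_control: float, cq_target_treated: float,
                 e_ref: float, cq_ref_control: float, cq_ref_treated: float) -> float:
    """Efficiency-corrected relative expression (Pfaffl 2001).

    E is the amplification efficiency expressed as a factor per cycle (2.0 = 100%),
    typically derived from the standard-curve slope as E = 10**(-1/slope).
    """
    delta_target = cq_target_control - cq_target_treated
    delta_ref = cq_ref_control - cq_ref_treated
    return (e_target ** delta_target) / (e_ref ** delta_ref)

# Illustrative values only: a target cytokine versus the 40S rRNA reference after acute stress.
ratio = pfaffl_ratio(e_target=1.96, cq_target_control=27.8, cq_target_treated=25.4,
                     e_ref=2.01, cq_ref_control=16.2, cq_ref_treated=16.1)
print(f"fold change vs control ≈ {ratio:.2f}")
```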
Leukocytes from each blood sample were then separated on a Ficoll gradient and used for FACS evaluation and for cell staining as detailed above. FACS-Determination of Cell Groups Blood samples of 3 fish were taken as above and leukocytes were separated on Ficoll gradient. Leukocytes sorted by FACS ARIA III (BD Bioscience) to 4 main FACS gated groups (Fig. 1). Cell sample of each leukocytes group was transferred to slides by cytocentrifugation (Elliot-Shandon, Recyclab), stained as above for microscopy and FACS identification of each gated group. Identification of Macrophages/Neutrophils Leukocytes were incubated in a solution of 200 µL PBS containing 2% Hepes, 0.2% BSA, 10 7 FITC-Staphylococcus albus and 50 µL carp inactivated serum, for 1 h at 28°C. The reaction was stopped by adding cold PBS. Fluorescence of phagocytosing cells in the analyzed gates in flow cytometry was examined by FACSCalibur (Becton Dickinson) and ImageStream (Merck Millipore). Statistical Analysis The acute stress results were tested for significance by F and T tests and those of chronic stress were analyzed by a one way ANOVA followed by Bonferroni and Tamhane Post Hoc Tests. Results Cell sorting and staining of the different cell groups showed that: Cells of group A consisted mostly in small lymphocytes; of group B in medium and large lymphocytes; of group C in PMN cells and of group D in macrophages/monocytes (Fig. 1). Neutrophils and macrophages were identified by phagocytosis of marked Staphylococcus albus, flow cytometry and cell staining. B-cell like and plasma-cell like were identified by staining with FITC-bounded monoclonal mouse anti carp IgG and fluorescence examination using FACSCalibur and ImageStream. Leukocyte levels showed high variability between individuals which disguised stress influence (Fig. 2). Consequently, the influence of stress was also studied by following changes in peripheral blood leukocyte levels in 4 carp throughout different stress treatments (Table 2). In acute stress, only the follow up of peripheral blood leukocytes levels showed a significant decrease in small lymphocytes and B-like cells (~10%, ~50% respectively) (p≤0.05) ( Table 2). In chronic stress, by sampling 8 carps, leukocyte levels throughout stress treatments did not show significant changes in ANOVA test in the kidney and the head kidney except a decrease of 15% of B-cell like at the 3 rd week of chronic stress in the kidney. On the other hand, in the spleen and the blood, macrophages/monocytes decreased significantly up to 50% in the blood by one way ANOVA test and in the spleen by trend test. Moreover, B-cell-like and plasmacell like decreased significantly, as evaluated by trend test, up to 50% in the blood (p≤0.05) (Fig 2). Moreover, trend test indicated that PMN cells levels rose slightly and permanently during weeks 1-3 of chronic stress (R = 0.998) in head kidney (Fig. 2). On the other hand, by following changes in leukocyte profiles of peripheral bloods of 4 individuals throughout stress treatments resulted in a drastic decline of 70-80% of macrophages (p≤0.05, in one way ANOVA) (table 2). In a similar way, B-like lymphocytes and plasma-like cell levels, decreased significantly by 80% in the blood (Table 2) at week 2 and 3 of chronic stress (p≤0.05, in one way ANOVA test). 
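The statistical workflow described above (one-way ANOVA across the chronic-stress time points, followed by post-hoc pairwise comparisons) can be reproduced with standard tools. The sketch below uses made-up leukocyte percentages and Bonferroni-adjusted pairwise t-tests as a simple stand-in for the Bonferroni/Tamhane post-hoc tests reported by the authors.

```python
from itertools import combinations
from scipy import stats

# Illustrative leukocyte percentages (one value per fish) for control and the three
# chronic-stress time points; the real data are the FACS-gated percentages described above.
groups = {
    "ctrl": [42.1, 39.5, 44.0, 40.8, 43.2, 41.0],
    "csw1": [38.0, 35.4, 37.2, 39.9, 36.1, 38.8],
    "csw2": [30.2, 28.9, 33.0, 29.5, 31.8, 30.0],
    "csw3": [31.0, 34.2, 32.5, 30.1, 33.8, 31.9],
}

# One-way ANOVA across all groups (chronic-stress analysis).
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Bonferroni-corrected pairwise comparisons as a simple post-hoc step.
pairs = list(combinations(groups, 2))
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: p = {p:.4f}, Bonferroni-adjusted = {min(1.0, p * len(pairs)):.4f}")
```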
Results of immune components revealed that in acute stress, the level of the proinflammatory cytokines, IL1β, IL6 and TNFα, showed a significant increase of 515%, 147%, 373% Vs control, respectively (p≤0.05), as well as that of the down-regulatory ones, IL10 and TGFβ, that showed a significant increase by 300 and 198%, respectively (Table 3 and Fig. 3, first and second wells). The level of the other components mRNAs: IL8, IgM, IFNγ2b, FoxP3, C3s and the cell markers CD4 and CD8α, showed no significant changes, except a slight insignificant decrease in C3s mRNA (Table 3 and Fig. 3, well 2). In chronic stress, cytokine mRNA levels of IL6, TNFα, C3s and IgM showed no significant changes compared to the control throughout the whole treatments ( Fig. 4A and 4D) in spite of some fluctuations were seen especially in C3s levels (Fig. 3). IL1β, TGFβ and CD8a mRNA decreased respectively to 6%, 5% and 22% levels of control at the second week, after 7 regimes of stress. At the third week, however, after 10 regimes of stress, they returned to control levels (Fig. 4A, 4E and 4C), except IL1β which increased by 2.7 times above the control level (p≤0.05) at returning to homeostasis. IL12b, IFNγ2b mRNA decreased dramatically to zero levels throughout the whole chronic stress time and their levels did not recover even after three weeks of stress (Fig. 4B). IL8 and CD4 mRNA alike IL12b and IFNγ2b considerably decreased at the first week up to 1 and 6% and rose to 25%, 55% levels of the control, respectively, at the third week ( Fig. 4A and 4C). IL10 and FoxP3 mRNA decreased sharply (p≤0.05) in the second week to the level of 0.06 and 0.2% of the control, respectively. In the third week of chronic stress, after 10 regimes of air exposure, their mRNA amounts returned to control levels ( Fig 4E). It is noteworthy that the evaluation of IL17mRNA was quantified by PCR instead of real time qPCR because of difficulties in selecting proper primers. IL17 mRNA decreased following acute stress to 4% and at the first week of chronic stress to almost zero levels. However, its levels rose from the second week, reaching control levels in the third week of chronic stress (Fig. 3). 93±0.25* *,p≤0.05 in one way ANOVA. Each result was the mean of 4 followed individual fish ± SEM. The results represented changes in the percent of leukocytes following five different stress treatments. The cell percentage was calculated from 30000 cells identified by FACSCalibur in the gated area and analyzed by the FlowJo software. Cell type was identified by cell sorting, binding of monoclonal mouse anti carp IgG to leukocytes, phagocytosing FITC-Staphylococcus albus and by cell staining. ctrl, control; as, acute stress; csw1, one week of chronic stress; csw2, two weeks of chronic stress; csw3, three weeks of chronic stress. Fig. 2. Percent of leukocytes in carp lymphatic organs following chronic stress treatments: *, one way ANOVA p≤0.05; t*, a significant trend of elevation/decrease of cells ≤ 0.05. The results represent the mean of 6-8 carp ± SEM. Carp were treated in chronic stress along three weeks. At each week leukocytes from lymphatic organs were separated. Cell percent levels were calculated from 30000 cells identified at the gated area by FACSCalibur and analyzed by the FlowJo software. ctrl, control; csw1, one week of chronic stress; csw2, two weeks of chronic stress; csw3, three weeks of stress Fig. 3. Comparative cytokine levels in common carp spleen following different stress treatments. 
Cytokines were produced from mixed 500 ng cDNA of 8 fish by PCR amplification and loaded on 1.3% agarose gel with TBE running solution. 1, control; 2, acute stress; 3, one week of chronic stress; 4, two weeks of chronic stress; 5, three weeks of chronic stress Fig. 4. Immune component levels during chronic stress. *, p<0.05 in one way ANOVA. Each result was a mean of 8 carp spleens ± SEM, Component levels were measured by qPCR after three air exposure regimes per week. Results were normalized to 40S rRNA and the component ratio to the control was calculated by the ∆∆Cq method. ctrl, control fish; csw1, one week of chronic stress; csw2, two weeks of chronic stress; csw3, three weeks of chronic stress *, T test significance, p≤0.05. Each result represents a mean of 8 carp ± SEM. Acute stress was induced by a single regime of air exposure. Spleen cytokine levels were evaluated 24 h. after air exposure by qPCR amplification. Results were normalized to 40S rRNA and the ratio calculated by the ∆∆Cq method. ctrl, control; as, acute stress Discussion In the present study, we followed changes in leukocytes profiles of blood and lymphatic organs and measured levels of components representing the main functions of the immune system during acute and chronic stresses in order to further elucidate the involved cellular and molecular mechanisms. The spleen was used mainly to follow changes in immune constituents in the humoral system because preliminary experiments (data not shown) revealed that the cytokines level profile in the spleen was similar to that of the blood and provided more material for gene expression experiments than the blood. In addition, the leukocyte spread of spleen and peripheral blood displayed similar patterns, while that of kidney and head kidney were different. Whereas, the lymphocytes levels were higher than those of PMN cells in the peripheral blood and the spleen, the levels of both populations were almost similar in the kidney and the head kidney (Fig. 2). In acute stress, our results are in agreement with previous studies (Barker et al., 1991;Banerjee and Leptin, 2014) showing significant increased levels of pro-inflammatory cytokines (IL1β, IL6 and TNFα), as well as of down-regulatory ones (IL10 and TGFβ) ( Table 3). It is possible that the joint elevation of these regulatory cytokines with that of the pro-inflammatory ones was involved in the shortened time of the proinflammatory response and the restoring of homeostasis. The pro-inflammatory cytokines are known to be involved in "fight or flight" response (Tort, 2011) in order to overcome ephemeral stressors. The effect of stress on fish was widely variable between individuals. Consequently, the leukocyte averages in lymphatic organs were ambiguous (Fig. 2). Therefore, a follow up of changes in leukocyte levels of peripheral blood throughout stress treatments (Table 2) elicited the question of the sampling size, or of the research tool. As a result, following changes in leukocyte levels of peripheral blood were considered as a significant parameter. In acute stress, the leukocytes levels showed significant changes only by following levels in peripheral blood. Lymphocytes decreased 10% possibly due to the significant decrease of up to 50% in the B-cell like (Table 2). In chronic stress, CD4 and CD8a mRNA levels decreased up to the second week (p≤0.05), but while CD4 mRNA remained depressed towards the third week, CD8a mRNA returned to homeostasis (Fig. 4C). 
This may explain the drastic decrease of 80% of macrophages, B-cell like and plasma-cell like in peripheral blood (Table 2). These results were in agreement with the decrease in leukocyte numbers in Oncorhynchus mykiss (Cristea et al., 2012), suppression of phagocytic and lymphocyte proliferative activities in Platichthys flesus and Solea senegalensis (Pulsford et al., 1995) and apoptosis of B cells in Cyprinus carpio (Verburg-Van Kemenade et al., 1999). Nevertheless, IL12b and IFNγ2b mRNA products (Fig. 4B) decreased to null throughout 22 days and their levels did not recover even after the third week of chronic stress. It was possible that this dramatic decrease was a result of the deleterious functions of producing IFNγ2b and IL12b mRNA in CD4 cells, especially following the impairment of Th1 cells (Wojtaszek et al., 2002;Cristea et al., 2012). Production in Th1 cells alone can't explain the zero levels of IFNγ2b. Therefore, it is suggested that additional impaired cell types like NK cells, might be involved because of the partial decline in CD8a mRNA. The sharp decrease in IFNγ2b production, macrophages levels, B-cell like and plasma-cell like amounts might explain the increased susceptibility to diseases occurring in chronic stress (Saeij et al., 2003;Small and Bilodeau, 2005;Mauri et al., 2011;Elenkov and Chrousos, 1999;Maule et al., 1989). Moreover, the improvement of CD4 mRNA amounts from 6 to 55% of control levels, which occurred between second and third weeks of chronic stress (Fig. 4C) may explain the recovery of inflammatory and regulatory functions at that time. The IL17 PCR results (produced in Th17 cells) (Fig 3) suggest that its levels decline in acute and chronic stresses but increase towards the third week as seen in the case of CD4 elevation (Fig. 4C). This finding might reveal a recovery of the inflammatory functions in progressing chronic stress. IL1β (p≤0.05) and IL6 (p≤0.06) mRNA ratios which decreased during the second week of chronic stress, reached homeostasis at the third week, whereas, TNFα mRNA (Fig. 4A) remained stable along three weeks of stress. This result was slightly different from that reported for cortisol induced chronic stress in rainbow trout (Cortés et al., 2013), which showed that TNFα and IL1β increased after 5 days. That occurred in our study in the acute stress but not in the chronic stress. This contradiction might be due to differences in experimental conditions, i.e., the use of cortisol implants versus repeated air exposure. The unchanged levels of TNFα and minor temporarily changes in IL1β and IL6 levels throughout the chronic stress (Fig. 4A), even though there was a drastic decrease in macrophages/monocytes, B-cells like, plasma-cells like (Table 2) and supposedly Th1 and NK cells, might point on additional stable proinflammatory resources. Moreover, the chemoattractant IL8 which was down-regulated along 22 days of the chronic stress (Fig. 4A) and did not relieve after the third week may explain the macrophage/neutrophil/leukocyte mobilization decline in different compartments of the body as shown by others (Wojtaszek et al., 2002). FoxP3, known to be produced by regulatory cells (CD4 cells) decreased towards the second week before being elevated to homeostasis levels at the third week as also seen by the moderate elevation of CD4 mRNA ( Fig. 4C and 4E). Other regulatory cytokines, IL10 and TGFβ behaved in different ways throughout the stress period. 
While, IL10, was down-regulated throughout the three weeks of chronic stress and slightly rose at the third week (p≤0.09), TGFβ showed considerable changes only at the second week of chronic stress and increased at the third week to homeostasis levels (Fig. 4E). This may indicate that regulatory functions may have different resources that respond in different ways to stress. In general, regulatory functions were indeed influenced by chronic stress but only for a while and eventually almost returned to homeostasis. The increase of TGFβ together with IL6 mRNA at the third week may also explain the recovery of Th17 cells as indicated by the followed up-regulation of IL17 (Fig. 3) and CD4 mRNA (Fig 4C). IgM mRNA levels neither changed in acute stress nor in chronic stress in our experimental conditions (Table 3, Fig 3 and 4D). This result was in contradiction with husbandry, confinement or crowding induced stresses findings (Varsamos et al., 2006;Nagae et al., 1994;Maule et al., 1989;Rotllant et al., 1997;Ruane et al., 1999) and to the decrease in B-cell likes and plasma-cell likes (Table 2 and trend in Fig. 2), but in agreement with other studies (Douxfils et al., 2011;Cuesta et al., 2004;Vargas-Chacoff et al., 2014). These discrepancies were also shown in our lab as a result of pollution and temperature stress (not yet published) and it might be due to a presence of inhibitor controlling IgM humoral activity. Similarly, C3s mRNA showed no significant changes in both acute and chronic stresses, although its levels fluctuated throughout the chronic stress period ( Fig. 3 and 4D). These results differ from the hemolytic findings of previous reports (Demers and Bayne, 1997;Sunyer and Tort, 1995;Mauri et al., 2011), but were in agreement with the reported hypoxia and cortisol induced stress (Douxfils et al., 2012;Eslamloo et al., 2014). However, one cannot disregard that the measured plasma ACH50 which reported above (Demers and Bayne, 1997;Sunyer and Tort, 1995;Mauri et al., 2011) is based on the sum of the protein cascade in the complement activity and not solely on the C3s mRNA production. Therefore, it is likely that during stressful events complement protein variants, stress intensity and its duration, individual variations and the presence of inhibitors, represented a possible cause of these disagreements and fluctuations in C3s levels. The involvement of some immune components in acute and chronic stresses, as discussed above, emphasizes which of these functions need further clarifications as follows: (1) The unchanged production in IgM and C3s mRNA levels in our study is not in agreement with the reported decrease in their activity in the blood (Varsamos et al., 2006;Nagae et al., 1994;Maule et al., 1989;Rotllant et al., 1997;Ruane et al., 1999;Demers and Bayne, 1997;Sunyer and Tort, 1995;Mauri et al., 2011). This discrepancy elicited the necessity to further clarify the existence of inhibitory functions during chronic stress. (2) Were the zero levels in IFNγ2b and IL12b throughout chronic stress, due only to the impairment in monocytes, NK/Th1 and B cells? (3) The unchanged levels of TNFa and almost unchanged levels of IL1b and IL6, even after a decrease up to 80% in monocytes/macrophages levels, might enable the verification of the involved cellular mechanisms in the production of these cytokines during chronic stress. (4) We need to clarify the meaning of a persistent increase of PMN cell levels in the head kidney in trend tests (p≤0.05) (Fig. 2). 
Conclusion
Based on the above findings, it can be concluded that:
• The decrease to near-zero mRNA levels of IFNγ2b and IL12b throughout chronic stress is probably due to the presence of an additional impaired population of cells producing these cytokines, besides the probable Th1 and/or NK cells
• The cells most affected by chronic stress were macrophages, B-cell-like and plasma-cell-like cells and, to a certain extent, Th1 cells and subtypes of the NK/cytotoxic cells. The decline in these cells might explain the susceptibility to diseases during chronic stress
• The decrease in IL8 mRNA levels during chronic stress reduced leukocyte mobilization. As a result, leukocyte recruitment at affected sites might be impaired
• The increase in pro-inflammatory and regulatory cytokines seems to counterbalance temporary stressors, but these cytokines stay almost unchanged throughout chronic stress
• The levels of some constituents of innate immunity, such as C3s and IgM mRNA, were unchanged following acute and chronic stresses
QUALITY ASSESSMENT OF MEASUREMENT INSTRUMENT SOFTWARE WITH ANALYTIC HIERARCHY PROCESS
Introduction
The requirements of the Measuring Instruments Directive 2014/32/EU (MID) [1] form the basis of the legislation of Ukraine on conformity assessment of measuring instruments (MI). According to the new version of the Law of Ukraine "On metrology and metrological activity" (came into force on 01.01.2016), MI intended for application in
The rules and procedures for testing MI software are established by the document [2] of the International Organization of Legal Metrology (OIML), as well as by documents and recommendations of regional metrology organizations. Testing procedures for MI software are governed by the recommendation [3] of the Euro-Asian Cooperation of National Metrological Institutions (COOMET), document [4] and guidelines [5,6] of the European Cooperation in Legal Metrology (WELMEC). At the national level, appropriate lists of national standards have been established which, in particular, give the presumption of MI compliance with the essential requirements of the TR. The analysis of the state of the regulatory framework for testing MI software at the international, regional and national levels has been the subject of previous research [7][8][9][10]. National metrology institutes and conformity assessment bodies test MI software according to pre-established methods and algorithms. From January 2016 to the present, the number of MI software tests performed in Ukraine has exceeded 500 and is constantly growing at a significant rate. These tests are based on the requirements of the national standard [11] with the additional use of the OIML D 31 [2] and WELMEC 7.2 [5] requirements. This approach contributes to the consideration of all the elements necessary to achieve the presumption of compliance of the software with the essential requirements of the TR. However, these approaches do not answer the question of the quality of software compliance assessment.
The urgency of the work is confirmed by the urgent need for conformity assessment of legally regulated MI in accordance with the requirements of national legislation, TR or European directives. National metrology institutes and conformity assessment bodies are interested in effective testing methods for MI software and in assessing the risks associated with the application of such MI software. Given this, the pressing issue is to validate the MI software test results. Literature review and problem statement The analysis of various aspects of MI software testing has been the subject of previous research of the authors [7][8][9][10]. In [7], the peculiarities of the regulatory support of MI software testing are investigated. The main stages of MI software testing and features in accordance with the requirements [2,5,6] are discussed in [8]. The main factors and algorithms for MI software testing in accordance with OIML and WELMEC requirements are considered, a universal algorithm for MI software testing is proposed in [9]. However, these studies did not analyze quality assessment indicators of MI software regarding the effect on the overall test results of MI software. In [10], the main differences are identified and the necessary elements are established to achieve the presumption of software compliance with the essential TR requirements when conformity assessment of MI. However, the methods and algorithms described in [10] make it impossible to determine the validity of the results obtained from conformity assessment of MI software. In [12], approaches to software quality requirements and software testing methods are compared, and different approaches to software quality assessment in different international standards and guidelines, in particular on issues related to the quality assessment of MI software, are determined. In [13], issues related to the validation of MI software covered by the MID [1] are considered. A methodology is presented that can be extended not only to software on MI categories that fall under the MID, but also to most other MI categories. The paper [14] also discusses the validation of MI software based on risk classes for MI software and some possible testing methods. In [15][16][17][18], a method for assessing the risks and current threats posed by MI software, including those integrated into open networks is presented. The method uses a structure and combines elements of specialized international standards and may be useful for conformity assessment bodies and industry. However, [11][12][13][14][15] do not provide a comparative analysis of the importance of the impact of specific characteristics of MI software on the overall result of the software quality assessment. The requirements of the international document [2] and the possibility of software application for local MI are not taken into account in [16][17][18]. In [19], a system architecture is considered that could eliminate the risks of general-purpose operating systems. This is achieved both through the use of custom software and control of communication between major software components and the environment. Thus, it can be concluded that the above studies did not analyze the influential indicators and results of quality assessment of MI software, and did not apply methods of their validation. Therefore, the analytic hierarchy process (АНР) was chosen to investigate such complex objects as MI software: the basic method [20,21] and its modifications [22][23][24]. 
This method allows structuring a complex decision-making problem in the form of a hierarchy in a clear and rational way, and comparing and quantifying alternative solutions. Recently, AHP has been actively used in practice in various fields of activity. The AHP mathematical apparatus is described in detail in [25]. It is therefore necessary to conduct research and identify the most influential indicators that are analyzed in assessing the suitability of MI software, with a view to improving MI software testing methods.
The aim and objectives of the study
The aim of the study is to develop approaches to improve methods of testing and conformity assessment of special MI software at the national level. To achieve this goal, the following objectives were set:
-to carry out a comparative analysis of the MI software testing results by all indicators using the chosen method;
-to determine the quality indicators of both built-in and universal computer MI software which have the greatest impact on the overall assessment results.
Materials and methods of research for measuring instrument software quality assessment
The problem of comparative analysis of the MI software testing results using AHP is solved by means of three hierarchy levels:
-the first level of the hierarchy corresponds to the aim - to define the most preferred MI software;
-the second contains the criteria (indicators) used to define the most preferred MI software;
-the third contains the specific MI software products that should be compared.
In general, the list of indicators should be such that the most comprehensive evaluation of each MI software is made. Each generalized indicator can be estimated by partial indicators contained in software documents or other available sources. For a relevant comparison, when assessing a particular MI software, it is necessary to consider all the elements compared. Therefore, they are grouped into generalized indicators, each of which is evaluated separately, and pairwise comparisons and all other stages of assessment using the AHP are performed on the basis of the generalized indicators. The main stages of comparative quality assessment of MI software based on AHP are as follows [25]:
1. Perform the following actions:
-compile a list M of the MI software products that will be compared;
-carry out an analysis of the available information about the MI software (software description, user manual, etc.);
-determine the list of indicators for comparative assessment, which should contain a sufficient number of indicators (not more than 9) in order to fully reflect all the essential features of the MI software.
The subsequent stages proceed with the further ranking of the global priorities G_n for all MI software and the definition of the MI software that has the greatest advantage, i.e., the software with the maximum value of G_n (a generic computational sketch of these AHP steps is given below). The structure of the model of quality assessment of MI software using AHP is shown in Fig. 1. The structure of links between the requirements for quality assessment of MI software according to WELMEC 7.2 [4] is shown in Fig. 2. The list of partial indicators which make up the generalized indicators K_P, K_U, K_L, K_T, K_S, K_D, K_I, and the expressions to obtain a numerical value for each generalized indicator, are determined in the following.
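To make the computational core of these AHP stages concrete, the sketch below shows the standard calculation of the priority (weight) vector, the largest eigenvalue, the consistency index and the consistency ratio from a pairwise comparison matrix (PCM). It is a generic illustration with an arbitrary example matrix, not the "AHP Competence 1.2" tool used for the calculations reported later.

```python
import numpy as np

def ahp_priorities(pcm: np.ndarray):
    """Return the priority weights, lambda_max, consistency index and consistency ratio of a PCM."""
    n = pcm.shape[0]
    # The principal eigenvector of the PCM gives the priority (weight) vector.
    eigvals, eigvecs = np.linalg.eig(pcm)
    k = np.argmax(eigvals.real)
    lam_max = eigvals[k].real
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                      # normalize weights to sum to 1
    ci = (lam_max - n) / (n - 1)         # consistency index I_c
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}[n]
    cr = ci / ri                         # consistency ratio C_d, acceptable if <= 0.1
    return w, lam_max, ci, cr

# Toy 3x3 reciprocal PCM on the Saaty 1-9 scale.
pcm = np.array([[1.0, 3.0, 5.0],
                [1 / 3, 1.0, 2.0],
                [1 / 5, 1 / 2, 1.0]])
weights, lam_max, ci, cr = ahp_priorities(pcm)
print(weights, lam_max, ci, cr)
```

Applied to a 7×7 indicator PCM, the same computation yields values of the kind reported in the Results section (λ_max, I_c and C_d).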
The numerical value of the built-in computer software characteristic K_P is determined by
K_P = Σ_{i=1}^{N_P} w_i^P · P_i,
where N_P is the total number of estimated MI software indicators (N_P = 7); P_i are the constituent estimates of the generalized indicator K_P (numerical characteristics of the partial indicators with certain weight coefficients w_i^P): P_1 - software user documentation; P_2 - MI software identification; P_3 - impact through user interfaces; P_4 - impact through transmission interfaces; P_5 - modification protection; P_6 - software modification protection; P_7 - parameter protection.
The numerical value of the universal computer MI software characteristic K_U is determined by
K_U = Σ_{i=1}^{N_U} w_i^U · U_i,
where N_U is the total number of estimated MI software indicators (N_U = 9); U_i are the constituent estimates of the generalized indicator K_U (numerical characteristics of the partial indicators with certain weight coefficients w_i^U): U_1 - documentation; U_2 - software identification; U_3 - influence through user interfaces; U_4 - impact through transmission interfaces; U_5 - modification protection; U_6 - software modification protection; U_7 - parameter protection; U_8 - software authentication and results transfer; U_9 - impact of other software.
The numerical value of the test indicator of storage devices K_L is determined by
K_L = Σ_{i=1}^{N_L} w_i^L · L_i,
where N_L is the total number of estimated MI software indicators (N_L = 8); L_i are the constituent estimates of the generalized indicator K_L (numerical characteristics of the partial indicators with certain weight coefficients w_i^L): L_1 - completeness of stored data; L_2 - protection against accidental or conscious modification; L_3 - data integrity; L_4 - authenticity of stored data; L_5 - conference keys; L_6 - recovery of stored data; L_7 - automatic storage; L_8 - storage capacity and sequence.
The numerical value of the test indicator of data transfer devices K_T is determined by
K_T = Σ_{i=1}^{N_T} w_i^T · T_i,
where N_T is the total number of estimated MI software indicators (N_T = 8); T_i are the constituent estimates of the generalized indicator K_T (numerical characteristics of the partial indicators with certain weight coefficients w_i^T): T_1 - completeness of the transmitted data; T_2 - protection against accidental or conscious modification; T_3 - data integrity; T_4 - authenticity of transmitted data; T_5 - conference keys.
The specific test indicator of software for specific MI, K_I, is determined by
K_I = Σ_{i=1}^{N_I} w_i^I · I_i, (8)
where N_I is the total number of estimated MI software indicators (N_I = 6); I_i are the constituent estimates of the generalized indicator K_I (numerical characteristics of the partial indicators with certain weight coefficients w_i^I): I_1 - failure recovery; I_2 - availability of duplicate equipment; I_3 - suitability indication; I_4 - preventing the reset of accumulation registers; I_5 - dynamic behavior; I_6 - protection of parameters specific to electricity meters.
Specific requirements have been set for the software of certain MI groups, in particular for electricity meters, water meters, heat meters and others.
Results of comparative software quality assessment
The numerical values of the PCM elements of the indicators A, with the normalized eigenvector A_i for the selected indicators for comparative quality assessment of built-in computer (P) or universal computer (U) MI software, are shown in Table 1. If the quantitative relationships between the indicators do not satisfy the expert performing a certain comparative quality assessment of MI software, they can be modified as necessary.
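Assuming the weighted-sum form of the generalized indicators reconstructed above, their evaluation is straightforward; the sketch below uses placeholder scores and weights rather than values from the actual assessment.

```python
def generalized_indicator(scores, weights):
    """Weighted sum K = sum_i w_i * x_i over the partial indicators of one group."""
    assert len(scores) == len(weights)
    return sum(w * x for w, x in zip(weights, scores))

# Example: built-in computer indicator K_P with N_P = 7 partial indicators,
# scored on a 1..9 scale with hypothetical weights that sum to 1.
p_scores = [5, 7, 4, 4, 6, 6, 5]
p_weights = [0.20, 0.20, 0.10, 0.10, 0.15, 0.15, 0.10]
print(generalized_indicator(p_scores, p_weights))
```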
Table 1. Numerical values of the elements of the PCM of indicators for comparative quality assessment of MI software.
In the case of the data in Table 1, the consistency condition for the consistency ratio (C_d ≤ 0.1) is satisfied (C_d = 0.02). The consistency index is I_c = 0.028, and the largest eigenvalue of the vector is λ_max = 7.17. The weight coefficients w_i for the selected MI software quality assessment indicators are defined in Table 2. At the beginning of the assessment, it is necessary to determine the basic configuration of the software: with a built-in computer (P) or with a universal computer (U). Then, the complete set of requirements relating to the corresponding basic configuration must be used. The numerical values of the MI software quality indicators were converted into the numbers needed for assessment by the AHP, in the range from 1 to 9, using the Saaty scale. The results of the assessment of the MI software quality indicators in accordance with the presented methodology are shown in Table 3. The specialized software "AHP Competence 1.2" (Ukraine), which implements AHP, was used for the necessary calculations. The comparison of the global priorities G_n of the MI software under consideration, ranked in decreasing order (AHP Competence 1.2, Ukraine), is shown in Fig. 3.
Table 2. Weight coefficients for the selected MI software quality assessment indicators.
Table 3. Results of assessment of MI software quality indicators.
The analysis of the obtained results shows the advantage of the MI software in the following sequence: SW1 > SW4 > SW5 > SW7 > SW8 > SW6 > SW2 > SW3. It should be noted that the universal computer MI software has an average level of quality. The built-in computer MI software for the SW1 and SW2 meters has significantly different levels of quality. It is important to analyze the values of the MI software quality indicators to identify the weight of their impact on the overall software quality assessment. A comparative analysis of the priority vectors of the MI software quality indicators shown in Table 3 revealed the following. Without the submission of documentation on MI software with built-in and universal computers and its identification, it is not possible to start the quality assessment process in accordance with the established requirements. That is, the built-in computer (K_P) and universal computer (K_U) software indicators are important by default, although their maximum eigenvalues are average (8.000 and 7.994, respectively). The test indicator of storage devices (K_L) and the test indicator of data transfer devices (K_T) are among the important quality indicators (maximum eigenvalues of 8.073 and 8.099, respectively). At the same time, the reading test indicator (K_S), the specific test indicator of software for specific MI (K_I) and the test indicator of software separation levels (K_D) make only a small contribution to the overall quality indicator (maximum eigenvalues of 8.000, 7.994 and 7.985, respectively). The diagram of the weights of the MI software quality indicators, which presents the obtained results clearly, is shown in Fig. 4. It should be noted that the reading test indicator (K_S) and the test indicator of data transfer devices (K_T) are practically impossible to apply to software with a built-in computer that is used in simple MI (such as simple electricity meters). This is because such MI lack reading and data transfer devices.
At the same time, these MI software quality indicators are important for software with a universal computer used in complex MI, such as cardio monitors and liquid chromatographs. Thus, the main quality indicators of MI software with built-in and universal computers that have the greatest impact on the results of conformity assessment are identified. The results of the study can later be used to modify and improve the algorithm and methodology for conformity assessment of MI software.
Conclusions
1. A comparative analysis of the testing results of software for MI with built-in and universal computers using AHP is carried out. Based on the analysis of the requirements of the WELMEC 7.2 guidelines for MI software testing, generalized and partial indicators are identified to assess the quality of MI software. Expressions to obtain the numerical value of each partial indicator within each generalized indicator are generated. The results of the comparison showed the suitability of AHP for pairwise comparisons of all quantitative and qualitative indicators of quality assessment of MI software.
2. The quality indicators of MI software with built-in and universal computers that have the greatest impact on quality assessment results are determined. It is found that without the submission of documentation and identification of MI software with a built-in or universal computer, it is not possible to start the conformity assessment procedure as required. That is, the quality indicators of built-in computer (K_P) and universal computer (K_U) software are important by default. Also, the test indicator of storage devices (K_L) and the specific test indicator of software for specific MI (K_I) are among the important indicators. It is determined that the reading test indicator (K_S) and the test indicator of software separation levels (K_D) are practically inapplicable and can be neglected.
Operating Latency Sensitive Applications on Public Serverless Edge Cloud Platforms Cloud native programming and serverless architectures provide a novel way of software development and operation. A new generation of applications can be realized with features never seen before while the burden on developers and operators will be reduced significantly. However, latency sensitive applications, such as various distributed IoT services, generally do not fit in well with the new concepts and today’s platforms. In this article, we adapt the cloud native approach and related operating techniques for latency sensitive IoT applications operated on public serverless platforms. We argue that solely adding cloud resources to the edge is not enough and other mechanisms and operation layers are required to achieve the desired level of quality. Our contribution is threefold. First, we propose a novel system on top of a public serverless edge cloud platform, which can dynamically optimize and deploy the microservice-based software layout based on live performance measurements. We add two control loops and the corresponding mechanisms which are responsible for the online reoptimization at different timescales. The first one addresses the steady-state operation, while the second one provides fast latency control by directly reconfiguring the serverless runtime environments. Second, we apply our general concepts to one of today’s most widely used and versatile public cloud platforms, namely, Amazon’s AWS, and its edge extension for IoT applications, called Greengrass. Third, we characterize the main operation phases and evaluate the overall performance of the system. We analyze the performance characteristics of the two control loops and investigate different implementation options. with features never seen before is promised, while the burden on developers and application providers is reduced or more exactly, shifted toward the cloud operators. On-demand vertical and horizontal resource scaling in an arbitrary scale, dependability, fault tolerant operation, controlled resiliency are just highlighted features provided inherently by cloud platforms. However, latency sensitive applications with strict delay constraints, such as several distributed IoT services, generally do not fit in well with the new concepts and today's platforms and pose additional challenges to the underlying systems. When strict delay bounds are defined between different components of a microservice-based software product, or between a software element and the end device, novel mechanisms and concepts are needed. A crucial first step toward the envisioned future services is to move compute resources closer to customers and end devices. Edge, fog, and mobile edge computing [30], [31], [37], [38] address this extension of traditional cloud computing. Nevertheless, solely adding cloud resources to the edge is not enough as the cloud platform itself could significantly contribute to the end-to-end delay depending on the internal operations, involved techniques and configurations. In this article, we adapt some relevant aspects of the cloud native approach and related operating techniques for latency sensitive IoT applications operated on public cloud platforms extended with edge resources. Our general design concepts are applied to one of today's most widely used and versatile public cloud platforms, namely, Amazon Web Services (AWS) [1], and its serverless services. 
We identify the missing components, including novel mechanisms and operation layers, required to achieve the desired level of service quality. More precisely, we focus on serverless architectures and the Function as a Service (FaaS) cloud computing model where the microservice-based application is built from isolated functions which are deployed and scaled separately by the cloud platform. In our previous work [7], we proposed a novel mechanism to optimize the software "layout," i.e., to minimize the deployment costs, in a central cloud environment, e.g., in a given AWS region, while meeting the average latency constraints defined on the application. A dedicated component is responsible for composing the service by selecting the preferred building blocks, such as runtime flavors (defining the amount of resources to be assigned) and data stores, and the optimal grouping of constituent functions and libraries which are packaged into respective FaaS platform artifacts. This approach can be extended to edge cloud infrastructures but further considerations are necessary. More specifically, Amazon provides an edge extension for IoT services, called Greengrass, where the edge infrastructure nodes are owned and maintained by the user (or application provider) but managed by AWS. Obviously, the pricing scheme and the performance characteristics of serverless components in this realm are totally different from the regular billing policy and operation, therefore, our models should be adjusted accordingly. In this article, we aim to extend our basic model for edge cloud platforms and to enable dynamic and automated application (re-)deployment if an online platform monitoring module triggers that. Our contribution is threefold. 1) We propose a novel system on top of public cloud platforms extended with edge resources which can dynamically optimize and deploy applications, following the microservice software architecture, based on live performance measurements. We add two different control loops and the corresponding mechanisms which are responsible for the online reoptimization of the software layout and constituent modules at different timescales. The first one addresses the control of the steady-state, long-term operation of given applications and it is suitable for following, e.g., the daily profiles, while the second one implements a more responsive control loop which can directly reconfigure the runtime environments of deployed functions if the monitoring system triggers that as a response to, e.g., SLA violation. 2) We provide a proof-of-concept prototype. In this article, we target AWS and its edge extension for IoT applications, called Greengrass, however, the concept is general and it can be applied to other public cloud environments as well. Our current solution supports geographically distributed edge cloud infrastructures under the low-level control of AWS. The system encompasses a layout and placement optimizer (LPO), a serverless deployment engine (SDE) and a live monitoring system with dedicated components and operation workflows. 3) We characterize the main operation phases and conduct several experiments and simulations to evaluate the overall performance of the system. We analyze the performance characteristics of the two control loops as well and investigate different implementation options. Finally, we reveal further challenges and open issues.
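As a high-level preview of the two control loops introduced above, the sketch below outlines their skeleton; the monitor, optimizer and deployment objects, the thresholds and the periods are hypothetical placeholders and not the actual interfaces of our components.

```python
import time

def long_term_loop(monitor, lpo, sde, period_s=3600):
    """Slow loop: refresh the models, re-optimize the layout and redeploy if it changed."""
    while True:
        metrics = monitor.query()            # live performance measurements
        layout = lpo.optimize(metrics)       # cost-optimal layout meeting the latency bounds
        if layout != sde.current_layout():
            sde.deploy(layout)               # full redeployment (larger timescale)
        time.sleep(period_s)

def fast_loop(monitor, runtime_optimizer, latency_slo_ms=500.0, period_s=10):
    """Fast loop: react to latency violations by reconfiguring the deployed runtimes directly."""
    while True:
        if monitor.query()["e2e_latency_ms"] > latency_slo_ms:
            runtime_optimizer.reconfigure()  # e.g., switch to a pre-onboarded deployment variant
        time.sleep(period_s)
```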
The remainder of this article is organized as follows. In Section II, the background is introduced and a brief summary on related works is provided. In Section III, an illustrative use case is defined which motivated our work. Section IV highlights the main principles driving our system design and presents the high level architecture of the system. In Section V, the proposed models related to the applications and the underlying platforms are presented, and the optimization problem is formulated. Section VI is devoted to the proposed system including the details of the relevant components. In Section VII, we evaluate the performance of the overall system and our main findings are discussed in detail. Finally, Section VIII concludes this article. II. BACKGROUND AND RELATED WORK The cloud native paradigm aims to build and run applications exploiting all the benefits of the cloud computing service models. It includes several techniques and concepts, from microservices across DevOps to serverless and FaaS architectures, and everyone defines that in a slightly different way. According to the cloud native computing foundation (CNCF) [6], the ultimate goal is an open source, microservicebased software stack, where distinct containers are separately orchestrated and scaled by the cloud platform enabling the optimal resource utilization and agile development. The serverless approach allows to shift the focus from "where to deploy" to "how to create" the applications. It can be realized by following either the Container as a Service (CaaS) computing model or the FaaS paradigm, depending on the granularity level that the developer can consider when creating the software. In this article, we focus on the latter approach because it provides finer granularity in the organization of the application and more opportunities for optimization. There are several public cloud providers offering both services, such as Amazon [1], Google [13], Microsoft [24] or IBM [15], and a number of open source platforms are also available for private deployments, such as Kubernetes [18], Knative [17], OpenWhisk [2], or OpenFaaS [27]. This section provides a brief introduction on Amazon's serverless solutions over cloud and edge domains. Tools for automated deployment of serverless components fostering the development and operation of such applications are also highlighted together with open issues. A. Serverless on Amazon Web Services AWS [1], the platform of the market leader public cloud provider, offers a wide selection of services that can support building applications in the cloud. Among those, two options are adequate for executing serverless code: elastic container service (ECS) with the Fargate launch type, and Lambda which is a FaaS solution. Both can ease the task of deploying application components in different ways providing diverse configuration options and pricing models. Lambda offers fewer options for configuration but at the same time it simplifies automatic deployment and connecting other AWS services or Lambda functions. The service increases the assigned CPU power together with the only adjustable flavor parameter, available memory size. Instance startup, load balancing between the instances and networking configuration is taken care of by the Lambda framework without any need for developer interaction. There is also a select set of AWS services that have built-in triggers for Lambda, while other, third party services can invoke Lambda functions via the software development kit (SDK). 
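As a concrete illustration of this SDK-based interaction, the snippet below invokes a Lambda function asynchronously and adjusts its memory flavor with boto3; the function name, region and values are placeholders.

```python
import json
import boto3

lam = boto3.client("lambda", region_name="eu-west-1")

# Asynchronous (event-type) invocation: the caller does not wait for the full execution.
resp = lam.invoke(
    FunctionName="preprocess-frame",       # placeholder function name
    InvocationType="Event",                # use "RequestResponse" for synchronous calls
    Payload=json.dumps({"frame_id": 42}).encode(),
)
print(resp["StatusCode"])                  # 202 is expected for asynchronous invocations

# Memory is the only adjustable flavor parameter; the CPU share scales with it.
lam.update_function_configuration(
    FunctionName="preprocess-frame",
    MemorySize=1792,                       # MB; roughly one vCPU is reached around this size
)
```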
Lambda defines methods for easy versioning and branching of deployed functions via Lambda versions and aliases. Compared to Lambda, ECS offers more options for setting up resources and networking, while also providing possibilities for quick invocations; however, it lacks the automatic load balancing options. Larger sized function code and related artifacts are better suited for deployment with ECS, since Lambda poses a 250-MB size limit on uncompressed packages. AWS's CloudFormation service [3] provides possibilities for automating the deployment process of application components realized by either of these services. However, code deployment to edge nodes is only available via AWS IoT Greengrass.
B. Serverless at the Edge With AWS IoT Greengrass
AWS IoT Greengrass is a service that is part of AWS's IoT offerings and its main task is to make AWS Lambda functions available on edge devices. The service's basic building blocks are Groups that can be configured in the cloud and whose deployment is managed by AWS. They are collections of different entities serving different roles: 1) the Core is at the center of each group and has a two-pronged representation: it is present in the cloud as a link to the edge node, while it is also a software instance running on the edge node which handles communication with the cloud. Every message flowing between the edge and the cloud is encoded using RSA keys for which an X.509 certificate is used. This also has to be set up in the cloud and assigned to the Core, as well as transferred to the edge node before starting up the Core software; 2) devices and local resources [e.g., devices connected via USB or machine learning (ML) artifacts] serve as inputs for 3) edge Lambda functions, which are linked to cloud Lambdas via AWS Lambda aliases. Configuration of edge and cloud functions is handled separately, which enables extensions upon AWS Lambda functionality. Although code size limits are inherited from the cloud version, edge functions do not have lower or upper bounds on their memory settings and increments can be made in 1-kB steps, as opposed to the cloud version's 64-MB steps. A remarkable difference compared to on-demand cloud Lambdas is that edge functions can be long-lived (pinned in AWS terminology) and the single long-lived function instance can be kept running indefinitely. On-demand edge functions are handled by the Core similarly to cloud functions: multiple instances of a single function can run concurrently and they are stopped after reaching the configured timeout value. Three containerization methods are offered for executing edge functions: a) Greengrass; b) Docker; or c) no containerization. The first option is the most versatile while the rest severely limit the available functionality. Finally, access to other edge and cloud functions is granted via 4) subscriptions.
C. Automated Serverless Deployment and Optimization
Deploying cloud applications across different platform services is a complex task. In order to ease this process, multiple tools exist that are able to set up required resources with different cloud service providers. For example, the Serverless Framework [35] uses a YAML configuration file to declare resources in a provider agnostic way and, with its own CLI, provides an interface for managing these resources. The service is able to cooperate with, e.g., AWS [1], Microsoft Azure [24], Google Cloud Platform [13], and Apache OpenWhisk [2].
Terraform [34] is a similar tool that enables setting up and managing cloud infrastructure spanning over multiple public cloud domains. The higher level, provider agnostic interface makes it easier to move the infrastructure from one provider to the next but it cannot fully hide provider specific parameters. These tools were designed to receive external parameters to be used at deployment from other services, e.g., for specifying resource types or memory size. One such external service is Densify [9] that, leveraging its separate optimization and monitoring components, makes cloud applications self-aware. It monitors AWS virtual machine (EC2) instances with a proprietary monitoring component and collects CPU, memory and network utilization data. Based on these, the optimization component uses ML to model the application's utilization patterns while also estimating the best fit of compute resources for current needs and predefined specifications. Such estimations can give recommendations on instance flavors to be used and on the number of such instances. These recommendations can be forwarded to application maintainers via different channels (e.g., Slack or email), or can be applied automatically. Such automatic redeployments can happen using templating tools that support dynamic parameter assignment or parameter stores, e.g., AWS CloudFormation, Terraform or Ansible. The service enhances change recommendations with a cost monitoring interface as well. As for AWS specific deployment options, the provider offers different services for managing resources. All of them use the same AWS API but they provide different levels of complexity. Low-level options, such as the Web console, the SDKs and the CLI have smaller granularity thus they make handling of applications containing multiple resources overly complex. CloudFormation [3] (that is also used by the AWS Cloud Development Kit and many third party options) can treat a whole deployment as a unit of workload. It can handle the setup, modification and deletion tasks of complex applications (called stacks or stack sets in CloudFormation terminology) using its own templating language. Stackery [33] is a set of development and operations tools accelerating serverless deployments on top of AWS. It supports the management of production serverless applications throughout their life cycle. Albeit the availability of these versatile tools in deployment, they do not prove to be adequate for deploying applications to hybrid edge cloud scenarios when latency is of concern. While AWS tools offer edge node management, and the AWS Compute Optimizer [4] serves as a recommendation engine to help right-sizing EC2 instances, such an optimization engine for serverless applications is not available. AWS independent tools share a similarity in this regard, as they do not venture into the serverless domain and consider resource utilization but omit the investigation of application performance. Additionally, they usually lack the capability of handling edge resources altogether or have started to support this feature only recently thus not covering yet the full feature set made accessible by the cloud provider. Besides the tools supporting deployment and orchestration over cloud platforms, there are only a few papers in the literature dealing with cloud native and cost-aware service modeling and composition. Eismann et al. [10], Fotouhi et al. [12], Leitner et al. 
[20], and Winzinger and Wirtz [36] provided pricing models for microservice-based application deployment over public clouds, but they focus only on supporting offline cost analysis for predefined deployment scenarios. Online cost tracing of a serverless application is a cumbersome task due to the limited billing information provided by the cloud platforms. To tackle this issue, Costradamus [19] realizes a per-request cost tracing system using a fine-grained cost model for deployed cloud services; however, it lacks any optimization features. Researchers in [11] and [21] studied the optimization problem of cloud native service composition and provide offline solutions based on a game-theoretic formulation and a constrained shortest path problem. Other recent works in [5], [8], and [22] target similar problems of performance optimization of serverless applications leveraging public cloud resources, but only regarding the placement problem of the service components and missing any adaptive and automated service reoptimization task.
III. TARGETED USE CASE
In this section, we highlight an envisioned use case motivating our work. The application exploits cloud features and serverless tools in order to provide IoT services at large scale. The use case, presented in Fig. 1, addresses live object detection on Full HD video streams. As we target cloud native deployment and follow the serverless approach, we have stateless functions requiring all data as input. Therefore, making use of dedicated data stores is the reasonable (or the only feasible) way of data exchange. Here, we strive to decrease bandwidth requirements by preprocessing images before submitting them to elaboration and finally marking them with detailed object classification results. The preprocessing stage in steps 1-10 resizes and grayscales captured video frames and performs a preliminary object detection on the modified picture. At the end of the preprocessing stage, the full size image is cut into pieces based on the bounding boxes provided by the preliminary object detection. As a next step, the Cut function calls the second stage object detection function for each cropped image, which performs the object classification task. Observe that the number of calls depends on how many objects were found during the preprocessing stage, which we consider an application-specific metric. It depends on the software whether the calls are synchronous and invoked serially or asynchronous and handled in parallel. Consequently, the implementation of the next function, Collect Results, could be different for the two approaches. In any case, it waits until each second stage detection function finishes and collects their individual results. Finally, this function calls the Tag function that marks detected objects on the full size image and annotates it with object classification results. For our use case, we interpret the end-to-end (E2E) latency as the average elapsed time between the arrival of a frame and the event when a recognized object's classification is written out into the data store. In our implementation, we used Python and leveraged features of the OpenCV [26] library for image processing and object detection steps, relying on its deep neural networks module and the MobileNet-SSD network. In the remainder of this article, we focus on steps 1-12, the main parts of the application, and the components related to the rest of the steps are not deployed in our tests.
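To make the preprocessing stage more tangible, the sketch below mirrors its main steps (resize, grayscale, preliminary detection and cropping) using OpenCV's dnn module with a MobileNet-SSD model; the model paths, confidence threshold and cropping details are illustrative and may differ from the exact code used in our tests.

```python
import cv2

# MobileNet-SSD Caffe model files (placeholder paths).
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")

def preprocess(frame, conf_threshold=0.4):
    """Resize and grayscale the frame, run a preliminary detection and return full-size crops."""
    h, w = frame.shape[:2]
    small = cv2.resize(frame, (300, 300))
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
    gray3 = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)     # the network expects 3 channels
    blob = cv2.dnn.blobFromImage(gray3, 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()                         # shape: (1, 1, N, 7)
    crops = []
    for i in range(detections.shape[2]):
        confidence = float(detections[0, 0, i, 2])
        if confidence < conf_threshold:
            continue
        # The bounding box is relative to the input; scale it back to the full-size image.
        x1, y1, x2, y2 = (detections[0, 0, i, 3:7] * [w, h, w, h]).astype(int)
        crops.append(frame[max(0, y1):y2, max(0, x1):x2])
    return crops
```

The number of returned crops drives the fan-out to the second stage detection function, i.e., the application-specific call rate mentioned above.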
IV. SYSTEM DESIGN
This section is devoted to the main goals and principles driving our architecture design, and the high level system description is also provided.
A. Design Goals
Our main goal is to foster the development and operation of latency sensitive IoT applications by adapting the cloud native paradigm. More specifically, we aim at improving latency control for serverless applications and allowing optimization of operation costs on public cloud platforms extended with privately owned edge infrastructures. We focus on the FaaS cloud computing model, however, the concepts are general and can be applied to container-based serverless solutions as well (such as Fargate containers or Kubernetes pods). Nevertheless, the finer granularity in the construction of the application provided by the FaaS approach yields more optimization options and requires more sophisticated solutions. Formally, the operation cost of the application is to be minimized by finding the cost optimal software layout required to meet the average latency bounds. To enable this optimization, we need to construct accurate application and platform models capturing the performance characteristics and operation prices. The first reasonable way of controlling latency is the careful placement of software components: the functions can be run in the central cloud or in available edge domains. Current APIs of today's systems typically do not provide sophisticated placement control based on delay information, therefore, we strive to explicitly select the domains to run the functions. We assume that edge resources are scarce, following different cost models than cloud resources, and the preferred deployment option is always the central cloud while edge resources are used only if the delay constraints require that. We argue that besides placement, the efficient grouping of constituent functions and libraries, which will be packaged into respective FaaS platform artifacts, and the selection of the runtime flavors are crucial tasks which significantly affect both the performance (e.g., end-to-end latency) and the operation costs. A top level component is able to address all these targets and generate a software layout description including the function grouping with the selected flavors and placement information. Based on this general description, an adapter layer can directly deploy the application to the underlying cloud infrastructure while exploiting the exposed APIs and related cloud services. As user demands, application characteristics and platform performance can vary in time, dynamic reoptimization is an essential feature which can be provided based on a versatile monitoring system. We target such a system making use of available cloud services and custom extensions. Two different approaches are considered to implement control loops. The first option is to realize a full reoptimization cycle starting with a model update gathered from live measurements, followed by the optimization task and the full redeployment of the application. Obviously, this yields a larger operation timescale. In order to ameliorate the response time, we address an alternative option as well, which realizes a shorter control loop. If different deployment options are onboarded in advance, the reconfiguration of the application can be executed much faster.
However, we need to add a dedicated component to control the specific application based on monitored metrics, while a customized version of the FaaS runtime is also required in order to allow on-the-fly reconfiguration. B. High Level Architecture and Operation The high level architecture of the proposed system is depicted in Fig. 2. The system is capable of composing, deploying and dynamically reoptimizing IoT applications operated on serverless resources. We note, that the first and basic version of the system, without any support for the edge, was introduced in [7] and [29]. In the former, we focused on the optimization layer, while in the latter, we investigated deployment tasks. In the current work, we leverage their composition and extend upon it with support for edge deployment and a second option for inducing changes in the application layout. At the top level, the Layout and Placement Optimizer (LPO) receives input data from the developer (or the operator in other scenarios). The data consists of the application model (Application Components with Requirements) and the platform model (Cloud and Edge Node Properties). The graphbased service model encompasses functions, data stores (as nodes) and invocations (function calls), read, write operations (as edges). Average function execution time, call rates, latency requirements on critical paths, etc. can also be defined for the service. The other input of the system is the platform model which describes the cloud platform's performance and pricing schemes and the list of available edge nodes with their properties. It can be given a priori based on previous measurements, however, the model parameters can be adjusted on-the-fly based on live monitoring. The LPO works with these service-and platform agnostic abstract models and constructs an optimal application layout by grouping the functions into deployable units (e.g., FaaS artifacts), defining the corresponding minimal flavors together with the hosting domains (central cloud versus edge) and determining the required data stores and invocation techniques (e.g., one for invoking functions on the edge, a different one for calling functions in the central cloud). The main objective is to minimize the operation costs while meeting the average latency bounds given by the developer or user. The application layout together with monitoring conditions is passed to the Serverless Deployment Engine (SDE) in step 1 that transforms incoming data into platform specific API calls and adapts the application layout to the underlying edge or central cloud environments. In today's systems (such as AWS), the central cloud and edge domains are controlled via distinct deployment engines (Cloud/Edge Deployment Engines on Fig. 2) and APIs in separate calls (steps 2 and 4 ). As a result, the Managed Application can have parts running on edge nodes or in the central cloud launched in steps 3 and 5 , respectively. We assume that the platform can run the same function artifacts in both runtime environments and in-memory data stores can be used for state management. In either case, the grouped application components are executed by our special-built Wrapper which is an essential extension to the platform's own runtime environment. The purpose of the Wrapper is threefold. 1) It enables grouping of functions into artifacts by handling both the internal interactions among the encompassed functions and the interactions with the outside world: state store access and invocation to other components. 
2) The Wrapper logs measured metrics on these operations, including platform related and application specific ones, to the managed monitoring system. 3) The Wrapper grants on-the-fly reconfiguration access to the runtime environment via a novel API which is used by the runtime optimizer (RO), the controller of the shorter control loop (step 9 ). This reconfiguration allows to change the function calls (e.g., invoking the central cloud version of a function instead of the edge variant) or data store access in the artifact based on live monitoring without the need of redeployment. The monitoring infrastructure, consisting of the managed monitoring system and the RO, is deployed in steps 6 and 7 when the application has already been set up. The monitoring system aims at monitoring performance and application level metrics and it can send alarms to the LPO and the RO. In addition, a periodic querybased operation is also provided to support enhanced responsiveness (steps 8a and 8b ). V. OUR MODELS AND OPTIMIZATION PROBLEM In this section, we define our service and platform models capturing the main performance and cost characteristics. The introduced notations are summarized in Table I. To establish accurate models, a comprehensive performance analysis of AWS Lambda and Greengrass is the essential first step. A. Performance of AWS Lambda In our previous works [7], [28], we provided a comprehensive performance study of delay characteristics of AWS FaaS and CaaS offerings, based on short-and long-term experiments. Here, we give a summary on them focusing on our main findings with regards to AWS Lambda. Each AWS region operates using multiple CPU types with different capabilities, and the configured resource flavor (memory size) can have an impact on the selected CPU type. For single-threaded Python code, Lambda performance approximately doubles as assigned memory size is doubled until reaching the peak performance at around 1792 MB (one physical core is allocated). Our measurements indicate that execution time has no correlation with the time of the measurement but it is highly affected by the assigned CPU type and the selected Lambda resource flavor. We observed that AWS, time independently, assigns Lambda instances to different types of CPUs available in the chosen region in an undisclosed manner. At small flavors we measured significant differences among CPU types, but as higher flavors were selected, the differences diminished. Many different methods exist for invoking Lambda functions but most of them are inadequate for handling latency sensitive applications as they impose high delays with high variation, even for small transmitted data size. The quickest Lambda invocations are the SDK's and the API Gateway's synchronous calls, however, they have adverse effects on the execution time (thus the price) of the invoker function. Therefore, using asynchronous SDK calls can be a better fit for latency constrained applications. Long-term SDK asynchronous invocation tests showed no dependency on either the time of the call, the CPU type or the flavor of the instance. On average, we measured 103 ms when transmitting payloads with 130-kB size and 79 ms for 1 kB. Considering the asynchronous nature of the call, we measured surprisingly high blocking delay (the time while the invoker function gets blocked during an invocation) in the invoker function (52 and 44 ms, respectively). As Lambda is designed to serve stateless functions, whenever states should be stored we have to use an external service. 
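As a simple illustration of such state externalization, the sketch below shows a stateless handler persisting intermediate results in an external in-memory store through the redis client; the endpoint and key layout are placeholders rather than the configuration used in our measurements.

```python
import json
import redis

# ElastiCache/Redis endpoint of the data store (placeholder address).
store = redis.Redis(host="my-cache.example.euw1.cache.amazonaws.com", port=6379)

def handler(event, context):
    """Stateless function: all state is read from and written to the external store."""
    frame_id = event["frame_id"]
    boxes = event.get("boxes", [])
    # Persist intermediate results under a per-frame key so that downstream
    # functions (e.g., Collect Results) can pick them up.
    store.set(f"frame:{frame_id}:boxes", json.dumps(boxes))
    return {"stored": store.exists(f"frame:{frame_id}:boxes") == 1}
```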
In our previous work, we concluded that Amazon ElastiCache for Redis outperforms every other AWS offering for serving such purposes. It can handle both read and write operations under 1 ms for data smaller than 1 kB. Redis performance is among the best throughput-wise as well, and it handles increasing concurrent access notably well.

B. Performance of AWS Greengrass

Although Greengrass and cloud Lambda functions share many features, they differ in multiple aspects, as discussed in Section II-B, that significantly affect performance. In case of latency sensitive applications, the most important performance features to measure are the flavor dependent computation proficiency and the invocation latency. In order to investigate these aspects with AWS Greengrass, we repeated the respective benchmarks discussed in [28]. We used two different edge nodes to execute the tests: a local server with four Intel Xeon E5-2650 v3 CPU cores and 6 GiB of memory running Ubuntu 18.04, and an Amazon EC2 t2.micro instance with 1 vCPU and 1 GiB memory running the same OS in the eu-west-1 (Ireland) AWS region. Each measurement was repeated 100 times to obtain average values and standard deviation.

1) Execution Time: As opposed to AWS Lambda behavior, Greengrass does not apply a memory size dependent access to compute resources. The service limits instance access to resources by using cgroups; however, it always provides access to unlimited processor time for each running function instance as their cpu.share parameter is set to 1024. Our measurements proved to be perfectly in line with this, as running multiple instances of the same function causes no significant increase in execution time until every core has been occupied by a function instance. However, when we start up twice as many function instances as the number of CPU cores, the execution time doubles. The behavior shows that the management jobs executed by the Greengrass core do not require significant CPU resources when no messaging is performed among the function instances.

2) Invocation Delay: As Fig. 3 depicts, there can be four different call paths among functions when AWS Greengrass is used, depending on the location of the Invoker and Receiver functions. 1) When both functions are on the same edge node and local invocation is used. 2) The function locations are the same, but the call goes through the AWS IoT Cloud topic. 3) The two functions are on different edge nodes. 4) The Receiver function is an AWS Lambda function residing in the central cloud. We benchmarked these scenarios on both of our edge nodes. In accordance with our previous measurements in [28], using the same methodology as here, invoking a function in the central cloud from the edge is the slowest, taking 125-231 ms to complete. In terms of latency, this invocation type is one of the slowest of the available AWS Lambda calls and is 20-30 ms slower than asynchronous SDK calls between Lambda functions. Results for the rest of the cases measured on the t2.micro instance are shown in Fig. 4 together with the blocking delay caused by the invocations. (We opted to exclude the depiction of edge to central cloud calls from the figure in order to provide better visibility on the invocation delay characteristics between edge functions). We can conclude that Greengrass local calls (calls between functions managed by the same Greengrass Core) are extremely fast compared to other Lambda function invocation options.
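As an aside, the two edge-side invocation styles compared here (a local Lambda-to-Lambda call handled by the Greengrass Core versus publishing through an AWS IoT topic) can be sketched roughly as below; this is an illustrative example based on the AWS IoT Greengrass Core SDK for Python, and the function ARN and topic names are hypothetical.

```python
# Sketch of the two edge-side invocation paths discussed above (Greengrass Core SDK for Python).
# The function ARN and topic name are hypothetical placeholders.
import json

import greengrasssdk

lambda_client = greengrasssdk.client("lambda")     # local Lambda-to-Lambda invocation
iot_client = greengrasssdk.client("iot-data")      # publish via an AWS IoT topic

def call_local(payload: dict) -> None:
    # Handled by the local Greengrass Core: the fast path in our measurements.
    lambda_client.invoke(
        FunctionName="arn:aws:lambda:eu-west-1:123456789012:function:receiver:1",
        InvocationType="Event",
        Payload=json.dumps(payload),
    )

def call_via_iot_topic(payload: dict) -> None:
    # Traverses the AWS IoT Cloud topic: noticeably slower than the local path.
    iot_client.publish(
        topic="app/receiver/in",
        payload=json.dumps(payload),
    )
```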
As the local AWS Greengrass Core can handle the invocations, they last only 2.3-4.3 ms depending on payload size. Because of the Greengrass service's architecture, any other invocation has to interact with the AWS IoT Core, thus calls have to traverse the IoT Cloud topic. These invocations experience a 7.8-19.5 ms delay when the Receiver function is found on a Greengrass node. When using our on-premise edge node, the increase in latency corresponded to the latency between our premises and the AWS region we used for the test. Blocking delay, the time while the Invoker function gets blocked during an invocation, is always small, ranging from 1.5-2.8 ms, which is a fraction of those measured for the asynchronous cloud calls (50-70 ms). Comparing the above results with those given by [28], we can conclude that using AWS IoT Greengrass solutions results in relatively low latency only when the cloud functions are not involved. If an application requires low latency as well as edge and cloud functions, it is better to use SDK calls between them instead of relying on AWS IoT.

C. Service Model

The service model describes the user-defined service request including the software components and their interactions. Let $S$ be the service structure description, which is basically a directed multigraph. Function nodes $F$ represent the simple, stateless and single-threaded basic building blocks, which use invocations $I$ to call other functions and read $R$, write $W$ arcs to perform I/O operations on data store nodes $D$. A dedicated platform node $\wp$, in the role of the API Gateway or the user, represents the main entry point of the service and designates the ingress service invocations. Recursive loops are modelled in their expanded form in which each iteration step is given with explicit invocations. This constrains the invocation subdigraph $S[F \cup \{\wp\}]$, unlike control flow graphs, to be loopless, that is, a directed acyclic graph (DAG). Moreover, functions are considered to have only a single entry point which has a strict syntax typically predefined by the execution framework. The single-predecessor function characteristic further restricts $S[F \cup \{\wp\}]$ to be a directed rooted tree with $\wp$ as the root node. Functions are characterized by the execution time $\tau$ measured on one vCPU core, while arcs have the average invocation rate attribute $\omega_r$ along with the explicit blocking delay $\delta$ introduced in the invoker function. Data stores can be described by their workload capacity in general. In addition to the graph-based description, the service model also keeps track of user-defined node-disjoint path(s) with associated latency limits $l_\pi$ as the basic constraints for the layout optimization.

D. Platform Model

Our platform model captures the performance characteristics and cost models of function execution, invocation and data store access methods, respectively. For the runtime environment, we only consider single-threaded serverless functions. However, our models can be extended to use containers [7] or to support multithreaded functions by using explicit function execution profiles. Runtime flavors are specified by their offered vCPU fraction $n_c$. To extend our previous model with edge computation capabilities, we introduce edge nodes as standalone flavors. Thus, an assigned flavor implicitly carries basic placement information, that is, designating the specific edge node or the central cloud as required by the deployment engine.
While Greengrass Lambdas always have access to one vCPU core on edge nodes, i.e., $n_c(\phi_E) = 1$, the core fraction of cloud Lambdas can be derived from their assigned memory as $n_c(\phi) = \min\{\phi / \phi^*_{\lambda}, 1\}$. The first Lambda flavor granting one core is $\phi^*_{\lambda} = 1792$ MB, as stated in Section V-A. Regarding invocation types, we assume two different options relevant to latency sensitive applications. More specifically, async SDK invocation, depicted in Section V-B, and local invocation are considered. Local invocation is used when one function directly invokes another function in the same group and its blocking overhead is negligible in terms of latency.

E. Cost and Latency Models

Making use of our service and platform models, we can describe the end-to-end latency and the operation costs of the application. While serverless platforms support parallel function execution via autoscaling, internal parallelization (within a Lambda function) could also be realized by applying multithreading and internal asynchronous calls scheduled by the runtime. However, we consider single-threaded functions and runtime environments with a single core. Therefore, in order to calculate the overall latency (and costs), we can model all functions as single-threaded components. These functions can be composed together, where they can call each other directly in a synchronous manner, and executed in a single Lambda function. This way, the grouping of functions can reduce the overall latency by eliminating SDK invocation overheads in return for additional costs. At the same time, function grouping introduces serialized execution of the encompassed functions, resulting in increased group execution time. The number of consecutive executions of a function is determined by its caller component's behavior. This can be modelled with a multiplier, i.e., the serialization ratio, which is the ratio of the caller and called component's invocation rates. This quotient is greater than 1 when the caller iteratively performs invocations, around 1 if it realizes a one-to-one mapping and less than 1 if outgoing calls are filtered by conditional statements. First, let $T_s$ define the overall service runtime. Then, let $t_p$ denote the execution time of function group $p \in P_F$ on the selected flavor $\phi_p \in \Phi$. In (1) we define $t_p$ as the sum of the actual function execution times, including the flavor-related delay and the egress invocation overheads $A^+(p)$, multiplied by the serialization ratio. Invocations $i_f$ and $i_p$ mark the ingress invocations of function $f$ and its belonging group $p$. In accordance with AWS billing patterns, we use the rounded up group execution time for the Lambda cost calculation. In addition, we define the summed number of received requests as $r_p = \omega_r(i_p)\,T_s$ for group $p \in P_F$. The flavor-dependent group cost function $c_p$ is formulated in (2), where $C_r$ and $C_p$ are the billing constants specified by the cloud provider for the total number of requests and the rounded group execution time:

$$c_p(p, \phi_p) = r_p C_r + C_p \left\lceil \frac{t_p}{100\ \mathrm{ms}} \right\rceil \qquad (2)$$

Although the service cost calculation relies on the entire group execution time, the observed latency differs from the $t_p$ values. The end-to-end latency measured at a function can include a different number of consecutive executions of the preceding functions based on their position in their serialization sequence. Thus, the number of distinct execution variations from which the measured latency value is computed is determined by the serialization ratios of the preceding functions.
As these execution variations contribute evenly to the average latency value, we define a modified formula $\hat{l}_p$ for the group latency calculation in (3). With the same approach, we can formalize the cost function for data stores as well. As there are no outgoing data transfers, the data store cost only depends on the service runtime $T_s$ and instance type $C_i$. Therefore, it can be expressed as a single layout-independent cost value $C_i T_s$.

F. Optimization Problem

The LPO's output is the service layout, which defines the function partitioning $P_F$ (equivalently called clustering in the literature) along with the flavor assignment $\varphi$. Thus, the optimization task is to find the cost-efficient layout over the cloud/edge environment considering latency requirements. Our problem, which falls under the topic of graph partitioning, is a complex task in general. For simplicity, we make the following assumptions without losing the original target. 1) We consider only one central cloud Lambda flavor and one edge flavor. 2) Since data stores $S[D]$ do not form a connected subdigraph and their cost is layout-agnostic, depending on $T_s$ solely, data store flavor assignment can be realized as a separate upper-bounded aggregation. Thus, we focus on the $S[F]$ partition problem in the following (as $\wp$ must not be part of any group). 3) We do not assume internal thread-based parallelization as functions represent simple software building blocks. This means that $P_F$ has to be technically a valid graph partitioning of $S[F]$ where partition groups are considered to be directed linear chains with no limit either on their number or size. Summarizing the above, we define our objective as finding the chain partitioning $(P_F, \varphi)$ with the minimal cost

$$\min \sum_{p \in P_F} c_p(p, \varphi(p)) \qquad (4)$$

such that the following constraints must be met. 1) The latency limit $l_\pi$ of a given path $\pi$ is not to be violated. 2) Each function group $p \in P_F$ must contain exactly one chain.

VI. PROPOSED SYSTEM

We have applied our general design principles presented in Section IV to AWS Lambda and Greengrass and the complete system is shown in Fig. 6. Our prototype was implemented in Python3 making use of the AWS SDK and AWS IoT Greengrass SDK. In this section, the main components, algorithms and workflows are described in detail and we present how the exposed APIs of AWS are exploited by our system.

A. Layout and Placement Optimizer

The main task of the LPO is to solve the optimization problem defined in (4). Graph partitioning or clustering has been well-researched for decades. While partitioning is known to be NP-complete for arbitrary directed graphs as well as weighted trees [14], several polynomial-time algorithms exist for sequential graph partitioning (SGP), which restricts the partition groups to contain only consecutive nodes [16], [25]. The available techniques for solving SGP assume either an upper bound for the group sizes or consider only a fixed number of groups. However, our problem differs from the traditional variants of SGP in several aspects. Since we aim to split up trees explicitly into chains, and the partition groups are bounded by the latency limits of service-wide critical paths in contrast to the locally verifiable group size or count limits, the aforementioned methods cannot be applied directly to our problem.
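To make the optimization problem in (4) concrete, the following is a small illustrative sketch, not the CTP heuristic described next, that brute-forces all consecutive groupings of a toy function chain and keeps the cheapest one satisfying a latency limit. The execution times, invocation delay, billing constants and the simplified latency model are made-up stand-ins for the models above.

```python
# Toy brute-force baseline for problem (4) on a short chain of functions.
# Execution times, invocation delay and billing constants are illustrative only.
import math
from itertools import combinations

TAU = [120, 80, 200, 60]      # per-function execution time on the chosen flavor [ms]
INVOCATION_DELAY = 80         # delay added for each call crossing a group boundary [ms]
C_R, C_P = 0.20, 0.50         # made-up cost units: per request, per rounded 100 ms
REQUESTS = 1000               # requests observed during the service runtime

def evaluate(groups):
    """Return (cost, latency) of a consecutive grouping, e.g. [[0, 1], [2, 3]]."""
    latency = cost = 0.0
    for group in groups:
        t_group = sum(TAU[f] for f in group)               # serialized execution inside a group
        latency += t_group
        cost += REQUESTS * C_R + C_P * math.ceil(t_group / 100.0)
    latency += (len(groups) - 1) * INVOCATION_DELAY         # one egress invocation per boundary
    return cost, latency

def brute_force(latency_limit):
    n = len(TAU)
    best = None
    # every way to cut the chain into consecutive groups = choice of boundary positions
    for k in range(n):
        for cuts in combinations(range(1, n), k):
            bounds = [0, *cuts, n]
            groups = [list(range(a, b)) for a, b in zip(bounds, bounds[1:])]
            cost, latency = evaluate(groups)
            if latency <= latency_limit and (best is None or cost < best[0]):
                best = (cost, latency, groups)
    return best

if __name__ == "__main__":
    print(brute_force(latency_limit=550))
```

Enumerating all groupings is only feasible for very short chains, which is exactly why a latency-aware dynamic programming heuristic is needed for the general case.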
By extending our prior algorithm designed for public clouds [7], we propose a heuristic approach for cost-efficient and latency-constrained partitioning of trees into chains on cloud and edge resources, called Chain-based Tree-Partitioning (CTP).

1) Chain Partitioning: First, we define the relaxed Chain-Partitioning (CP) algorithm utilized by CTP as a subproblem to solve tree partitioning. CP specifies chain partitioning as a variant of noncrossing sequence partitioning and leverages its related divide-and-conquer approach [23]. Suppose an n-length chain of functions $f$ with their measured performance characteristics, the number of counted subcase latency bounds $B$ and an optional path $[\pi_s, \pi_e]$ limited by $l_\pi$ as the algorithm's input. Here, the cost-efficient partitioning along with the assigned flavors, overall cost and latency values can be derived by iteratively evaluating the recurrence relations in (5). In the recursive formulas, the subcase of the first $i$ nodes of the chain grouped into $j$ groups is divided into two subparts: the previously calculated subcase of the first $k - 1$ nodes into $j - 1$ groups and the remaining last nodes $k \rightarrow i$ as a single group. Since the assigned flavor $\phi$ of the last group and its invoker group's flavor $\nu$ inherently predetermine the group execution times and the invocation delay between the two subparts, the selection of a minimal cost subcase cannot be guaranteed to be globally optimal regarding the overall latency constraint. Therefore, we use $B$ precalculated latency bounds for each subcase and cache the related cost-optimal partitioning, which enables tracking of more expensive subcase variants with better latencies that are optionally chosen during a subsequent iteration. These bounds are calculated evenly between the overall latency limit $l_\pi$ and the smallest execution time of single function groups in descending order, keeping cheaper variants with lower bound indices $b$. The cost-optimal partitioning of a subcase is designated by the specific $k^*$ value where the summed cost of the two subparts is minimal. In case of multiple minima, the subcase with the lowest index $k$, that is, the lowest group count, is chosen. During each iteration, all flavor combinations $\nu, \phi$ are examined and only those prior subcases with feasible bounds $b^k_{\nu,\phi}$ are taken into account which meet the given bound $b$, including the execution time of the last group and its invocation delay. To track the relevant subcases' values, dedicated matrices $C$ and $L$ are introduced for storing the summed cost and latency calculated with $c_p$ and $\hat{l}_p$ from (2) and (3). The latency calculation formulated in $l(k, i, \nu, \phi)$ is performed only for the constrained path $[\pi_s, \pi_e]$ using the flavor-dependent invocation delays formed in matrix $D$. To be able to reconstruct the partition groups, matrices $K$, $B$ and $F$ are used for caching the barrier node $k^*$ by which the optimal subcase is divided, the opted latency bound $b^*$ of the prior subcase and the last group's flavor $\phi^*$ opted for $k^*$, respectively. The dynamic programming technique provides an efficient way to solve the recursive formulas in (5).

Algorithm 1 Chain-Partitioning
 1: procedure CHAINPARTITION(..., $\pi_s = 0$, $\pi_e = n$, $B = n$, $l_\pi = \infty$)
 2:   Define DP ← n × n × r matrix with 5-tuples as C, L, K, B, F
 3:   Apply memoization to functions $\hat{l}_p$, $c_p$ with cache size $n \cdot |\Phi|$
 4:   if $l_\pi = \infty$ then                ▷ Calculate latency bounds
 5:     bounds ← [∞]
 6:   else                                    ▷ Decreasing bounds of evenly spaced r ranges
 7:     bounds ← LINSPACEBOUNDS($l_\pi$, $\min_{f \in F, \phi \in \Phi} \hat{l}_p(f, \phi)$, B)
 8:   for i ← 1 to n; b ← 1 to |bounds| do    ▷ Precalculate trivial subcases
 9:     REVERSESORTBYLATENCY(DP[i, 1])
 ...
11:   for i ← 2 to n; j ← 2 to i; k ← j to i; $\nu, \phi \in \Phi$ do
12:     ...

2) Tree Partitioning: Following an analogous formalization, CTP recursively calculates the partitioning of a subtree in $S$ by leveraging the CP algorithm and previously calculated subtree groupings to enforce the node-disjoint critical paths $\Pi$. To accomplish this efficiently, CTP precalculates all the reachable leaves from each node by labeling the nodes using a postorder DFS and a recursive label definition. In order to ensure that CTP inspects every candidate partitioning of a subtree, we define the Subchain-Pruning action, which leverages Node-Labeling to track the chain from subtree root $r$ to target leaf $l$, while it also fetches the chain-adjacent subtree roots $N_c^+(r \rightarrow l)$. It operates roughly as follows: starting from the subtree root, Subchain-Pruning iteratively checks the labels of descendant nodes. The child node that has the target label is a member of the actual chain and marked as the next step, while the remaining successors belong to the chain neighbors. We can get a valid chain partitioning of an arbitrary subtree if we perform Subchain-Pruning, then apply Chain-Partitioning on the resulting root-leaf chain and take the partitioning of the chain-adjacent subtrees. Consequently, we shall cover the cost-optimal subcase if we perform Subchain-Pruning on each leaf-ending chain designated by the subtree root's labels. To ensure the latency constraints, a separate chain traversal step is realized, similarly to Subchain-Pruning. Each critical path originating on the chain is checked during the traversal, while the related latency fragments of impacted paths are cached in $L$. The latency limit that fits entirely on the chain is enforced by CP itself. It follows that CTP accepts constraints assigned for distinct node-leaf subchains solely, otherwise only the critical path originating the closest to $\wp$ can be guaranteed. Finally, we formulate our recursive CTP algorithm as follows.

Algorithm 2 Chain-based Tree-Partitioning
 ...
 4: for all node ∈ REVERSEDBFS(tree, root) do
 5:   for all leaf ∈ L(node) do
 6:     chain, nghbrs ← SUBCHAINPRUNING(tree, node, leaf)
 7:     π, $\pi_s$, $\pi_e$ ← GETCRITICALPATH(tree, chain, $\Pi$)
 8:     params ← GETCHAINPARAMETERS(chain)
 9:     part, cost ← CHAINPARTITION(params, $\pi_s$, $\pi_e$, B, $l_\pi$)
10:     valid, sub_lats ← CHECKCRITPATHS(chain, part, $\Pi$, L)
11:     sum_cost ← ...

B. Serverless Deployment Engine

One level below the LPO, the SDE is responsible for translating the application layout and monitoring conditions arriving in step 1 (see Fig. 6) from the LPO into calls that AWS can process for setting up resources. On a high level, the SDE communicates directly with AWS, accessing its CloudFormation (CF) and Greengrass services. The former is configured via its own templating language. CF processes incoming template requests describing what resources to set up, in which order, and what connections these resources will have with each other, and creates individual CF stacks or stack sets from them. In our implementation, the SDE synthesizes templates specifying single CF stacks as a simplification.
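To illustrate what this interaction with AWS looks like in practice, here is a rough boto3 sketch of uploading a packaged artifact to S3 and creating a single CloudFormation stack. The bucket, key, stack and template names are hypothetical, and the real SDE synthesizes the template programmatically rather than receiving it ready-made.

```python
# Sketch of the SDE's deployment step: upload artifacts to S3, then create a CF stack.
# Bucket, key, stack and template names are placeholders.
import boto3

s3 = boto3.client("s3")
cloudformation = boto3.client("cloudformation")

def deploy(template_body: str, artifact_path: str) -> str:
    # Phases D1/D2: ship the zipped function group (code + libraries) to S3.
    s3.upload_file(artifact_path, "my-deployment-bucket", "artifacts/group-1.zip")

    # Phases D3/D4: hand the synthesized template over to CloudFormation (step 2).
    response = cloudformation.create_stack(
        StackName="managed-application",
        TemplateBody=template_body,
        Capabilities=["CAPABILITY_NAMED_IAM"],   # the template creates IAM roles/policies
    )

    # Block until AWS reports the stack as created (step 3).
    waiter = cloudformation.get_waiter("stack_create_complete")
    waiter.wait(StackName="managed-application")
    return response["StackId"]
```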
In step 2, the SDE passes a template to CF that defines all the components for the AWS Managed Application in the cloud and configures the Greengrass-related resources to be deployed to edge nodes. When the cloud resources have been set up by CF in step 3, the SDE calls AWS Greengrass directly for deploying resources to edge nodes in step 4, since CF is incapable of deploying code to edge resources. After completing the whole application setup with edge deployment in step 5, the SDE configures the elements required for the AWS Managed Monitoring of the deployed application and the RO component. These are also exchanged with CF in step 6 and are set up in step 7. In steps 2 and 6, application and monitoring code and other artifacts are shared between the SDE and CF in compressed format using AWS's own object storage service, Amazon S3. In accordance with these, the SDE goes through four phases internally for creating the Application and Monitoring CloudFormation templates and artifacts for applications written in Python. (Of course, the concept can be applied to other programming languages supported by AWS Lambda as well). In phase D1, external libraries and developer defined function resources required by the application components are collected. These are compressed depending on component placement and then uploaded to Amazon S3. In phase D2, the actual code of the application components gets processed. The code for every component group defined by the LPO is collected and purpose-built Wrapper code is added to them as well. Two special functions are added to the application. The Entry point function is able to divert incoming requests to the application's own entry point, be it on an edge node or in the cloud. The Edge monitor function performs CPU and memory load measurements on edge nodes. The resulting AWS Lambda functions are compressed and uploaded to S3. In phases D3 and D4, the SDE formulates the application and monitoring CF templates, respectively. During application template creation, the incoming layout and flavor specifications are used. These are complemented with code and artifact locations in S3 as well as additional AWS resources that are needed in order to set up the application properly. Such resources include but are not limited to AWS Lambda layers, versions, aliases, Amazon VPC, subnets, Internet Gateways, NAT Gateways, ElastiCache clusters, AWS IAM security policies and roles as well as Greengrass groups, cores, resources and subscriptions. The SDE's Python3 implementation contains around 2500 LoC. The AWS Managed Application is ready to run as soon as CF finishes with step 3 if the application does not use edge resources, or at step 5 otherwise. As depicted by the bottom side of Fig. 6, all interactions of application components with each other or with data stores traverse our Wrapper. This lightweight runtime extension is capable of hiding (edge or cloud) placement differences, function invocation and data store access specifics from application components. It serves as the unique standardized entry point to functions that have been grouped together by the LPO, and it even relays function specific environment variables. Configuration of the Wrapper is also performed via environment variable assignment in the template at phase D3 within the SDE. Here, the specific Lambda, IoT topic and Redis endpoints are supplied to the Wrapper.
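The Wrapper's role as a standardized, environment-configured entry point can be pictured with the following minimal sketch; the environment variable names, handler signature and routing rules are our own illustrative assumptions, not the actual implementation.

```python
# Minimal sketch of a Wrapper-style handler: endpoints come from environment variables
# (set by the SDE in phase D3) and an outgoing call is routed to the edge or the cloud.
# Variable names, handler signature and routing rules are illustrative assumptions.
import json
import os

CLOUD_TARGET = os.environ.get("NEXT_FUNCTION_ARN", "")    # cloud Lambda of the next group
EDGE_TOPIC = os.environ.get("NEXT_FUNCTION_TOPIC", "")    # IoT topic of the next group on the edge
NEXT_ON_EDGE = os.environ.get("NEXT_ON_EDGE", "false") == "true"

def invoke_next(payload: dict) -> None:
    """Forward the call to the next function group, hiding its placement from the caller."""
    if NEXT_ON_EDGE:
        import greengrasssdk                      # only available inside the Greengrass runtime
        greengrasssdk.client("iot-data").publish(topic=EDGE_TOPIC, payload=json.dumps(payload))
    else:
        import boto3
        boto3.client("lambda").invoke(
            FunctionName=CLOUD_TARGET,
            InvocationType="Event",
            Payload=json.dumps(payload).encode("utf-8"),
        )

def handler(event, context):
    # In the real Wrapper the grouped user functions are called synchronously here,
    # metrics are logged and a reconfiguration flag may be checked before forwarding.
    result = {"input": event}
    invoke_next(result)
    return result
```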
During normal application operation, our Wrapper implementation, comprising 630 lines of Python code, adds negligible overhead to the application's E2E latency, as configuration parameters are cached (in Python dictionaries and objects) and the Wrapper's internal handler components are extremely lightweight. In case of a cold start, when configuration parameters need to be processed, the Wrapper overhead is slightly greater but still remains under 2 ms.

C. Automated Monitoring

Since every communication attempt between application resources goes through the Wrapper, it proves to be ideal for handling monitoring related functionality as well. As these invocations and data store accesses traverse the Wrapper, it measures and then logs call latency and rate, blocking delay as well as function execution time. An interface for logging custom application level metrics of application components is provided as well. Measured values are reported to the AWS Managed Monitoring component. This entity has three tasks: aggregating metrics, sending out alerts and providing a queryable interface. The first two tasks are handled by Amazon CloudWatch (CW). When logging monitoring data to CW, the Wrapper experiences a significant, available-CPU dependent delay. In case of the smallest Lambda flavor (128 MB), we experienced 125 ms on average with high variance using the highest available batching (20 metrics). However, thanks to implementation details, this does not contribute to application delay at all (metrics logging is running virtually in parallel with the application). It does, however, contribute to the price of maintaining the application. Data coming from the Wrapper goes to CW Metrics and limit violations are handled by CW Alarms. This latter AWS service is configured within the monitoring template in phase D4 of the SDE for conditions coming directly from the Developer or the LPO. Alerts are sent out from the Monitoring component to the LPO and RO in steps 8a and 8b, respectively, using the integration between CW and Amazon Simple Notification Service (SNS). For measurement data that does not trigger alarms, the component offers access via a Metric Inquirer function that is also deployed at steps 6-7.

D. Dynamic Reoptimization

The above discussed features of the Monitoring component serve as a basis for the closed loop reoptimization of the application. After deployment, the application starts to log usage metrics automatically that either trigger an alarm, or one of the optimization components discovers a nonalerting change in the application's behavior and initiates a change (see steps 8a and 8b). Depending on which component reacts, we define two control loop behaviors that differ in their reaction timescale as well as in their possibilities to make changes in the application.

1) Steady-State Control: The steady-state control loop strives to follow usage trends, daily profiles, or changes in the application users' behavior. As a default means to accomplish this, the LPO periodically queries the Managed Monitoring component via the Metric Inquirer facility of the latter (see step 8a) and updates its own Platform and Application Models. Periodicity of the query is dependent on the LPO configuration and certain use cases can require more frequent updates than others. For convenience, the Monitoring component is able to trigger the LPO directly as well, supplying notifications about changes in reported metrics out of regular query periods.
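For illustration, the kind of metric logging the Wrapper performs towards CloudWatch (Section VI-C above) could look roughly like the snippet below; the namespace, metric and dimension names are invented for the example, and the real implementation batches and parallelizes this reporting.

```python
# Sketch of reporting Wrapper-measured metrics to Amazon CloudWatch in one batched call.
# Namespace, metric and dimension names are illustrative placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")

def report_metrics(group_name: str, exec_time_ms: float, invocation_delay_ms: float) -> None:
    cloudwatch.put_metric_data(
        Namespace="ManagedApplication/Wrapper",
        MetricData=[
            {
                "MetricName": "GroupExecutionTime",
                "Dimensions": [{"Name": "FunctionGroup", "Value": group_name}],
                "Unit": "Milliseconds",
                "Value": exec_time_ms,
            },
            {
                "MetricName": "InvocationDelay",
                "Dimensions": [{"Name": "FunctionGroup", "Value": group_name}],
                "Unit": "Milliseconds",
                "Value": invocation_delay_ms,
            },
        ],
    )

if __name__ == "__main__":
    report_metrics("group-1", exec_time_ms=212.0, invocation_delay_ms=84.5)
```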
In the current implementation, the SDE sets up such triggers as application E2E latency alarms in the Monitoring component in phase D4 of the deployment, when a latency constraint is provided in the service specification. Both types of changes can induce service reoptimization in the LPO. In order to decide whether the deployed layout is worth replacing with a new one, a dedicated redeployment metric is applied. The LPO compares the user-given threshold value with the weighted sum of the following values to make the deployment decision: 1) the costs of the relative change in the layout; 2) the relative profit gain, which is the difference of the deployed layout cost calculated with the updated service parameters and the new layout cost; 3) the summed latency gain; 4) the relative latency margin on critical paths; and 5) the number of avoided latency constraint violations. When the LPO deems a new layout better than the currently deployed one based on this metric, it initiates a full redeployment incorporating steps 1 through 6.

2) Dynamic Runtime Reconfiguration: In this case, the RO is the component making changes in the application. This has limited possibilities as it can switch between predeployed layouts by offloading functions from edge nodes or reverting these changes. The RO interacts with the Monitoring component in step 8b. With push-based alerting, the RO cannot get triggered sooner than 10 s after an alarm condition presents itself, because of CW Alarms limitations. Faster reaction time is achieved by placing periodic queries to CW Metrics, realizing poll-based execution. In order to perform such queries by the RO, we use a combination of an Amazon EventBridge event and an AWS Lambda function. Both of these are deployed at step 7, and EventBridge sets up a trigger event that gets fired every minute (which is the shortest time period for the service). The event trigger invokes our custom-made RO Trigger function that schedules the RO to run frequent periodic queries to the monitoring component. In either push or poll-based execution, the RO interacts directly with the Wrapper as shown by step 9 in Fig. 6. For on-demand functions in the cloud, the SDE configures a Redis instance at deployment, while for edge functions it uses the one available on the edge. The RO writes offloading information to these Redis instances and the Wrapper checks them before each function execution. As this data is small in size and Redis read operations have small latency, the average delay of this overhead is negligible (less than 1 ms) compared to the execution time of the application component. After reading a change request, the Reconfiguration Handler in the Wrapper changes subsequent invocations from edge local calls to cloud calls or vice versa.

VII. EVALUATION

In this section, we evaluate the performance of our system investigating the use case presented in Section III, in varying operating regimes. First, the main operation phases of the overall system are characterized. Second, the performance of the steady-state control loop is analyzed, and finally, we evaluate the performance of the dynamic runtime reconfiguration loop. For describing our software deployment layouts, we introduce an interval-based notation of the form $P_X = \{[i\text{-}j]_C, [k]_E, \ldots\}$: groups of single or multiple consecutive application functions, denoted by their ordinal indices $i, j, k \in \{1, \ldots, n\}$ (see also Fig. 1), are given in square brackets, and the subscripts C and E identify the assigned cloud or edge flavors, respectively.
For example, in case of $P_{ECC} = \{[1\text{-}4]_E, [5]_C, [6]_C\}$, functions #1-#4 (Image Grab to Object Detection Stage 1 in Fig. 1) are placed within a group assigned to the edge, while functions #5 (Cut) and #6 (Object Detection Stage 2) are deployed in two distinct groups in the cloud. The experiments are conducted in Amazon's data centers located in the Ireland (eu-west-1), Frankfurt (eu-central-1) and Oregon (us-west-2) regions. Table II illustrates the performance characteristics of the overall system when deploying select layouts. The first five options are generated by our system during normal operation. These show how the LPO changes the application's layout as it transitions from being completely cloud-based to completely deployed to the edge node, depending on different circumstances. (We discuss these cases and circumstances in more detail in Section VII-B.) The last three layouts in the table are corner cases created manually for comparison. The operation of the SDE is split into four distinct phases: translation from the LPO to AWS CloudFormation (CF) format, application code management (application source code and external library collection, and upload), CF deployment and edge deployment. For each layout, we executed 25 iterations where only application components were updated; state stores were not changed. Our system was executed on a t3a.2xlarge Amazon EC2 instance running in the same region chosen for deploying the application.

A. Overall System Performance

As shown in the table, LPO execution and LPO→CF format translation have the lowest impact on deployment delay, both having subsecond values (under 7 ms) with our simple application. The LPO does not display high variance between different layouts as its execution time is dependent only on the number of used flavors, which is now a fixed parameter. Using a fixed application, the translation's execution time depends on two factors: 1) the number of groups in the layout and 2) the placement of these groups. As Table II shows, creating more groups naturally increases translation time, as setting up a function in AWS usually requires the specification of multiple resources. Assigning functions to the edge node slows down the translation step for the same reason. Code management and CF deployment take significantly more time. The former requires 20-81 s to complete, as handling external libraries and ML models contributes heavily to phase latency. In case of the simplest $P_C$ layout, a single artifact containing all the code, libraries and ML models is created and uploaded to AWS. In the worst case, $P_{6C}$, all these are packaged separately and sequentially for the six different functions, resulting in six comparatively big deployment packages. Phase delay is reduced when functions are mainly deployed to the edge, thanks to merging libraries and ML models into one single artifact on the edge. In case of $P_{ECC}$, however, the SDE still has to create deployment packages for functions #1-#4 and their shared libraries as well as separate ones for functions #5 and #6, which results in a comparatively high phase delay. CF deployment adds another 1-3.3 min to the complete deployment time since connected Lambda functions are deployed sequentially instead of in parallel by CF and their update takes around 20 s each. As the difference between $P_{6C}$ and $P_{6E}$ (every function deployed separately in the cloud or on the edge, respectively) shows, edge related setup further adds to phase delay. The increase is due to the fact that for the edge, AWS needs to configure the complete Greengrass setup.
This includes not only the functions but also the merged artifact containing the libraries and ML models, as well as the AWS IoT communication topics between the function groups. Edge deployment is comparatively quicker and less dependent on function grouping as external packages, shared among application functions, are deployed together in a common edge resource by AWS Greengrass. One or two function groups are deployed in 6.1-7.3 s, while assigning each function to a different group increases phase latency only by an additional 0.9 s. Overall, our measured complete deployment delay is 1.2-4 min depending on the application layout. As the LPO's measurement update period is 15 min in our tests, the delay for a complete reoptimization cycle via the steady state control loop can reach 19 min in total.

B. Reoptimization via the Steady State Control Loop

In order to design and conduct comprehensive test scenarios covering all cases for our proposed system, we perform preliminary simulations with the LPO module. The optimization algorithm is validated using a test request based on our use-case application described in Section III. Fig. 7(a) depicts the resulting groupings for the applied limits (horizontal) and the assigned flavor for each function component (vertical), while Fig. 7(b) shows the predicted values of E2E latency, overall application cost and the partial cost required to be paid to the cloud provider. The results align with our expectation as stricter latency limits force the LPO to utilize compute resources at the edge, and otherwise to prefer the cheaper but, in terms of E2E latency, underperforming public cloud. It can be observed that the jumps in the overall cost at 2.1 and 2.6 s correlate with the increases in the aggregated function execution time assigned to the edge, while the predicted latency values give a close approximation to the upper latency limits, but always fall below them. Regarding the different deployment scenarios, we can also notice that only five distinct and feasible software layouts are distinguished and generated by the LPO, out of the 132 possible grouping options. (Since the number of noncrossing partitions of an n-element set/chain is given by the nth Catalan number $C_n$, where n equals the number of functions in our case, our use case application has $C_6 = 132$ distinct layouts [32]). These results show that the LPO can also be used to calculate feasible application layouts for a given latency limit in advance, thus significantly reducing the state space of deployment options for additional layout reconfiguration features (see Section VII-C). Based on the simulation outcomes, we construct a comprehensive and all-encompassing experiment to validate the behavior and performance of our system on AWS. Although our proposed system implicitly manages the cloud-related performance fluctuations with the help of the control loops, there is no way to control the internal network characteristics and server workloads in a public cloud environment. For this reason, we select E2E latency and detected object count (an application specific metric) as the two input parameters which may vary in time and may affect the deployed application layout considerably. Therefore, our steady state control loop experiment is divided into two phases to observe the effect of these parameters' changes separately, from a common initial state, while covering all the feasible deployment options.
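As a quick sanity check of the layout count cited above, the nth Catalan number can be computed directly; the snippet below is only a verification aid and is not part of the system.

```python
# Verify that the 6th Catalan number, used above as the layout count, equals 132.
from math import comb

def catalan(n: int) -> int:
    # C_n = binomial(2n, n) / (n + 1)
    return comb(2 * n, n) // (n + 1)

print([catalan(n) for n in range(1, 7)])   # [1, 2, 5, 14, 42, 132]
```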
For the experiment, we utilize dedicated requests generated during the previous simulations and apply two distinct input sources: a low (LO) and a high (HO) object count video stream resulting in 1 and 5 objects per frame on average, respectively. The detected object count directly influences the invocation rate between the last two functions, Cut and Object Detection Stage 2, as highlighted in Section III. The experiment is conducted in the Ireland region, whereas a dedicated VM with 8 vCPUs in Frankfurt is set up as the edge node. Each function is assigned to the runtime flavor with 1024-MB memory. The LPO is configured to apply a 15-min reoptimization period, which is the time window used for periodically obtaining the measurement updates and for predicting the different layout costs as well. The used system parameters are also summarized in Table III.

1) Phase 1: In the first phase of our experiment, we deploy our use case application using a reasonably permissive latency limit of 3.0 s and apply the LO video stream as test input. Then, we switch to the HO stream during reoptimization period 2, altering the application specific metric. At the initial deployment, the LPO decides to encompass all functions in a single group ($P_C = \{[1\text{-}6]_C\}$), resulting in 2.2 s measured E2E latency. Based on live measurements acquired directly from CloudWatch, shown in Fig. 8(a), we can state that both the deployment and the detected object count metric remain unchanged after the first reoptimization period. As the input stream is altered from LO to HO, the detected object count, thus the invocation rate of the last function, rises. Consequently, the measured E2E latency of the active deployment layout exceeds the 3.0 s constraint, which is detected by the LPO at the end of the second period. At this point, the LPO initiates the service redeployment process. During the reoptimization, the LPO calculates a new optimal layout, while meeting the given latency constraint, by moving the last component into a separate group ($P_{CC} = \{[1\text{-}5]_C, [6]_C\}$). The reason behind this decision is that the E2E latency can be reduced by eliminating the significant intragroup serializations and leveraging the platform-supported parallelization, in exchange for higher operational cost and additional intracloud invocation delay. Afterwards, the new layout remains optimal, keeping a steady state setup with an experienced 2.7 s E2E latency, and no other redeployment is performed in spite of the fluctuations in the measured values. Fig. 8(b) sheds light on the decision process of the LPO from an internal point of view. It depicts the predicted cost in millionth dollar units (μ$) and the E2E latency predicted at the beginning of the given periods, along with the measured E2E latency acquired at the end of the periods for each step. It also visualizes the predicted cost of the nonreoptimization option, which is the recalculated cost of the layout in operation, but with the updated metrics, used in the layout replacement decision. We can observe at period 2, when the measured value exceeds the limit and deviates from the predicted latency, that the LPO opts for a new layout, despite it being 3.4% more expensive, in order to avoid the constraint violation. Fig. 8(b) also confirms that the predicted E2E latency aligns with the measured values in steady state, having only 0.8-2.6% difference.

2) Phase 2: In the second phase, we examine the effects of different E2E latency limits on the generated layouts, similarly to the simulation tests before.
Continuing our experiment, we carry on with the HO video stream and set a 4.0 s latency limit to ensure the same initial cloud-only deployment as for Phase 1. After reaching the steady state ($P_C$), we deploy different layout options by iteratively sending new service requests with decreasing latency limits. The used arbitrary limits, which are 4.0, 3.0, 2.6, 2.1, and 1.7 s, are chosen from the simulations' results to cover all the generated deployment options. Between the deployments we leave enough time (at least 15 min) for our system to update the application metrics and confirm the steady state before proceeding to the next deployment. Fig. 9(a) presents the measured E2E latency acquired from CloudWatch for the entire duration of Phase 2. We can observe that the experienced latency values stepwise follow the decrease of the applied limits, providing stricter E2E latency in each step. As examined in the previous phase, between the first two cloud-only deployments, $P_C$ and $P_{CC}$, we can achieve around 0.9 s latency gain due to the platform-provided parallelization. By applying the next two constraints, we get the mixed deployments $P_{ECC} = \{[1\text{-}4]_E, [5]_C, [6]_C\}$ and $P_{EC} = \{[1\text{-}5]_E, [6]_C\}$, where the limits force the first several functions to be grouped together and assigned to the edge. With these layouts we can further reduce the E2E latency, despite introducing higher edge-cloud invocation latency. Utilizing edge resources moves processing closer to the video source, while keeping the last function in the cloud can still leverage its innate parallelization capabilities. Finally, applying the strictest latency limit results in a two-group, edge-only layout $P_{EE} = \{[1\text{-}5]_E, [6]_E\}$. Apart from the cloud-only scenarios, we can observe notable downtime during the layout replacement operation when the edge flavor is involved. The lack of support for seamless transition stems from the limitation of AWS CloudFormation, as described in Section VII-A. Although supporting downtime-free replacement in the steady state control loop is a matter of future work, our system offers rapid and seamless switching between layouts leveraging the runtime reconfiguration loop. Additionally, we also observed an increased relative standard deviation (2.9-6.0%), calculated offline from exported CloudWatch logs, in the measured E2E latency compared to cloud-only layouts (0.8-1.1%). This stems from the presence of edge-cloud invocation in the deployments. Fig. 9(b) depicts the predicted and measured latency values along with the predicted costs for the aforementioned layouts. For the sake of comparison, we also deploy and measure three manually assembled layouts, which represent the de facto, cloud-native deployment approaches of executing each code component separately ($P_{6C}$, $P_{6E}$), or encompassing them together ($P_E$). Applying these corner cases we can achieve similar E2E latency as with the corresponding cloud-only and edge-only layouts ($P_C$, $P_{EE}$) generated by the LPO, but at a 2-2.4 times higher cost (up to 22%). In addition, if we compare the LPO-calculated layouts to these manually crafted ones, while considering the associated latency limits, we can observe a significant 3.2 times cost increase in the worst case ($P_{EE} \leftrightarrow P_{CC}$). These differences in the layout costs confirm our argument, that is, additional optimization mechanisms with precise models are required for operating serverless applications over public clouds in a cost-efficient manner.
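The relative standard deviation figures quoted above were computed offline from exported CloudWatch logs; a minimal version of that post-processing could look like the following, assuming the latency samples have already been exported to a single-column CSV file (the file name and format are illustrative).

```python
# Compute the relative standard deviation (RSD) of E2E latency samples exported from CloudWatch.
# The CSV file name and single-column format are illustrative assumptions.
import csv
import statistics

def relative_std_dev(csv_path: str) -> float:
    with open(csv_path, newline="") as f:
        samples = [float(row[0]) for row in csv.reader(f) if row]
    mean = statistics.mean(samples)
    return 100.0 * statistics.stdev(samples) / mean   # RSD in percent

if __name__ == "__main__":
    print(f"E2E latency RSD: {relative_std_dev('e2e_latency_samples.csv'):.1f}%")
```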
Moreover, it is worth highlighting that during Phase 2 of the experiment, the predictions approximate the measured values well, including the mixed deployments, with only 0.5-3.8% overestimation.

C. Dynamic Runtime Reconfiguration

As presented in Section VI-D2, two versions of the runtime reconfiguration loop are available: a push-based solution where Amazon CloudWatch (CW) sends out alarms to the RO, and a poll-based mechanism where the RO actively queries CW for limit violations. Both approaches are affected by the capabilities of CW. The former is limited by a 10 s measurement window, while the latter by a 1 s one. CW also needs an undisclosed amount of time to consolidate metric data. As depicted in Fig. 10, we set up a test environment using our system to investigate the detection time of limit violations. Our test application, consisting of a single Trigger Event Source component, is deployed in the cloud. Monitoring happens the same way as described in Section VI-C, via CW Metrics. The application component sends out trigger events that cause limit violations and logs their generation time. The RO also logs the time when it detects these violations. The time difference between the event generation and its detection is calculated by a separate Lambda function. Our measurements show that the effective feedback delay in case of the push-based option is 20.13 s on average, with 15.25 s minimum and 20.4 s maximum values, based on our 100 tests. For the poll-based one, however, we can achieve a 3.2 s average delay with 0.58 s minimum and 8.95 s maximum values. To determine the total reconfiguration time of the application, we have to add another approximately 2 ms in both cases, when communicating with cloud functions. This delay is due to data exchange between the RO and function Wrappers via Redis instances. Edge reconfiguration is slower, as the 2 ms exchange latency is increased by the network delay between the RO's cloud region and the edge location. We also investigated the performance of component offloading from edge to cloud in our object detection application with both available options. We set up our edge node having four CPU cores in the eu-central-1 (Frankfurt) AWS region, while us-west-2 (Oregon) was chosen for cloud execution. The sample video was streamed from Budapest, outside of AWS, with a sample frame rate of 2/s. In this experiment, we use two layouts from those given by the LPO in Section VII-B: $P_{EE} = \{[1\text{-}5]_E, [6]_E\}$ as the initial deployment, and $P_{EC} = \{[1\text{-}5]_E, [6]_C\}$ for offloading Object Detection Stage 2 to the cloud. RO-driven offloading is triggered by the object count application level metric, supplied by the Cut function, surpassing the number of the edge node's CPU cores. If the application is triggered more frequently than once per the minimum execution time of the Object Detection Stage 2 function, the metric can signal an edge node overload condition. In such cases, concurrent instances of the function would consume more CPU resources than available. Fig. 11 depicts the effect the different alarm detection options have on the application performance. The displayed metrics are taken from CW and, in case of the object count and E2E delay, use a 1 s measurement window for aggregation. In case of CPU load, however, the Edge Monitor component logs the aggregated utilization metric less frequently. As expected, the poll-based mechanism outperforms the push-based one in every regard.
As the object count in the video stream increases, the push-based loop is slow to react and the edge CPU load reaches 100%, while the E2E latency tops at 9.87 s. The poll-based option experiences a far lower rise in the E2E latency (with a maximum of 2.75 s) and manages to keep the CPU load on the edge in check, with a maximum of 79%, which is a 16% rise compared to normal behavior. After the end of application reconfiguration and function cold start latency, the E2E latency settles at 2.3 s (up from the original 1 s) and edge CPU usage at 33%. The 16 s transient time of the poll-based option is significantly shorter than the 43 s of the push-based one (refer to the intervals $T_3$ and $T_1$, respectively, in the figure). The comparatively long transient time is caused by the increased execution time on the edge node as well as the cold start delay for starting up functions in the cloud. After the object count decreases below four, the RO shifts Object Detection Stage 2 back to the edge. As both E2E latency and edge CPU usage return to their original values, we can observe that the transition, in case of the poll-based reconfiguration, is unsurprisingly quicker again ($T_4$ = 7 s compared to the push-based version's reaction time of $T_2$ = 16 s). Based on our tests, it is clear that although push-based application reconfiguration is cheaper to realize, it might not be sufficient for avoiding edge node overload. Depending on application characteristics, the poll-based option can improve performance, but at higher invocation rates it might fail as well. As our implementation reaches the limits of CW, if an even smaller reaction time is required, a different solution should be used for collecting application metrics.

VIII. CONCLUSION

In this article, we adapted the cloud native programming and serverless operating techniques for latency sensitive IoT applications. A novel system was proposed on top of public cloud platforms providing serverless solutions for central and edge domains. The general approach was applied to Amazon's AWS leveraging its FaaS offerings, Lambda and Greengrass. Our main findings are summarized as follows. 1) We argue that application latency and operational costs are significantly affected by the grouping of the constituent functions (how to group and package user functions into FaaS platform artifacts); the selected flavors providing the runtime for the functions; and the placement of the components (central cloud or edge domains). Developers or operators of latency sensitive applications can benefit from defining their expectations on latency and cost, while scaling to the current workload is delegated to the cloud providers. 2) We propose to add an optimization component on top of public cloud stacks to optimize deployment costs while keeping soft latency boundaries. This component controls the deployment via available services and exposed APIs. Such a control loop allows supervising serverless deployments in the range of minutes or tens of minutes, which is sufficient to follow daily profiles and usage trends. 3) In order to support control on lower timescales, the platform and the FaaS runtime are required to provide direct configuration interfaces for swapping layouts. We presented an extension to a state-of-the-art FaaS platform implementation. As a result, control within a few seconds can also be realized if different deployment options are onboarded in advance. 4) Instrumentation is needed to implement the detailed monitoring required as input for optimization.
Customization of cloud monitoring offers a simple implementation, which enables capturing the performance characteristics of the deployed applications and the underlying platforms with acceptable accuracy. Therefore, adequate models of applications and platform components can be established, hence such a monitoring system fulfills all requirements to enable closed-loop control for latency sensitive serverless applications.
African Forest Honey: an Overlooked NTFP with Potential to Support Livelihoods and Forests

In parts of the developing world, deforestation rates are high and poverty is chronic and pervasive. Addressing these issues through the commercialization of non-timber forest products (NTFPs) has been widely researched, tested, and discussed. While the evidence is inconclusive, there is growing understanding of what works and why, and this paper examines the acknowledged success and failure factors. African forest honey has been relatively overlooked as an NTFP, an oversight this paper addresses. Drawing on evidence from a long-established forest conservation, livelihoods, and trade development initiative in SW Ethiopia, forest honey is benchmarked against accepted success and failure factors and is found to be a near-perfect NTFP. The criteria are primarily focused on livelihood impacts and consequently this paper makes recommendations for additional criteria directly related to forest maintenance.

Introduction

Tropical forests are under threat from deforestation and degradation, caused by over-exploitation, logging, and conversion to other land uses (Megevand 2013; Bennett 2015; FAO 2016). Many different solutions have been explored, ranging from forest certification to statutory protection and community forest management (Nelson et al. 2009; Kalonga et al. 2016). One strategy that gained traction in the 1990s focussed on the development of NTFPs as a means of making the forest pay its way and become a competitive land use for forest-fringe households (Peters et al. 1989). The idea is that if forests have value for local communities, they will be more inclined to maintain them. NTFP harvesting is described as "the practice of extracting economically valuable, non-timber forest products leaving the forests structurally and functionally intact" (Nepstad and Schwartzman 1992). Evans (1993) called this the "conservation by commercialization" hypothesis. Enthusiasm for this "win-win" solution to both poverty and deforestation resulted in significant research and action in the 1990s, with the initiation of development projects aimed at commercializing NTFPs to increase their value. These explored the potential of NTFPs as diverse as ant larvae in Indonesia and baobab juice in Malawi (Césard 2004; Kambewa and Utila 2008). However, many of these failed to achieve commercial viability and studies began to review the concept's efficacy in safeguarding forests (Arnold and Pérez 2001; Kusters et al. 2006; Kusters 2009). A rich body of literature has identified and discussed various criteria for success and failure (Belcher and Schrekenberg 2007; Shackleton et al. 2011). The purpose of this paper is to present the case of one NTFP, African forest honey, and to consider it against these increasingly well-understood factors. African forest beekeeping differs from other beekeeping systems, such as large-scale bee farming or back-yard beekeeping (Wainwright 1989; Clauss 1992; Crane 1999; Bradbear 2008; Lowore and Bradbear 2015). African forest beekeeping utilizes the wild honey-bee population as a resource and does not involve manipulating this natural population. The bee colonies of the indigenous African honey bee Apis mellifera live within the forest and forage on nectar and pollen from a very wide range of floral species. Forest beekeeping involves the construction and siting of man-made beehives, thus increasing the number of bee nest sites in a given area.
Hives are made from locally available materials, sourced from the forest, and vary in materials and design (e.g., where the entrance is), but the basic structure is a hollow cylinder. These are placed in forest trees and occupied by wild swarms of bees that are genetically indistinguishable from the wild population. Once or twice a year, depending on local seasonal cycles, beekeepers harvest honey comb, comprising two products in one, honey and beeswax. Forest beekeeping is not honey hunting, which involves taking honey comb from wild-bee nests located in natural cavities (e.g., hollow trees and cavities in rocks). It also does not include the use of frame or top-bar hives, even if these are located in forests, since they are movable comb beekeeping systems that allow colony manipulation. In movable comb systems, beekeepers tend to focus on individual bee colonies as productive units, with hives kept close to home to manage and protect the colonies and hives. In forest beekeeping, the productive unit is the forest and its whole bee population, and the system utilizes large forest areas that are unpredictable, indefensible and distant, hence making individual colony management impractical. Forest beekeeping is an extensive, low-input system (Bees for Development 2012, 2013a, 2013b). Communities that engage in forest beekeeping in Africa depend heavily on income derived from selling honey and beeswax. In Mwinilunga, North West Zambia, 40,000 people depend on forest beekeeping using 60,000 ha of forest, with 1000 tonnes of honey purchased from beekeepers in 2016 (Dan Ball Oct 2014, Oct 2016). For many households in south-west (SW) Ethiopia, honey is the primary source of cash (Endalamaw 2005) and the number of hives is a wealth indicator, with anyone having 100+ hives considered rich. Unlike other wealth indicators, such as livestock, which the poor can rarely afford, many poor people do have small numbers of hives (van Biejnen et al. 2004). In Cameroon, honey accounts for just over half of household income for thousands of beekeepers (Ingram and Njikeu 2011). Beekeeping in Tanzania is so important (average annual export earnings of US$2.5 million) that it has a dedicated government department and 39,000 ha of forests set aside as bee reserves (Mwakalukwa 2016).

Forest Beekeeping as an NTFP and its Relationship to Forest Management

Despite the interest in NTFP commercialization, forest honey seems relatively absent from the literature. For example, various notable NTFP research collections barely mention honey (Table 1). However, honey has not been overlooked regarding livelihoods (Bradbear 2004). Concerning forests, beekeeping is often promoted as being forest-compatible, but less often as a driver of forest conservation (Hausser and Savary 2002; IUCN 2012; Timmer and Juma 2005). In West Africa IUCN have supported beekeeping projects as components of their biodiversity conservation program (Arsene Sanon 2015, personal communication) and the Tanzanian government have a policy that promotes beekeeping to support forest conservation (Hausser and Mpuya 2004). However, the scientific rationale for these projects and the evidence on their efficacy for forest conservation is limited (Ingram 2014) and the role of forest beekeepers as forest conservers is not understood. Mickels-Kokwe (2006, p 19) argues that in Zambia, "the linkage between beekeeping and forest management has been considered to be strong. … the precise nature of this relationship, however, appears not to have been researched explicitly."
Bradbear (2009, p 58) concurs, ''there has been little research to investigate how beekeepers make deliberate and conscious efforts to protect and conserve forests… this is an area of investigation that has been neglected." Literature on these relationships is limited, with few exceptions. De Jong's study on forest conservation in West Kalimantan reported that honey had been traded since the 1930s and found that strong customary rules to protect honey forests existed among forest beekeeping communities, including one ''Maté maté rule that no person except the owner of the honey tree may slash the forest within a radius of about 100 m… This rule ensures that the forest surrounding a honey tree is maintained and the habitat for bees is preserved (De Jong 2000, p 636)." Most NTFP literature focuses on the income benefits of honey. For example, Ahenkan and Boon (2011) highlight the importance of NTFPs (including honey) for women's empowerment in Ghana, but make no link to forest conservation. Some authors consider the promotion of beekeeping as a livelihood alternative to others that cause forest loss, but focus on farm-based, not forest beekeeping (Appiah et al. 2009;Tomaselli et al. 2012). Andrews (2006) and Labouisse et al. (2008) consider beekeeping as compatible with forest conservation but do not regard it as a driver. Within the wider beekeeping literature there is more insight into the conservation impacts of beekeeping. Clauss (1992) noted that Zambian beekeepers were worried about the impact of late fires 1 between August and October when trees and flowers of key nectar species are particularly vulnerable to scorching. Consequently, beekeepers advocate early burning to prevent such damage. Nshama (2003) reported that Tanzanian beekeepers sustained specific bee fodder plants, and Lalika and Machangu (2008) found beekeepers protected the forest around their hives and actively discouraged people from cutting timber. Endalamaw (2005) reported that 97% of beekeepers in SW Ethiopia were involved in at least one form of forest enhancement activity, including tree planting, preserving big trees, and protecting young ones; 34% helped to conserve the forest by lobbying or by entering into local agreements to reduce bushfires. Wiersum and Endalamaw (2013) also found that local forest governance arrangements in SW Ethiopia helped beekeepers support forest conservation that maximized honey production. Bradbear (2009, p 58) draws evidence of the positive link between beekeeping and forest management from Congo, Benin, Zambia, and Tanzania and explains that ''Apiculture's unique feature as an activity is the fact that its continuation, through pollination, fosters the maintenance of an entire ecosystem, and not just a single crop or species.'' In Cameroon, Ingram and Njikeu (2011, p 36) noted that ''Beekeeping can contribute to environmental integrity because some beekeepers protect the forest'', and Ingram (2014) later concluded that beekeepers rarely self-identified as active conservationists, but were so as a result of their pragmatic interventions. 
Finally, Neumann and Hirsch (2000, p 88) noted "that customary management for commercial NTFP production appears to occur least often in natural forests" but that "one example of commercial NTFPs that are managed in natural forests is honey and beeswax from beekeeping in Miombo woodlands in Africa."

Methodological Approach

The paper considers how African forest honey can deliver positive outcomes for livelihoods and forests, using evidence from a project in south-west Ethiopia. The analytical process sought to:

• Identify and discuss known success and failure factors from the NTFP literature;
• Present a case where African forest honey is successfully commercialized;
• Compare forest honey against the known success and failure factors, with evidence from the case;
• Analyze the NTFP-PFM project's support of forest honey regarding livelihoods and forest conservation, and reflect on and critique NTFP failure and success factors.

(Footnote 1, on late fires: set by farmers clearing land, or by hunters.)

In order to assess the potential benefits of forest honey as a near-perfect NTFP, a case study analysis was undertaken (Yin 2011). The case study uses the honey-related evidence from the European Union (EU)-funded Non-Timber Forest Products-Participatory Forest Management Project in southwest Ethiopia. This case was selected because the length of the project intervention (2004 to present) provides a wealth of documented information about changes and impact, and the authors have all been involved in the management of the project. The case study approach utilizes evidence from the five project area woredas, Anderacha, Bench, Gesha, Masha, and Sheko. Evidence was drawn from five principal project reports: van Biejnen et al. (2004), Abebe (2013), Bekele and Tesfaye (2013), NTFP-PFM (2013), and Lowore (2014). Two non-project reports covering the same area also provided important data (Endalamaw 2005; Bees for Development 2017).

NTFP Commercialization, Success, and Failure Factors

The enthusiasm for NTFP commercialization led to a rich body of work, which latterly has been much more nuanced and grounded (Ros-Tonen and Wiersum 2005; Kusters et al. 2006; Belcher and Schrekenberg 2007; Sills et al. 2011). These, and other researchers, considered the features of NTFP trade that aid and hinder positive outcomes. It is important to clarify that these success and failure factors (Table 2) are considered in the relatively narrow sense of achieving commercialization, livelihood, and conservation outcomes and do not reflect broader benefits. (In Table 2, T indicates factors that aid or constrain commercialization and trade, L livelihood benefits, and C conservation outcomes.) Sills et al. (2011) analyze the evolution of the "conservation by commercialization" concept, and explain the swing from optimism to pessimism back to an "emerging middle ground." They argue that the strengths of NTFPs are not so much in their promise as a "silver bullet" but in their diversity and collective contribution to rural livelihoods, and that forest modification is an important step in increasing incomes from NTFPs (Sills et al. 2011). They suggest that a holistic understanding is required and concur with Ros-Tonen and Wiersum (2005) that NTFPs are best understood in relation to the overall context of land uses and livelihood conditions. This paper makes no attempt to dispute these evident truths, yet chooses forest honey, from among the diversity of NTFPs, as a focus for analysis.
This approach is taken because the merits and features of forest honey as a commercially traded NTFP have been relatively unexplored, yet forest honey appears to offer considerable potential to deliver on both forest management and livelihood outcomes.

Forest Honey Trade in SW Ethiopia

The NTFP-PFM project ran from 2003-2013, and is now continuing with new funding and a new name. The project was located in the moist montane forests in the Bench-Maji, Sheka and Kefa Zones of the Southern Nations, Nationalities, and People's Regional State in south-west Ethiopia. These forests have high species endemism (Tadesse and Arassa 2004) and perform essential hydrological functions. They are highly valued by local people for domestic and economic purposes, and are the natural habitat of wild Arabica coffee (Hein and Gatzweiler 2006). The forests cover approximately 3 million ha and form one of two major remaining forest blocks in the country (Sutcliffe et al. 2012), yet are experiencing a high rate of agricultural expansion and are exposed to considerable livestock and population pressure (Place et al. 2006). In some areas, the forest is highly modified to favor coffee management. All natural forest has been state owned since the late nineteenth century, creating a degree of alienation between local communities and the forest. A management vacuum resulted since the state is largely absent as a forest manager, although it does allocate forest land to private investors for plantation agriculture, exacerbating community alienation (Dessalegn 2011). There are some traditional tenure arrangements in the forest, called "kobo", discussed later, but these are unrecognized by government. The potential of these forests to yield NTFPs such as wild coffee, spices, and honey was a major factor for the initiation of the project, which aimed to "maintain a forested landscape to support improved livelihoods of local forest-dependent communities and ensure the delivery of environmental services in a wider context." The project sought to facilitate formal Participatory Forest Management (PFM) arrangements between local communities and government and to increase NTFP income-generating activities.

Honey is one of the most important forest products in the project area (Hartmann 2004; van Biejnen et al. 2004; NTFP-PFM 2013) and honey trade pre-dates the project. Westphal (1975) described the local economy as an ensete-based mixed-cropping system including ensete, teff, barley, beans, and vegetables, but the most prominent cash income source was honey. Hartmann also explains, "Almost every payment is done during the honey harvest from the returns of honey marketing … honey even can be used as payment instead of money" (Hartmann 2004, p 7). Honey is bought by local traders who supply the local and Ethiopia-wide tej industry. A tej by-product is beeswax, much of which is exported, with Ethiopia annually exporting over 400 tonnes, the fourth largest global exporter (FAOSTAT 2005). A beeswax trader from the project area claimed he had exported 80 tonnes of beeswax each year for the last 20 years (Lowore 2014). The project sought to achieve its objective by introducing Participatory Forest Management (PFM) and supporting NTFP trade. PFM agreements were crafted following boundary setting and negotiation of roles and responsibilities.
These agreements, with government as a signatory, formalized local rights over the forests, but within certain limits. For example, each community must commit to not converting forest to farmland. The project helped honey producers to access new and larger markets, through the establishment of farmer PLCs 7 and honey co-operatives. These became focal points for Addis Ababa-based buyers who then provided training to farmers concerning harvesting methods, quality assurance, and storage. These value chain interventions resulted in Ethiopian honey being exported to Europe for the first time. These initiatives achieved a growth in honey trade of 500% from 50 tonnes in 2005 to 300 tonnes in 2012 (Abebe 2013). In 2014, 500 tonnes of honey were traded by groups and co-operatives, with an unknown volume traded in other channels (Lowore 2014). Overall Abebe (2013, p 12) concluded that there had been a "big leap in supply of honey by producer groups and traders from the area to national and international markets through project facilitated market linkages." The market price for honey rose from ETB 5 ($0.6 cents) to ETB 50 ($2.50) per kilo, an increase well exceeding the rate of inflation supporting the claim that, "The project has had a positive impact on the local honey trade. This NTFP trade is now well established and likelihood of long term benefits are high", (NTFP-PFM 2013, p 35). In terms of livelihoods, honey is one of the highest earning NTFPs with 97 households out of 115 reporting that at least 34% of their household income is derived from forest honey (Bekele and Tesfaye 2013). Regarding forest conservation, community members reported a notable fall in forest encroachment and illegal harvesting and a notable increase in forest regeneration and healthy young seedlings. Before the project, 8.7% of respondents said forest regeneration was moderate or high, but afterwards 100% said forest regeneration was moderate or high (Bekele and Tesfaye, 2013). These changes led the project evaluator to report, "This is a substantial achievement and has potential to reduce the risk of deforestation in the area" (NTFP-PFM 2013, p 7). The evidence from the project facilitates the examination of forest honey against the factors presented in Table 2. Inferior Inferiority concerns product perishability, seasonality, and economic inferiority. Neither honey nor beeswax are highly perishable and can be bulked at collection centers with no time constraints or specialist storage required. It can be accumulated in economically viable volumes for transport. Project area buyers stipulate a minimum volume of 5 tonnes (one lorry load). Its non-perishability means that beekeepers not in immediate need of cash can store it until they need the money. The project found richer farmers tended to store honey, while the poorest sold it quickly, a pattern observed in other Ethiopian studies (van Biejnen et al. 2004;ILRI 2013). In this respect, honey compares well with other NTFPs, such as bush mango Irvingia spp (Nigeria) and Gnetum leaves (Cameroon and Nigeria), which are often wasted because of poor storage and inadequate transport, of which some transporters, knowing the urgency of sales, take advantage (Babalola 2009;Ingram et al. 2012). The seasonal nature of NTFPs means that harvest times can be unpredictable, and income confined to limited periods. This presents challenges for poor families, although the non-perishability of honey alleviates this to some extent. 
However, while steady and predictable income may be preferable to seasonal income, seasonal income is better than none. Additionally, many high-value cash crops are seasonal, including coffee, cocoa, and pineapples, and this is not seen as a disadvantage. Inferiority can also refer to products that are rejected as incomes rise. African forest honey is not an inferior good and is highly valued in most societies by rich and poor alike (Bradbear 2003; Ingram 2014). Honey harvested in SW Ethiopia finds markets throughout Ethiopia, the Middle East, and Europe. The lack of pollution in SW Ethiopia means produce from this area is free from chemical contamination, and is consequently in demand by European high-value markets (Ingram and Njikeu 2011; David Wainwright 2016, personal communication).

Substitutability

Some NTFPs can be readily substituted by alternatives. For example, vegetable ivory (Phytelephas macrocarpa) can be replaced by plastic (Barford et al. 1990). Sills et al. (2011) discuss how culture can maintain the value of NTFPs, and this might partly explain why forest honey is not readily substituted by sugar in Ethiopia. The Ethiopian tradition of making tej from honey also maintains high national demand. The authenticity of Ethiopian honey is highly appreciated in the Middle East, and in Europe Ethiopian forest honey successfully competes with Chinese mass-produced honey. It performs well in specialist niche markets where it is sufficiently differentiated on taste, freedom from contamination, organic status, and an authentic "natural" back-story.

Unmanageable

NTFPs are natural products and, since quantity, harvest time, and location are unpredictable and hard to manipulate, returns on labor can be low. However, this is not the case for forest honey. Forest beekeepers can increase the number of nest sites by placing more hives. Since hive ownership confers ownership of the honey harvest, beekeepers can rely on their harvest (except in the rare cases of theft). No time is wasted looking for wild nests, and so harvesting time can be managed (Bradbear 2009).

Elite Capture

When a resource gains value, elites who previously had no interest in the product can take over extraction, processing, and trade (Dove 1994). In SW Ethiopia, access to the forest and specific trees does vary between ethnic groups and families (Hartmann 2004), but this is not a new phenomenon. There are few barriers to entry, with youngsters embarking on beekeeping before they leave school (Bees for Development 2017). Hive ownership is a wealth indicator, but even the poorest have some hives, which are easier to accumulate than other wealth indicators (van Biejnen et al. 2004). For many households, honey is one of the most important sources of income: 70% of households derive some income from honey (Bees for Development 2017). While the benefits of honey trade are not equally spread, this does not present a clear case of "elite capture" following NTFP commercialization.

Poverty Traps

Belcher and Schrekenberg (2007) classify NTFP activities as poverty traps where decreasing prices lead to increased harvesting to maintain income. There is no evidence that this applies to forest beekeeping. On the contrary, beekeepers invest more in beekeeping as the prices rise, and the number of hives owned is positively related to wealth (van Biejnen et al. 2004).

Over-Exploitation

NTFPs can be subject to over-exploitation.
For example, the commercialization of Cameroonian Prunus africana bark led to degradation of the resource base and the ''bread tree'' (Encephalartos cerinus) was so depleted it is now subject to CITES trade prohibition (Stewart 2003;Donaldson 2008). Forest beekeeping does not cause resource degradation; the primary resource is nectar. Honey bees are merely agents, transforming nectar (a readily replenished plant product) into honey. Even where total cropping is practiced (when all the honey is taken, causing the bees to abscond) there is no evidence that the bee population is threatened. In fact, as honey demand increases, beekeepers place more hives in the forest, which is likely to increase the survival rate of swarms, although this has yet to be studied. Comparing Forest Honey Against the Success Factors The natural resource base must be abundant Cunningham (2011) argues that NTFPs can be successfully commercialized only when the natural resource is abundant, citing the successful commercialization of baobab (Adansonia digitata) and marula (Sclerocarya birrea). The honey bee, Apis mellifera is found widely across the whole of Africa (Crane 1999) and feeds on many flowering plants (Fichtl and Adi 1994;Latham 2005;Bradbear 2009). Provided there are flowers and cavities (natural or humanmade), bees will live in a wide range of habitats and the African population remains intact and healthy (Bees for Development 2013a;Bradbear 2009;Dietemann et al. 2009). Sustaining a market requires quality, quantity, and timeliness Commercial markets have demanding expectations regarding quality, quantity, and timeliness of supply and in their absence the potential of NTFPs will be limited (Ingram and Njikeu 2011). Beekeepers in the project area initially had difficulties meeting the market expectations of EU buyers. The project improved quality through interventions in the value chain by responding to concerns regarding the use of goatskins and fertilizer bags for storing honey, by providing plastic buckets. Growing demand has been met by beekeepers increasing their harvest and since forest honey supply is relatively elastic, they can continue to do this. Upgrading within value chains Adding value can be important for the success of NTFP trade (Meaton et al. 2015), but it is not always necessary for primary producers to do so. Opportunities to add value to forest honey include separating honey from wax, packaging, retailing, and even developing secondary products. In SW Ethiopia these opportunities are relatively limited. Most beekeepers sell the crude product with value addition occurring downstream. For example, the Ethiopian firm Beza Mar Ltd. separates honey from wax, it is ultra-filtrated in the UK, and The Body Shop uses the honey in their high-value products. This has increased demand with beekeepers benefitting without having to invest in time-consuming or capital-expensive interventions (Lowore 2014;Bees for Development 2017). Clear rights to land Tenure and access are key issues for commercializing NTFPs. In SW Ethiopia the ''kobo'' system, a longstanding inherited, customary tenure arrangement exists in two forms. In tree-kobo, individual trees are claimed for hanging hives, while land-kobo refers to an area of forest, which may contain many trees for hive-hanging. This privatization of a common resource appears to positively influence forest management. "In kobo ……. trees are properly managed and promising trees that could be a good nest tree will be tended and protected from damage. 
Beekeepers remove less vigorous trees to avoid competition on potential hive-hanging trees. Maximum protection is made to avoid damage to standing trees while felling trees for hive making or other purposes" (Endalamaw 2005, p 51). Increasing demand for honey in the project area has resulted in families re-asserting their claims over their kobo forest areas (Bees for Development 2017). This re-assertion of customary rights is related to Neumann and Hirsch's (2000) observations that trade that develops in the ''absence of existing controls of access'', is least likely to be sustainable. The kobo has been overlain by the introduction of PFM and it is hard to disentangle these overlapping forms of tenure. PFM arrangements are recognized by government and give protection from private investors, so securing multiple forest benefits, including honey (NTFP-PFM 2013). The interaction between these external and local governance arrangements of the honey forests of SW Ethiopia are well described by Wiersum and Endalamaw (2013). Local self-sufficiency According to Cunningham (2011) if NTFPs are to be successfully commercialized, domestic self-sufficiency should be maintained. Brasileiro (2009) reported how commercialization of the Acai berry caused the price to rise beyond the reach of local people. There is no evidence that domestic honey use is undermined by commercialization in the project area, since domestic use is relatively low compared to the volumes traded (Melaku et al. 2014). Conflict resolution mechanisms The honey trade in the project area has experienced some conflicts and difficulties. For example, the Mejengir 8 honey co-operative did not succeed as a result of internal conflicts and misunderstandings with their honey buyers (Freeman 2012). However, overall trade is resilient, and even where project-designed structures fail, trade continues. "Experience from the NTFP-PFM project has shown that various actors will innovate once the market situation changes. It will change anyway" (Freeman 2012, p 11). Price incentives must be right The abundance of NTFPs in ''good'' years can depress prices and incentives to harvest. This can be prevented by growing product demand outside the local area and having enough buyers with working capital to buy the unusually large volumes on offer. Although honey supply has increased, the demand is coming from international buyers and has not resulted in lower prices; rather the reverse. Honey prices in the project area rose from 5 ETB ($0.6 cents) per kg in 2005 to 50 ETB($2.50) in 2015, a rise well beyond the rate of inflation largely due to linkages to Addis Ababa-based buyers and their contracts with overseas markets. Building on existing markets, local markets Commercialization of NTFPs is more successful when built on existing markets. The honey trade in SW Ethiopia has been established for many decades (Westphal 1975). The project strengthened an existing knowledge base and helped honey producers respond to the new buyers' quality requirements by modifying harvesting and storage methods. The presence of pre-existing markets also reduces risk. When one project-supported marketing group had difficulties with their export buyers, they were able to sell the honey to local traders, and consequently did not suffer catastrophic loss (Lowore 2014). Honey that fails to meet export quality standards can be sold into other end markets. Sills et al. (2011) highlights the importance of local markets, an attribute enjoyed by forest honey. 
Visionary champions make a difference Visionary champions often play an important role in the development of new products and markets. For example, the success of the rooibos tea industry can be traced to Dr. Pieter Le Fras Nortier (Joubert and de Beer 2011). The African honey trade has similarly benefitted from industry champions. The first European buyer of honey from the project was Tropical Forest Products Ltd (TFP) founded by David Wainwright, who was determined to market African honey in the UK (Wainwright, 2002). He worked hard to convince UK customers to buy African honey, overcoming doubts about the marketability of its stronger taste (Traidcraft 2007). Niche markets can reduce competition Globally honey prices are highly competitive (Bees for Development 2006) and to compete, African honey needs to stress its ethical, natural, and environmental credentials. These features differentiate it from cheaper mass-produced Chinese honey. The use of project honey by The Body Shop evidences the success of this approach. A winning product needs to be quite special (to reduce competition), but not too special that it cannot be produced in sufficient quantities or so novel as to be un-marketable at scale. African forest honey meets this ''goldilocks'' ''quite special but not too special'', characteristic. The power of strategic partnerships Natural products sold into distant markets have complex trade requirements that are hard for farmers to negotiate (Ingram and Njikeu 2011). Strategic partnerships can overcome this. In 2005, 3 years before the first export of Ethiopian honey to Europe, Tropical Forest Products Ltd 9 and Beza Mar 10 , attended an African honey trade workshop organized by Bees for Development (Bees for Development 2005). In 2008, Ethiopia achieved EU ''third country'' listing with support from SNV 11 (Desalegne 2011), which meant that Ethiopian honey was eligible to be imported to the EU. Without these partnerships, access to this market would have been impossible. More locally the project forged links between producers and marketing organizations, established trade links, and strengthened the bargaining capacity of producers. NTFP trade must make the forest more valuable than the alternative land use Even if a NTFP has a market with positive impacts on livelihoods, the alternative land-use options must be understood. An economic analysis of land-use options in the project area showed that forests modified for coffee production yielded $547 per hectare, agriculture generated $303 per ha and sustainable forest management, including honey, generated $68 per ha, leading to the conclusion that "The limited revenues achieved from most NTFPs … leaves the … forest uncompetitive and encourage communities to engage in forest clearance. Hence … doubt can be cast on the ''conservation by commercialization'' hypothesis…" (Sutcliffe et al. 2012, p 479). Ingram's (2014, p 205) research in Cameroon concurs, "the opportunity costs of other forest uses (for agriculture, hunting, grazing, fuelwood, and Prunus africana bark harvesting) are too high for apiculture chain actors to compete with." However, these analyses focus on economic returns from land, not the activity. Where capital and/or labor is in short supply and forest land is abundant then local people will also strongly consider returns on investment of cash and time (Kusters 2009). 
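To make the distinction between returns on land and returns on labor concrete, the short sketch below contrasts revenue per hectare with revenue per labor-day for the three land uses quoted above. Only the per-hectare revenues come from Sutcliffe et al. (2012) as cited; the labor-day figures are purely illustrative assumptions introduced for this example and are not data from the project area.

```python
# Illustrative comparison of returns per hectare vs. returns per labor-day.
# revenue_per_ha: values reported by Sutcliffe et al. (2012), in US$.
# labor_days_per_ha: hypothetical assumptions for this sketch only.
land_uses = {
    "coffee forest":             {"revenue_per_ha": 547, "labor_days_per_ha": 90},
    "agriculture":               {"revenue_per_ha": 303, "labor_days_per_ha": 120},
    "forest beekeeping (honey)": {"revenue_per_ha": 68,  "labor_days_per_ha": 5},
}

for name, d in land_uses.items():
    per_day = d["revenue_per_ha"] / d["labor_days_per_ha"]
    print(f"{name:28s} ${d['revenue_per_ha']:>4}/ha   ${per_day:6.2f} per labor-day")
```

Under assumptions of this kind, the activity with the lowest return per hectare can still offer the highest return per day worked, which is precisely the calculation that capital- and labor-constrained forest-fringe households are argued to make.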
Endalamaw (2005) reported that forest beekeeping was not considered labor or capital intensive compared to other land-use activities and that beekeepers recognize its economic advantages. Hanging hives requires no capital, and can even be undertaken by teenage boys with no land or other assets of their own (Bees for Development 2017). Wainwright (1989) similarly found that forest beekeeping yielded good returns on time invested compared to other activities. Additional Factors Biological characteristics of the NTFP determines likelihood of sustainable harvest Neumann and Hirsch (2000) argue that there is greater potential for sustainability where NTFPs are fast-growing, fast-reproducing, and where harvesting does not impinge on reproductive potential. Honey production performs in this regard. Nectar harvesting has no negative impact on the plants and pollen transfer resulting from foraging is an essential ecological service. African honey bees reproduce easily so even if a colony is lost during honey harvest, it is likely to have already produced several swarms, easily compensating for losses. NTFP specialization, forest modification, and biodiversity There is increasing evidence that modifying the forest to favor NTFPs is the norm (Sills et al., 2011) and can yield enhanced incomes, but may impact negatively on biodiversity (Ruiz Pérez et al. 2004;Kusters et al. 2006). Beekeepers engage in active management and modification of the forest, with 95% of a sample in the project reporting individual actions to increase bee forage and to favor bee trees, by removing lianas from seedlings, protecting trees from fire, and avoiding crushing seedlings when felling larger trees (Bees for Development, 2017). However, this forest modification enhances biodiversity since beekeepers know that a variety of tree species is required so that nectar and pollen are available at different times of the year; that hive-making materials are sustained; and that good hivehanging trees are protected. Forest honey is therefore derived from multiple forest species and this further motivates beekeepers to maintain forest biodiversity. Direct link between action and benefit Elliot and Sumba (2012) discuss conservation logic, the link between livelihood gains and conservation action. The silviculture practices discussed above exemplify this. Beekeepers clearly perceive a honey-derived income benefit from these actions. Another linkage is created by financial support for PFM. A key step in the project's PFM process was establishing Forest Management Associations (FMAs) responsible for managing demarcated PFM forest and upholding the PFM agreement. The costs incurred (paying for patrols and taking offenders to court) are often funded through contributions from the honey co-operatives, creating a direct link between honey income and forest conservation. Discussion The evidence suggests that forest honey in SW Ethiopia does not suffer from the barriers that have caused other NTFP commercialization endeavors to stumble. Forest honey is a non-perishable, highly marketable, high-value product with demand in local, regional, and international markets. It has local uses, but when carefully harvested and handled, it commands high prices by international buyers who use it in value-added products. 
The pre-existence of trade, local controls, knowledge, and experience prior to the development of new market opportunities provide a springboard for commercialization and the ability to respond to new market quality and quantity expectations. The nature of forest beekeeping itself affords protection from overexploitation. Provided the forests remain, bees and nectar will be abundant, and beekeepers can respond to higher demand without eroding the resource base. There is clear evidence that the project generated increased beekeeping income. Evidence from the project also identifies positive forest impacts, but it is not easy to attribute the slowing down, and in some cases the reversal of forest degradation, to honey trade alone. Bee-friendly silviculture, customary tenure (kobo) and PFM appear to support forest maintenance, yet the first two were being practiced long before the project, when deforestation was happening. The recent increase in honey trade coincides with the introduction of PFM, but it is too soon to make a causal link between this and strengthened silviculture and kobo practices. Sutcliffe et al.'s (2012) conclusion that NTFP income was insufficient to deter local communities from engaging in forest clearance, further underlines the importance of over-stating any such success. Despite these cautions, it is clear that there is genuine community enthusiasm for PFM. Local communities can choose to adopt PFM or to continue with the status quo, where they have no locally devolved rights and the forest is essentially open access and vulnerable to the risk of investors being allocated forest for agri-business. Without PFM they risk losing access to forest resources, including honey. By agreeing to PFM local people gain tenure security and access to many forest products and environmental services. More work is required to understand how important honey income is in generating the widespread support for PFM and in order to do so, it must be considered ''in relation to the overall context of land uses in the area'' (Ros-Tonen and Wiersum 2005). An aspect of this is beekeepers' willingness to support forest conservation through FMAs by financial contributions from honey co-operatives and individual households. This constitutes direct and tangible evidence, linking honey production to forest conservation. It supports the argument that beekeeping plays a role in the economic calculation local communities make when deciding to adopt PFM since honey-derived income offsets some of the opportunity costs of accepting the restrictions which PFM brings, and helps pay for some of the direct costs. Reviewing Success and Failure Factors Some of the factors identified appear to be less important, not only to honey, but to other NTFPs. Seasonality, for example, is not an insurmountable barrier to NTFP commercialization. Self-sufficiency is similarly a largely misplaced concern. NTFPs important for local use, e.g., firewood, are unlikely to have high commercial value, and for products traded to distant markets, subject to highquality standards, there will almost always be a portion of the crop that is rejected and can be used in local markets or for domestic use. Niche markets can be ''double-edged swords'' in that they may need to be created, with significant marketing, and can be vulnerable to ''faddish'' changing tastes (Belcher and Schrekenberg 2007), but honey is unlikely to be susceptible. 
Part of the success of African forest honey is attributable to its characteristics in that it is a special product, but well recognized, niche, but not too niche! NTFP trade is most likely to lead to the dual outcomes hoped for when the conservation logic between action and benefit is strong, ''doable,'' and delivers gains within an acceptable timeframe for poor people. The evidence in this paper suggests that honey delivers on all these points. Beekeepers know that tree loss leads to nectar loss leading to loss of income. They know what to do to prevent this and can see the immediate benefits of their actions, with a minimal time gap between action and benefit. A possibly unique characteristic of forest beekeeping is the ability to own a wild resource within a natural landscape. The transaction costs of managing common resources can be high, and the unpredictability of wild harvest can undermine returns on labor investment. These problems (associated with honey hunting), are overcome by beehive ownership. The placing of beehives affords ownership over the bees that choose to settle there, and the honey they subsequently store. This ownership is universally understood. This simple and inexpensive action removes uncertainty, reduces timecosts, and overcomes the unpredictability of honey hunting, and is a key reason for the economically rewarding nature of forest beekeeping. Conclusion SW Ethiopia forest honey delivers important income for forest communities, and forest beekeepers are motivated to undertake actions that help maintain forests. Benchmarked against the factors influencing the success or failure of NTFPs, African forest honey performs exceptionally well. Key to this success was the pre-existing honey trade, the high-value nature of the product, and its appreciation in a range of markets. These afford producers flexibility and broad opportunities. Quality control improvements and the development of international trade links have driven both demand and price. Furthermore, forest beekeeping is sustainable and does not undermine the reproductive capacity of the bees, or the plants on which they feed. However, although these factors suggest that forest honey trade has considerable potential to deliver both livelihood and conservation outcomes, it would be unwise to claim that honey alone can halt forest loss. PFM clearly played a role in the success of honey trade in the project area and honey income in turn contributed to the broad support of PFM. This mutually supportive relationship requires more detailed examination so that the synergies it generates can be more fully understood. The success factors observed in the case of forest honey in SW Ethiopia are likely to apply to other parts of Africa where forest beekeeping is practiced. Forest beekeeping systems are well-crafted resource utilization systems that combine elements of management with wild harvesting. Ownership of simple beehives and the utilization of abundant natural resources combine to offer an efficient and profitable livelihood activity that also has the potential to deliver on sustainable forest management. However, forest honey is not necessarily a near-perfect NTFP. Evidence presented in this case study has shown that its contribution to livelihoods and forest conservation has to be undertaken with regard to past and present land-use practices. In this case, the historic precedence of demand and the more recent establishment of tenure through PFM are thought to be key. 
For forest honey to deliver on livelihoods and forest conservation in other parts of Ethiopia and Africa, a full understanding of the context of trade and land-use needs to be achieved.
Investigating calcification-related candidates in a non-symbiotic scleractinian coral, Tubastraea spp.

In hermatypic scleractinian corals, photosynthetic fixation of CO2 and the production of CaCO3 are intimately linked due to their symbiotic relationship with dinoflagellates of the Symbiodiniaceae family. This makes it difficult to study ion transport mechanisms involved in the different pathways. In contrast, most ahermatypic scleractinian corals do not share this symbiotic relationship and thus offer an advantage when studying the ion transport mechanisms involved in the calcification process. Despite this advantage, non-symbiotic scleractinian corals have been systematically neglected in calcification studies, resulting in a lack of data especially at the molecular level. Here, we combined a tissue micro-dissection technique and RNA-sequencing to identify calcification-related ion transporters, and other candidates, in the ahermatypic non-symbiotic scleractinian coral Tubastraea spp. Our results show that Tubastraea spp. possesses several calcification-related candidates previously identified in symbiotic scleractinian corals (such as SLC4-γ, AMT-1like, CARP, etc.). Furthermore, we identify and describe a role in scleractinian calcification for several ion transporter candidates (such as SLC13, -16, -23, etc.) identified for the first time in this study. Taken together, our results provide not only insights about the molecular mechanisms underlying non-symbiotic scleractinian calcification, but also valuable tools for the development of biotechnological solutions to better control the extreme invasiveness of corals belonging to this particular genus.
In scleractinian corals (Cnidaria, Anthozoa), also known as stony corals, calcification leads to the formation of a biomineral composed of two fractions, one made of calcium carbonate (CaCO 3 ) in the mineral form of aragonite [1][2][3] , and the other made of organic molecules [4][5][6] . Based on the ability of scleractinian corals to build reef structures, they are functionally divided into two main groups, namely, hermatypic (i.e. reef-building) and ahermatypic (i.e. non-reef-building). The majority of hermatypic corals hosts symbiotic dinoflagellates of the Symbiodinacae family 7 in their tissues, commonly known as zooxanthellae 8 . This symbiotic association, which is lacking in most ahermatypic corals, provides the nutritional foundation for the host metabolism and boosts calcification in nutrient-poor tropical waters 9 . Given the ability of hermatypic corals to build reefs, and given the economic and ecological importance associated with reef structures 10 , symbiotic scleractinian corals have been a major focus of calcification research over the years 2,11 . Whereas, ahermatypic non-symbiotic scleractinian corals have not been extensively studied and to date they remain under-represented especially in terms of molecular data 12 . These corals, however, should not be neglected as they represent important resources for scleractinian calcification research. This is because, in symbiotic scleractinian corals, calcification is linked to the photosynthetic fixation of CO 2 -both at the spatial as well as the temporal scales-which makes it difficult to disentangle these processes. Whereas, non-symbiotic scleractinian corals allow studying the transport mechanisms involved in calcification without the confounding factor of symbiosis 13 . In addition, studying calcification in non-symbiotic scleractinian corals further allows obtaining comparative information on the different scleractinian calcification strategies, therefore aiding in a better understanding of how calcification evolved within this order. One of the main questions surrounding scleractinian calcification is how (i.e. via which molecular tools) corals promote a favorable environment for calcification 14 . As in other biological groups, coral calcification is a biologically controlled process, meaning that the precipitated mineral is not a byproduct of metabolic processes (also known as biologically induced biomineralization), but rather under strict biological and physiological control 15,16 . This control is exerted by a specialized tissue called the calicoblastic epithelium, that comprises the calcifying calicoblastic cells 17 . These cells control and promote calcification by modifying the chemical composition at the sites of calcification, which comprise intracellular vesicles and the extracellular calcifying medium (ECM) 14 . As recently suggested, calcification begins with the formation of amorphous calcium carbonate (ACC) nanoparticles, within intracellular vesicles, in the calicoblastic cells. ACC nanoparticles are then released via exocytosis into the ECM 18 . Here, ACC nanoparticles attach (i.e. nanoparticle attachment) and crystallize, while ions fill the interstitial spaces between them (i.e. ion-by-ion filling). Both, nanoparticle attachment and ion-byion filling processes require the calicoblastic cells to regulate ion transport and their concentration at the sites of calcification 2,19 . 
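For orientation, the chemistry that this ion regulation must serve can be summarized with standard carbonate-system relationships; these are textbook equilibria included here for reference and are not reproduced from the paper itself.

```latex
\[
\mathrm{CO_2 + H_2O \;\rightleftharpoons\; HCO_3^- + H^+ \;\rightleftharpoons\; CO_3^{2-} + 2H^+}
\]
\[
\mathrm{Ca^{2+} + CO_3^{2-} \;\rightleftharpoons\; CaCO_3\ (aragonite)},
\qquad
\Omega_{\mathrm{arag}} = \frac{[\mathrm{Ca^{2+}}]\,[\mathrm{CO_3^{2-}}]}{K'_{\mathrm{sp}}}
\]
```

Written this way, it is clear why exporting H+ from the calcifying space shifts the dissolved inorganic carbon pool towards CO3 2- and raises the aragonite saturation state, which is the rationale for the proton removal discussed below.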
Furthermore, the calicoblastic cells also secrete an organic matrix which may stabilize ACC in the intracellular vesicles and play other roles, such as aiding and promoting ACC crystallization [20][21][22][23][24] . Ion (i.e. calcium, carbonate, protons, and others) transport, to and from the sites of calcification, is of particular interest 2,19 . For instance, calcium and carbonate ions, the building blocks of the coral skeleton, have to be constantly supplied to the sites of calcification to sustain its growth 25 . Whereas, protons must be removed from the sites of calcification to increase the aragonite saturation state, prevent dissolution of calcium carbonate nanoparticles, and promote ion-by-ion filling mechanisms 14 . Over the years, the ion transport model underlying scleractinian calcification has been well characterized for the calicoblastic cells through physiological and molecular studies [26][27][28] . However, such understanding is only partial and many calcification-related ion transporters still need to be identified. When searching for calcificationrelated candidates, different approaches are possible. One is the so called "targeted" approach and is based on the analysis of genes and/or proteins that have been chosen a priori-generally based on known biological functions in other model systems. This approach is extremely powerful for studying the genetic architecture of complex traits, such as calcification, in addition to being an effective approach for direct gene discovery 29 . Nevertheless, although the targeted approach has led to the identification of some of the most relevant calcification-related candidates in scleractinian calcification 13,[30][31][32] , it is largely limited by the requirement of existing knowledge about the gene(s) under investigation. To overcome this limitation, other approaches, the so-called "broad" approaches, have been developed throughout the years. Broad approaches have the potential to discover novel candidates and pathways that have not been previously considered in the context of calcification, thus allowing a more holistic understanding of the process. These approaches have been performed at different levels, including the transcriptomic one, which relies on the use of RNA-sequencing (RNA-seq) technology [33][34][35] . To date, however, the use of RNA-seq to identify calcification-related candidates has been limited to analyzing coral molecular responses to environmental parameters known to influence calcification (such as light 33 and CO 2 36 ), and only one study, performed in the symbiotic scleractinian coral Stylophora pistillata, has analyzed genes being more highly expressed in the coral calcifying tissue 37 . Therefore, given the high potential of broad approaches in discovering novel candidates, and given the scarce amount of data available for non-symbiotic scleractinian corals 12 , we have performed, in this study, RNA-seq on coral species belonging to the ahermatypic non-symbiotic scleractinian genus Tubastraea (Lesson, 1829) 38 . Tubastraea corals include invasive saltwater species 39-42 that were introduced into the southwestern Atlantic on oil platforms 42 . Since the late 1980s, these corals have been colonizing the rocky shores of the southeastern Brazilian coast 40 . Their rapid spread and growth provides them a competitive advantage and, therefore, represent a serious risk for endemic biodiversity loss 43 . In the absence of innovation in control methods, the dispersal of Tubastraea is expected to continue. 
In this context, calcification studies are fundamental to a better understanding of the life histories and population ecology of this genus. Of particular interest is the rapid linear skeletal growth of Tubastraea that could increase the competitiveness of these species 44 . In this study, we searched for calcification-related candidates by sequencing the whole transcriptome from total colonies and oral fractions (i.e. fractions devoid of the aboral tissues that contain the calicoblastic cells) of Tubastraea spp., obtained through a tissue micro-dissection technique. After assembling and annotating a highly complete transcriptome for Tubastraea spp., we identified and analyzed genes enriched in the total colony transcriptomes compared to the oral fraction transcriptomes. The analysis included both a comparison with calcification-related candidates previously characterized in symbiotic scleractinian corals and a search for novel calcification-related ion transporter candidates. This study provides insights into the molecular mechanisms underlying non-symbiotic scleractinian calcification and identifies valuable tools for the development of biotechnological solutions to better control the extreme invasiveness of corals belonging to this genus.

Results

Sequence read data and raw data pre-processing. RNA sequencing was performed for two sample groups, total colony and oral fraction (i.e. fraction devoid of the aboral tissues containing the calcifying calicoblastic cells), of three independent biological replicates (n = 3) of Tubastraea spp. Both groups, obtained through a previously developed micro-dissection protocol 45 , produced a total of 539,331,300 raw reads with an average of 44.9 ± 8.7 (mean ± SD) million read pairs per sample. Raw reads were subjected to quality trimming, which included adaptor removal, yielding a total of 369,357,576 trimmed reads.

De novo transcriptome assembly and quality assessment. Trimmed reads were subjected to de novo whole transcriptome assembly using Trinity, after being further reduced to 73 […].

Functional annotation. To evaluate the completeness of the transcriptome library, functional annotation (including GO terms, EggNOG and KEGG pathway enrichment analysis) of the whole transcriptome of Tubastraea spp. was performed using Blastx results and OmicsBox. A summary of the whole transcriptome assembly and annotation results is listed in Table 1.

Differential expression analysis. To identify differentially expressed genes between the total colony and the oral fraction, we first selected genes that had counts per million (CPM) greater than 1 in at least two samples. Differential expression analysis was then performed using OmicsBox, followed by Benjamini-Hochberg multiple test correction. A total of 4,483 genes were reported to be differentially expressed (FDR < 0.05, |logFC| > 1) between the total colony and the oral fraction (Table 1). Of these, 3,174 genes were significantly enriched in the total colony compared to the oral fraction, and 1,309 genes were significantly enriched in the oral fraction compared to the total colony. Differentially Expressed Genes (DEGs) were clustered using Pearson's correlation and displayed in a heatmap (Fig. 2). In this heatmap, biological replicates (1, 2 and 3) show strong clustering within the same group (Total and Oral), and the two groups are clearly separated. In addition, a Multi-Dimensional Scaling (MDS) plot was performed to examine the homogeneity across biological replicates (Fig. 3).
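Before turning to the clustering results, the sketch below gives a minimal stand-alone illustration of the filtering and correction logic just described (CPM > 1 in at least two samples, Benjamini-Hochberg adjustment, FDR < 0.05 and |logFC| > 1). It is not the OmicsBox implementation: the count matrix is simulated, and the p-values and fold changes are random placeholders standing in for the output of the differential expression test itself.

```python
import numpy as np

# Toy count matrix: rows = genes, columns = samples (3 total-colony, 3 oral).
rng = np.random.default_rng(0)
counts = rng.poisson(lam=20, size=(1000, 6))

# Counts per million (CPM) and the "CPM > 1 in at least two samples" filter.
cpm = counts / counts.sum(axis=0) * 1e6
keep = (cpm > 1).sum(axis=1) >= 2
counts = counts[keep]

# Placeholder statistics: in the real pipeline these come from the DE test.
pvals = rng.uniform(size=counts.shape[0])
logfc = rng.normal(size=counts.shape[0])

def benjamini_hochberg(p):
    """Return Benjamini-Hochberg adjusted p-values (FDR)."""
    p = np.asarray(p)
    n = p.size
    order = np.argsort(p)
    ranked = p[order] * n / np.arange(1, n + 1)
    # Enforce monotonicity from the largest p-value downwards.
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]
    fdr = np.empty(n)
    fdr[order] = np.clip(ranked, 0, 1)
    return fdr

fdr = benjamini_hochberg(pvals)
de = (fdr < 0.05) & (np.abs(logfc) > 1)
print(f"{keep.sum()} genes kept after CPM filter, {de.sum()} flagged as DE")
```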
According to the MDS results, biological replicates showed strong clustering within each group and each group formed a distinct cluster. Our results also show that 34 (62%) of the 55 calcification-related protein sequences examined are not differentially expressed between the total colony and the oral fraction, and thus are not found in the heatmap (Fig. 4). These included: HvCNs, SLC9s, PMCAs, and VGCC.

Functional annotation and identification of unigenes putatively involved in coral calcification. DEGs were annotated using the same databases used for the whole transcriptome annotation. First, GO-term enrichment analysis using Fisher's exact test was performed to infer which biological processes are associated with the enriched genes in the total colony compared to the oral fraction. Our results show 13 enriched GO-terms, including biological processes associated with "carbohydrate metabolic process", "extracellular space", "cell adhesion", "extracellular matrix" and "extracellular matrix organization" (Fig. 5). EggNOG functional annotation of the enriched genes in the total colony compared to the oral fraction was then performed. A total of 2,841 out of 3,174 enriched genes (88.6%) are functionally annotated into 23 COG functional categories, including inorganic ion transport and metabolism (P), and intracellular trafficking, secretion and vesicular transport (U) (Fig. 6). Finally, using the KO term provided by the EggNOG mapper for each annotated gene, we performed KEGG annotation. KEGG annotation further divided genes into multiple families. Among these, 39 KO terms are associated with ion transporters (Table 2).

Discussion

The "calcification toolkit" is the collective term for the components documented and/or hypothesized to be involved in biomineral formation at various stages of an organism's life history 52 . Out of all the toolkit components, proteins have been the most intensively characterized 26,53,54 . As a result, proteomic studies have suggested that, although proteins from distant organisms share common properties 53 , each taxon-specific suite appears to have evolved independently through convergent evolution and co-option. This has led to variable contributions, from new lineage- and species-specific proteins, to the "calcification toolkit", which show contrasting rates of conservation between and within lineages 55 . Several tools of the "calcification toolkit" have also been identified in scleractinian corals 48,56 , yet to date only a few experiments have been conducted, and solely for symbiotic species 54 . Besides being particularly attractive for calcification studies because of the lack of symbiotic dinoflagellates in their tissues, corals belonging to the Tubastraea genus have been the focus of numerous biological [57][58][59] and ecological research studies 60,61 aiming at identifying key parameters underlying their invasiveness. Nevertheless, their "calcification toolkit", which may include specific components providing these corals with an advantage in terms of calcification strategies, has never been investigated at the molecular level. In this study, we aimed to fill this knowledge gap by searching for candidates of the "calcification toolkit" in the non-symbiotic scleractinian coral Tubastraea spp., using a tissue micro-dissection technique to remove the oral fraction (easily accessible and free of the calicoblastic cells) from the total colony of Tubastraea spp.
This previously developed technique has already been used in the past and has contributed to the identification of some of the most frequently searched and studied candidates in a wide range of calcifying metazoans 26,62,63 , other than corals 31,45,47 . By coupling this technique with RNA-seq, we have then identified and analyzed differentially expressed genes with a focus on those enriched in the total colony compared to the oral fraction. Indeed, these genes are specific of the aboral tissues and include calicoblastic cell-specific genes, that could play a role in calcification. This is supported by our results showing that, although many genes are ubiquitously expressed in the total colony-and thus in both oral and aboral tissues -, others are differentially expressed, with clearly distinct expression profiles between the total colony and the oral fraction ( Figs. 2 and 3). It follows that the different expression profiles reflect specific gene functions related to the oral and aboral tissues. Amongst the 3,174 aboral-specific genes (Table 1), we identified most calcification-related candidates previously described as part of the "calcification toolkit" of symbiotic scleractinian corals (Fig. 4). These include the bicarbonate transporter SLC4-γ 64 . SLC4-γ has been proposed to play a role both in the regulation of intracellular HCO 3 homeostasis-which is critical to buffer excess of H + generated during CaCO 3 precipitation-and the supply of HCO 3 to the calcifying cells in several organisms, including sea urchins 63,65 , mussel 62 , coccolithophores 66 and corals 31,67 . We also identified an ammonium transporter belonging to the AMT1 sub-clade (Fig. 4). AMT transporters have been suggested to play a role in calcification in multiple metazoans, including mollusks [68][69][70] and symbiotic scleractinian corals [71][72][73] . Although their role in coral calcification still needs to be investigated in detail, it has been suggested that AMT1 transporters mediate pH regulation in the ECM by transporting NH 3 into the ECM which could buffer excess of protons. Organic matrix proteins, including 2 CARPs and 1 SCRiP, were also identified (Fig. 4). CARPs are proteins with dominant Low Complexity Domains (LCDs) that have been described in the secreted organic matrix of biominerals in different metazoan taxa [74][75][76][77] . CARPs have been identified also in previous proteomic studies on coral skeletons 35 , where they have been suggested to play a role in CaCO 3 formation given their high affinity to positively charged ions (i.e. Ca 2+ ) [78][79][80] . SCRiPs, instead, are a family of putatively coral-specific genes for which different roles have been suggested based on their molecular features (i.e. presence of signal peptide, high amino acidic residues content and cysteine-rich) 50 . Moreover, three galaxins-like proteins and three CAs were also identified (Fig. 4). Galaxin was first identified in the exoskeleton of the scleractinian coral Galaxea fascicularis and was described as a tandem repeat structure with a di-cysteine motif fixed at nine positions 49 . Since this discovery, galaxin homologs have been observed in the exoskeleton of other scleractinian species [81][82][83] , as well as in mollusks 84 and squid 85 . It has also been shown that galaxin is associated with the developmental onset of calcification after larval stage in Acropora millepora 86 . 
CAs, in turn, are metallo-enzymes that catalyze the reversible hydration of CO2 into HCO3-, the source of inorganic carbon for CaCO3 precipitation. In metazoans, CAs belong to a multigenic family and are widely known to be involved in calcification in diverse metazoans such as sponge spicules 87 and corals, where they have been suggested to play a direct role in calcification 13. Here, we have identified 3 CAs with higher expression in the total colony compared to the oral fraction, which suggests a potential role in calcification. The presence of these candidates amongst the aboral-specific genes of a non-symbiotic scleractinian coral strongly suggests a calcification-related function and, in a wider context, further supports the hypothesis of a "common calcification toolbox" in scleractinian corals, as previously suggested 93. However, several components of the toolbox have not been identified amongst the aboral-specific genes of Tubastraea spp. (Fig. 4). These include: (1) voltage-gated H+ channels (HvCN), which have been suggested to participate in the pHi homeostasis of calcifying coccolithophore cells 94, in the larval development and shell formation of the blue mussel 62 and in the calicoblastic cells of several symbiotic scleractinian coral species 47,95; (2) SLC9s, which have been suggested to play a role in H+ removal during trochophore development in mussels 62 and in coral calcification 47; (3) Plasma Membrane Ca2+-ATPases (PMCA), which have been suggested to take part in Ca2+ supply to the sites of calcification in mussels 62, as well as in the pH regulation of the ECM in corals 30; neurexins, which connect the calicoblastic cells to the extracellular matrix in corals 48; and (4) Voltage-Gated Ca2+ channels (VGCC), which have been suggested to facilitate Ca2+ transport in the calcifying epithelium of oysters 96 and corals 30. One possible explanation of these results is that gene gain/loss or even a change of protein function has occurred during scleractinian evolutionary history, resulting in a different "calcification toolkit". This is further supported by the hypothesis that, although calcification-related proteins from distant organisms share common properties 53, they have evolved independently, through convergent evolution and co-option, in each taxon, thus resulting in contrasting rates of conservation between and within lineages 55. In addition to comparing these calcification-related candidates between non-symbiotic and symbiotic scleractinian corals, we have also searched for novel ion transporter candidates of the "calcification toolkit" in Tubastraea spp. by focusing on the rest of the aboral-specific genes. To explore their involvement in biological processes, we performed a GO enrichment analysis, and showed that, among several processes, these aboral-specific genes are enriched in "extracellular space", "cell adhesion", "extracellular matrix" and "extracellular matrix organization" (Fig. 5). These results highlight the importance of extracellular matrices in the aboral tissues, in which they play a pivotal role in the spatial organization of the cells, as they organize them according to their function. Some examples include both the organic extracellular matrix (ECM), which facilitates cell-cell and cell-substrate adhesion with the help of desmocytes 97, as well as the skeletal organic matrix (SOM), which facilitates the controlled deposition of the CaCO3 skeleton.
Also, the expression of genes in the aboral tissues linked to "carbohydrate metabolic processes" suggests an enrichment of biochemical processes involved in carbohydrate metabolism, which may ensure a constant supply of energy needed to support the energy-demanding process of calcification 98,99 . Moreover, EggNOG annotation (Fig. 6) shows that some of the aboral-specific genes belong to the following categories: "inorganic ion transport and metabolism" and "intracellular trafficking, secretion, and vesicular transport", thus underling the importance of membrane and vesicular transport linked to calcification in the aboral tissues 2,14,18 . These results are supported by recent observations of intracellular vesicles moving towards the calcification site both in corals 14,18 and sea urchins 100 . Calcification-related ions have been suggested to be highly concentrated in these vesicles in order to promote the formation of ACC nanoparticles, which are successively deposited into the calcification compartment where crystallization occurs 14 . The regulation of endocytosis and vesicular transport between membrane-bound cellular compartments is therefore strictly necessary in coral calcification, and the identification of genes related to these pathways, among the Tubastraea spp. aboral-specific genes, further underlines their importance also in non-symbiotic scleractinian species. KEGG analysis further allowed us to identify a list of candidates that could play a role in calcification related ion transport ( Table 2). The list includes several genes belonging to the ammonium transporter family (AMT/ Rh/MEP), notably, AMT and Rh homologs. As well as for AMT transporters, also Rh transporters have been suggested to be involved in coral calcification. Rh homologs have been identified in the calicoblastic epithelium of the symbiotic scleractinian coral Acropora yongei, where they have been suggested to mediate a possible pathway for CO 2 -a critical substrate for CaCO 3 formation-in the ECM 101 . The identification of these genes also among the Tubastraea spp. aboral-specific ones strongly suggests a direct role of these transporters in non-symbiotic scleractinian calcification. We also identified a large number of transporters belonging to the SoLute Carrier (SLC) families that, in vertebrates, constitute a major fraction of transport-related genes 102 (Table 2). Some of these members (SLC7, SLC25 and SLC35) have been previously reported to be involved in coral thermal stress, while others (SLC26) have been proposed to participate in coral larval development 103 , as well as cellular pH and bicarbonate metabolism 31 . Two plasma-membrane homologs belonging to the SLC13 family have also been identified (Table 2). These transporters function as Na + -coupled transporters for a wide range of tricarboxylic acid (TCA) cycle intermediates 104 , and have been widely described in vertebrates for their role in calcification [105][106][107][108] . The tricarboxylic acid citrate has also been found to be strongly bound to the bone nanocrystals in fish, avian, and mammalian bone 109 , whereas in corals no study has shown the presence of citrate in the skeleton. In invertebrates, SLC13 members have been mainly described for their role in nutrient absorption 110,111 , as they provide TCA cycle metabolites, that are used for the biosynthesis of macromolecules, such as lipids and proteins 2,20 . 
These macromolecules are among the principal components of the skeletal organic matrix, and SLC13 members might contribute to their transport into the coral aboral tissues. SLC16 family members, and more precisely the monocarboxylic acid transporters (MCTs), are also enriched in the total colony compared to the oral fraction (Table 2). Members of the SLC16 family comprise several subfamilies that differ in their substrate selectivity 112. In corals and sea anemones, SLC16 subfamilies transporting aromatic amino acids have been mostly characterized for their role in nutrient exchange between the coral host and its symbionts 36,113, while no information is available for those transporting monocarboxylic acids. In humans, MCTs function as pHi-regulatory transporters by mediating the efflux of monocarboxylic acids (predominantly lactate) and H+ in tissues undergoing elevated anaerobic metabolic rates 114,115, and in Tubastraea spp. they might be involved in H+ extrusion at the sites of calcification, perhaps functioning as pHi regulators. Interestingly, members of the SLC23 family (FC = 916.1), which comprise ascorbic acid transporters, are the most enriched in the total colony compared to the oral fraction in Tubastraea spp. (Table 2). This result is in agreement with another RNA-seq study performed on swimming and settled larvae of the coral Porites astreoides, also showing that an SLC23 transporter is among the most highly expressed ion transporters in larvae initiating calcification 116. Ascorbic acid is an essential enzyme cofactor that participates in a variety of biochemical processes, most notably collagen synthesis 117,118. Collagen is a fibrillar protein that forms one of the main components of extracellular matrices 119. Previous studies have shown that the addition of ascorbic acid stimulates collagen production in many metazoans [120][121][122], including corals 119. It is thus possible that SLC23 transporters might provide ascorbic acid to the aboral tissues, and potentially the calcifying cells, which use it to promote the production of collagen that, together with other ECM proteins, builds a structural framework for the recruitment of calcium binding proteins, as previously suggested 48,[123][124][125]. Last but not least, our study also identified so-called "dark genes", i.e., genes that lack annotation 126, within the list of aboral-specific genes. These genes are potentially equally important, as they are expressed in the aboral tissues along with other genes with known functions in calcification. It is therefore possible that "dark genes" and calcification-related genes may be linked, as they can be involved in the same pathway, e.g., as enzymes and/or regulatory factors. Heterologous expression of "dark genes" in model systems that are easy to manipulate and have available molecular tools for visualizing gene expression and protein localization (e.g., Nematostella vectensis) could be taken into consideration to investigate their role and contribute to functional annotation 127. Conclusion The Tubastraea spp. transcriptome provided here is a fundamental tool which promises to provide insights not only into the genetic basis for the extreme invasiveness of this particular coral genus, but also into the differences between the calcification strategies adopted by symbiotic and non-symbiotic scleractinian corals at the molecular level. The analysis of the aboral-specific genes of Tubastraea spp.
revealed numerous candidates for a potential role in scleractinian calcification, including both previously described candidates (SLC4-γ, AMT1-like) and novel ion transporters (SLC13, −16, −23, and others) (Fig. 7). Future studies will then be required to better dissect the precise mechanisms behind these candidates and may offer further knowledge which could lead to the development of novel biotechnological strategies for prevention, management, and control of this and other invasive species. Methods Biological material and experimental design. Experiments were conducted on non-symbiotic corals belonging to the Tubastraea genus. Corals belonging to this genus possess poorly defined taxonomic features and several unidentified morphotypes that severely challenge species identification 128. Micro-dissection, RNA isolation and sequencing from Tubastraea spp. Three biological replicates of Tubastraea spp. were micro-dissected by separating the oral fraction from the total colony. Then, RNA was extracted from each fraction, as previously described 47. Preparation of mRNAs, fragmentation, cDNA synthesis, library preparation, and sequencing using Illumina HiSeq™ 2000 were performed at the King Abdullah University of Science and Technology (KAUST) 93. Data analysis pipeline. The data analysis pipeline contained three major sections: raw data pre-processing, de novo transcriptome assembly and post-processing of the transcriptome. First, raw reads of six individual libraries were subjected to quality trimming, using the software Trimmomatic (version 0.36) 129. This step consisted of trimming low-quality bases, removing N nucleotides, and discarding reads below 36 bases long. Then, contaminant sequences were removed, using the software BBDuk 130, by blasting raw reads against a previously created contaminant_DB of the most common contaminant species, including Symbiodiniaceae. Clean and trimmed reads from all samples were then pooled together and further assembled using Trinity software (version 2.8.0) with default parameters 131. The in-silico normalization was performed within Trinity prior to de novo assembly. To obtain sets of non-redundant transcripts, we applied the following filtering steps: (1) transcripts with more than 95% identity were clustered together using CD-HIT software 132 and (2) all likely coding regions were filtered by selecting the single best open reading frame (ORF) per transcript, using TransDecoder (version 3.0.0) 133. Also, in the latter step, transcripts with ORFs < 100 base pairs (bp) in length were removed before performing further analyses. The final transcriptome (referred to as transcriptome_all) was subjected to quality assessment via generation of ExN50 statistics, using "contig_ExN50_statistic.pl", and examination of ortholog completeness, using BUSCO (version 3) against the eukaryota_odb10 database 134. Transcriptome_all was then aligned against NCBI's non-redundant metazoan databases using Blastx 135, with a cutoff E-value of < 10^-15, and the alignment results were used to annotate all the unigenes (= uniquely assembled transcripts). For their further annotation and classification, OmicsBox software (version 2.0.36) 136 was used to assign Gene Ontology (GO) terms 137 and to perform Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis 139. Additionally, differential abundance analysis to identify differentially expressed genes (DEGs) was performed using OmicsBox 136.
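For readers less familiar with the ExN50 statistic mentioned above, the short sketch below computes the plain N50 of an assembly from its contig lengths; ExN50, as produced by Trinity's contig_ExN50_statistic.pl, applies the same calculation after restricting it to the most highly expressed transcripts accounting for x% of total expression. The contig lengths shown are invented placeholders, and this is not the script used in the study.

```python
# Minimal sketch: N50 of a transcriptome assembly from contig lengths.
# ExN50 applies the same idea after keeping only the top-expressed transcripts
# that cover x% of the total expression.
def n50(contig_lengths):
    lengths = sorted(contig_lengths, reverse=True)
    half_total = sum(lengths) / 2.0
    running = 0
    for length in lengths:
        running += length
        if running >= half_total:
            return length
    return 0

# Hypothetical contig lengths in base pairs
print(n50([5000, 3000, 2000, 1500, 800, 400]))  # -> 3000
```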
To convert the RNA-Seq data into a quantitative measure of gene expression, we calculated the number of RNA-Seq reads mapping to transcriptome_all. Transcripts that had a log fold change (LogFC) of at least ±1 with a false discovery rate (FDR, or adjusted p-value) less than 0.05 were considered as differentially expressed. Data availability All data needed to evaluate the conclusions in the paper are present in the manuscript and/or the Additional Files. Additional data related to this manuscript may be requested from the authors. Genomic and transcriptomic data were obtained from the publicly available database of the National Center for Biotechnology Information or from the private database of the Centre Scientifique de Monaco.
v3-fos-license
2023-08-19T15:26:01.714Z
2023-08-01T00:00:00.000
260993079
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://www.cell.com/article/S2405844023063892/pdf", "pdf_hash": "cb5d1382af1c2456623fc88eb41fe73308381e68", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42260", "s2fieldsofstudy": [ "Environmental Science", "Agricultural and Food Sciences" ], "sha1": "75eaf0033a7e0725aefe1074f4c2d1cee8d522b8", "year": 2023 }
pes2o/s2orc
Analyses of crop water use and environmental performance of small private irrigation along the White Volta basin of Northern Ghana Small private irrigation (SPI) is a farmer-initiated irrigation which has the potential to increase the contribution of the overall irrigation sector to global food security. However, there is not much information about these systems to inform effective policies for their regulation. This study compared the resource use productivities and environmental impacts of SPI systems to those of a government-led irrigation scheme (GIS) in Northern Ghana. The results showed that land productivity was higher in the SPI than in the GIS. Productivity per unit cultivated area was 2571.00 US$/ha under SPI while that of the GIS was 676.00 US$/ha. Output per unit command area was also about two times higher in the SPI than in the GIS; that is, 2571.00 US$/ha and 1113.00 US$/ha for SPI and GIS respectively. For water productivity, output per unit irrigation supply was 0.33 US$/m3 and 0.08 US$/m3 for SPI and GIS respectively, while output per unit water consumed by ET was 0.60 US$/m3 for SPI and 0.06 US$/m3 for the GIS. The results implied that the SPI schemes performed better in land and water productivities compared with the GIS, which is attributed to higher yields and the selection of high-value crops by farmers under SPI. However, at the time of this study, neither irrigation system type caused significant deterioration of the water bodies and surrounding environment, as the biochemical oxygen demand (BOD) values of nearby water bodies were less than the 3.0–5.0 mg/l range considered acceptable for drinking water by the World Health Organisation (WHO), while salinity levels were also within acceptable limits (<750 μS/cm). With appropriate policies to regulate and provide support systems to the SPI, these systems may increase the overall agricultural productivity and improve job creation for the teeming unemployed youth and women in the savannah agroecological zone of Ghana. Introduction Irrigation plays a central role as a source of food and fiber for the people of the globe. The world gets 40% of its food supply from irrigation even though it covers only 20% of the world's cultivated area [1]. In the 1970s, substantial public funds were invested in the construction of large public irrigation schemes, but this trend eventually slowed down due to the poor performance of these large systems [1]. The world demand for food, however, continued to rise. It is estimated that, by 2050, food supply must increase by 70% in order to meet the surging demand [1]. Irrigated agriculture is still expected to be an important source of food and fiber. While the formal irrigation sector is a major strategy in this course, the contribution of the informal irrigation sector cannot be overemphasized. In India, private farmers have developed millions of wells for private irrigation over the past 40 years [2]. The trend is similar in Sub-Saharan Africa, where small private irrigation (SPI) is gaining popularity amongst local farmers in rural and urban dwellings. An estimated 5 million small-scale farmers in the sub-continent use simple technologies to cultivate 1 million ha of land under irrigation [3]. The sector is therefore expanding quickly but without much attention and regulation [2]. De Fraiture et al. [4] intimated that SPI is taking centre stage in irrigation development around the globe, yet it has received little recognition.
Policy makers, the donor community and researchers have not yet turned their attention towards the sector, leading to little understanding of these systems, including their impacts, challenges, risks, equity issues, efficiencies and environmental consequences [3]. It has been reported that the perception of negative environmental impacts related to irrigation farming is a significant deterrent to the adoption of irrigation [5]. Similarly, if the benefits of an irrigation system are not clear, adoption can be influenced negatively. It is thus imperative to conduct performance appraisals for a better understanding of these systems. Performance assessment is a widely used concept in the management of irrigation and drainage systems. The concept first evolved in the industrial sector, where performance-oriented processes were targeted at accomplishing process functions with fewer resources and less time [6]. The concept was later applied to the irrigation sector, giving birth to a number of frameworks. Bos et al. [7] defined their concept of assessment of irrigation and drainage systems as "the systematic observation, documentation and interpretation of the management of an irrigation and drainage system, with the objective of ensuring that the input resources, operational schedules, intended outputs and required actions proceed as planned". They went further to develop a general framework for a "diagnostic" type of irrigation performance assessment, in which they indicated that the purpose of the assessment and the strategy to use must be clearly stated before any assessment takes effect. Their concept is anchored on background questions such as "the purpose", "for whom it is being carried out", "whose viewpoint is used", "who is going to carry out the assessment", "the type of assessment" and "the extent of the assessment". Small and Svendsen [8] categorised performance assessment of irrigation and drainage systems under four types: accountability, operational, intervention, and sustainability. These are applied in a number of ways and settings, and the choice also depends on the purpose and objective of the assessment. One example of the application of performance assessment is the comparative assessment of one or more irrigation systems against another in order to set suitable benchmark standards, or to undertake a diagnostic process of what is being done right in one system but not in the other. Appropriate indicators have to be selected after identifying the purpose of the assessment, since the application of performance assessment indicators depends on the type of assessment intended. In line with this, [9] developed nine indicators, comprising four external and five other indicators, for comparative analysis of irrigation systems. Malano and Burton [10] also outlined a set of 23 indicators for benchmarking the performance of irrigation and drainage systems. A rapid appraisal procedure (RAP) was also developed by Burt et al. [11]. A more comprehensive process known as benchmarking of technical indicators (BMTI) of irrigation systems was developed by Gonzalez et al. [12], which combines the RAP, benchmarking guidelines and a report card process for feedback analysis. In this study, however, we employed the indicators developed by IWMI, which seem most suitable for comparative studies owing to their simplification and standardization. The introduction of the standardized gross value of production (SGVP) allows cross-comparison of irrigation systems, both locally and internationally.
The SGVP also allows for future comparisons. This study thus conducted a comparative performance analysis of the land and water productivities of small private irrigation (SPI) systems along the White Volta basin and the Bontanga government irrigation scheme (GIS) in the Northern Region of Ghana. The results of this study will provide a better understanding of the contributions and implications of each system type for productivity and the environment, for better regulation and policy direction. Study area description This study was conducted in the Nawuni catchment of the transboundary White Volta Basin (WVB), in the Ghanaian part of the basin (Fig. 1). The catchment is located between latitudes 9°87′N and 11°15′N and longitudes 0°5′W and 1°26′W [13], with a surface area of 96,230 km2 [14]. The Nawuni catchment is the largest of all the catchments within the WVB and is characterized by a fairly low relief, with moderate elevation in a few parts of the north and east of the catchment, giving it a mean elevation of about 200 m [15]. Like other parts of the WVB, the climate of the Nawuni catchment is driven by the Intertropical Convergence Zone (ITCZ) air mass that controls the climate of the West African region [16]. As shown in Fig. 2, the Nawuni catchment is characterized by a unimodal rainfall pattern that occurs between April and October and peaks in August/September, with a long dry period between November and March. The mean annual rainfall in the area is approximately 978.83 mm [17], with about 80% of it occurring between June and September [16]. The mean daily temperature in the catchment is between 26 °C and 32 °C, and the average annual potential evapotranspiration is about 1800 mm, with monthly amounts exceeding rainfall in nine months of the year [16]. The dominant soils of the Nawuni catchment are generally good for agriculture, which is the main occupation of the inhabitants [15]. The Nawuni catchment hosts several small-scale private dry season irrigation farmers who draw water from the catchment with motor pumps. Most of these farmers are vegetable producers and engage in dry season vegetable cropping along the river banks, usually between October and March. The dominant vegetable crops cultivated are okra, pepper and onions. Pumping machines and polyvinyl chloride (PVC) pipes are the means of water supply from the river to the farms for irrigation. For the purpose of this study, the activities of these farmers are termed "small private irrigation" (SPI). The Bontanga Irrigation Scheme is a government-led irrigation scheme (GIS), also located in the catchment, specifically between latitudes 9°30′N and 9°35′N and longitudes 1°20′W and 1°04′W. It is one of the 22 government schemes under the management of the Ghana Irrigation Development Authority. The scheme has a reservoir with an estimated maximum storage capacity of 25 million m3, with an outlet structure consisting of 2 main canals with a length of about 6 km each and 28 laterals or secondary canals [20]. It has an irrigable command area and irrigated cropped area of about 450 ha and 390 ha respectively [21], and hosts farmers from 17 communities, most of whom cultivate continually all year round. Farmers under the scheme mainly cultivate paddy rice during the dry season, and vegetables such as pepper, okra, onion, and tomato are included during the wet season. Cultivated area, production data and crop prices Cultivated areas, average yield and crop prices were needed to determine agricultural productivity.
Production data were gathered through a combination of structured questionnaires, focus group discussions and field measurements. A Global Positioning System (GPS) was used to measure on-site cultivated areas for 2013 under the SPI, whilst cultivated areas under the GIS were obtained from records of the project management. Crop prices were obtained through focus group discussions as well as from secondary sources. Three-year average yields (2011-2013) were obtained through farmer interviews using structured questionnaires. For the SPI, purposive sampling was done and three of the five sites (the Dipale, Kuli and Walshei sites) were selected for the interviews, representing the Savelugu, Kumbungu and Tolon districts respectively within the catchment. A total of 45 households were selected for interview from the three sites. For the GIS, 50 farmers were interviewed. Farmers were selected by obtaining a list of cultivators from the agricultural extension officer, and the required number of farmers was selected through random sampling. Crop yields were estimated by obtaining information on the quantity of produce harvested within the past three seasons in local units such as bags and buckets, which were converted to metric units, i.e. kg/ha. Three-year (2011-2013) farm-gate market prices were taken from the Statistics, Research and Information Department (SRID) of the Ministry of Food and Agriculture (MoFA) for rice, pepper, onion, tomato and okra. Farm-gate prices of crops were also solicited from farmers through focus group discussions for validation purposes. Estimation of crop water requirement Crop water requirement (CWR) was calculated for the various crops using the FAO CROPWAT 8.0 computer-based model, which uses climatic, crop and soil data to estimate the potential evapotranspiration. The model uses the FAO Penman-Monteith method to estimate the reference evapotranspiration (ETo), which was multiplied by the crop coefficients (Kc) to obtain the potential evapotranspiration (PET); a schematic version of this calculation is sketched below. To run the CROPWAT 8.0 model, long-term climatic data were downloaded from the FAO software New_LocClim, which included monthly rainfall amounts (mm), average minimum and maximum temperature (°C) on a monthly basis, average monthly humidity (%), average monthly wind speed (km/day) and monthly average sunshine hours (hrs). The Tamale synoptic station, located at latitude 9.50°N and longitude 0.85°W, was the nearest to the study area, thus data from that station were used. The rainfall data were also used to estimate the effective rainfall using the United States Department of Agriculture Soil Conservation Service (USDA-SCS) method. Crop data such as crop type, planting dates, rooting depth (m) and crop development stages (days) were also used as input data in the CROPWAT model. These were gathered by conducting farmer interviews and field observations. Soil moisture at field capacity, permanent wilting point and saturation (mm/m) are also inputs required for the CWR calculation and were obtained from the Savannah Agricultural Research Institute (SARI) for the Bontanga irrigation scheme and the Kukobilla sites along the White Volta River. The predominant soil type, maximum rain infiltration rate (mm/day), and maximum rooting depth of the crops were generated from default data in the CROPWAT 8.0 model. Annual total volume of water diverted for irrigation Water use data for the SPI were gathered by checking the flow rates (m3/h) of farmers' pumps, the number of hours of irrigation per day and the number of times of irrigation per week.
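The crop-water-requirement step described above can be illustrated with the standard FAO relations that CROPWAT implements: crop evapotranspiration is the product of the crop coefficient and the reference evapotranspiration, and effective rainfall follows the USDA-SCS monthly formula. The sketch below is a schematic restatement of those textbook formulas on invented monthly values; it is not the CROPWAT code, and the Kc, ETo and rainfall figures are placeholders rather than data from the Tamale station.

```python
# Sketch of a monthly crop water requirement, assuming the standard FAO relations:
#   ETc = Kc * ETo            (crop evapotranspiration, mm/month)
#   CWR = max(ETc - Peff, 0)  (net irrigation requirement, mm/month)
# Effective rainfall Peff follows the USDA-SCS monthly formula used in CROPWAT.
def effective_rainfall_usda_scs(p_month_mm):
    if p_month_mm <= 250.0:
        return p_month_mm * (125.0 - 0.2 * p_month_mm) / 125.0
    return 125.0 + 0.1 * p_month_mm

def monthly_cwr(eto_mm, kc, rainfall_mm):
    etc = kc * eto_mm
    peff = effective_rainfall_usda_scs(rainfall_mm)
    return max(etc - peff, 0.0)

# Hypothetical dry-season month for a vegetable crop
print(round(monthly_cwr(eto_mm=180.0, kc=1.05, rainfall_mm=10.0), 1))  # mm of irrigation needed
```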
The average suction plus delivery head was also observed, and losses were calculated. These were used together with pump characteristics from the manufacturer's manual to estimate the total amount of water pumped out of the river per season. For the GIS, water use data were gathered by on-site discharge measurements using the velocity-area method with a floating object and a correction factor of 0.8. The measured discharges were then used to estimate the total amount of water diverted for irrigation in both systems. Selected comparative performance indicators A number of external indicators, as described by Ref. [9], were used for the assessment, as shown in equations (1)-(6). These indicators were selected because they allow comparison across systems, and the data required for their calculation were easily accessible, which makes them cost-effective and less time-consuming to apply. The indicators are summarised below; a compact restatement of them as ratio functions is also sketched further below. Land productivity In order to ascertain the outputs relative to the land cultivated and the land available to the farmers, the output per unit cropped area, which is the cumulative gross value of production in relation to the total land cultivated, was computed using equation (1), while equation (2) was employed in computing the output per unit command area, which is the cumulative gross value of production in relation to the total land available to the farmers. The cumulative gross value of production is as shown in equation (3).

Output per unit cropped area = SGVP / Cropped area (1)

Output per unit command area = SGVP / Command area (2)

where SGVP = gross value of production standardized to world market value ($). Water productivity The outputs per unit water used were computed using the output per unit irrigation supply (equation (4)), which indicates the cumulative gross value of production relative to the total amount of water diverted for irrigating the farms, and the output per unit water consumed (equation (5)), which is the cumulative gross value of production relative to the actual water consumed by the plants.

Output per unit irrigation supply = SGVP / Diverted irrigation supply (Vdiv) (4)

Output per unit water consumed = SGVP / Volume of water consumed by plants (5)

Environmental performance The environmental performance was based on water quality parameters, namely the salinity level of the irrigation water, which is measured by the electrical conductivity of the water (ECi), and the Biochemical Oxygen Demand (BOD). The salinity levels were used to calculate the rate of salt accumulation in the soil due to the source of irrigation water. Microsoft Excel was used to calculate the average BOD values and the difference between the BOD of upstream and downstream sections of the White Volta. The accumulated salt due to irrigation was estimated by the relation in equation (6) [22]:

Accumulated salt due to irrigation (kg/ha per year) = 0.64 × ECi × Vdiv/ha (6)

where ECi = electrical conductivity of the irrigation water in dS/m, and Vdiv/ha = volume of water diverted for irrigation in m3 per hectare per year. Average yield, crop prices and cropping pattern Three-year average yields of okra, pepper, onion, tomato and rice were 4515.1 ± 623.6 kg/ha, 3487.9 ± 189.3 kg/ha, 4471.3 ± 1419.7 kg/ha, 3505.4 ± 160.6 kg/ha and 2916.9 ± 404.2 kg/ha respectively under the GIS, whilst under SPI the average yields were 4204.4 ± 301.5 kg/ha, 4522.1 ± 72.2 kg/ha and 3615.3 ± 146.5 kg/ha respectively for okra, pepper and onion (Table 1).
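As promised above, the comparative indicators of equations (1)-(6) are restated here as plain ratio functions. The variable names are ours, the example values are hypothetical, and the 0.64 factor in the salt-accumulation relation is the conventional conversion from electrical conductivity (dS/m) to dissolved-salt concentration (kg/m3).

```python
# Sketch of the comparative performance indicators (equations (1)-(6)) as plain ratios.
# SGVP is in US$, areas in ha, water volumes in m3; names and sample values are ours.
def output_per_cropped_area(sgvp_usd, cropped_area_ha):      # eq. (1), US$/ha
    return sgvp_usd / cropped_area_ha

def output_per_command_area(sgvp_usd, command_area_ha):      # eq. (2), US$/ha
    return sgvp_usd / command_area_ha

def output_per_irrigation_supply(sgvp_usd, v_diverted_m3):   # eq. (4), US$/m3
    return sgvp_usd / v_diverted_m3

def output_per_water_consumed(sgvp_usd, v_et_m3):            # eq. (5), US$/m3
    return sgvp_usd / v_et_m3

def salt_accumulation_kg_per_ha(ec_ds_m, v_div_m3_per_ha):   # eq. (6), kg/ha per year
    # 1 dS/m corresponds to roughly 0.64 kg of dissolved salt per m3 of water.
    return 0.64 * ec_ds_m * v_div_m3_per_ha

# Hypothetical example: EC = 0.063 dS/m and 8000 m3/ha diverted per year
print(round(salt_accumulation_kg_per_ha(0.063, 8000.0), 1), "kg/ha per year")
```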
Among the common crops grown in both systems, the average yields of okra and onion were not statistically significantly different, but the average yield of pepper under the SPI was significantly greater than that under the GIS at p < 0.05 (Table 2). Average prices of the crops showed that pepper had the highest price compared with the other crops. Average crop prices were 1.60, 3.80, 2.20, 2.40 and 0.60 GH₵/kg for okra, pepper, onion, tomato and paddy rice respectively across sites (Table 3). Pepper had the highest monetary value per kg while paddy rice had the least value. The cropping pattern showed that farmers in the GIS cultivated only paddy rice in the wet season, that is, 100% of the land was used for rice cultivation. In the dry season, 74% of the total cultivated area was put under rice cultivation, while 26% was used for cultivating other crops including okra, pepper, onion and tomato. Pepper was the least cultivated, covering only 2.5% of the total cultivated area. In the SPI, the crops grown were mainly vegetables, with 31% of the total area under pepper and 54% under okra, while the remaining 15% was used for onion and tomato production. Table 4 shows the total water used in both the GIS and SPI. An estimated total of 7.73 Mm3 of water was diverted annually for irrigation in the GIS, while total rainfall utilization in both the dry and wet seasons amounted to 4.00 Mm3, which resulted in a total water use in the scheme of 11.73 Mm3. In the SPI, the total water abstracted from the river for the 65 ha was 0.5 Mm3 for the dry season. The total rainfall contribution in the dry season was 0.09 Mm3, which gives a total amount of water used for small private irrigation of 0.59 Mm3. Land productivity Two main indicators were used for comparison of the overall output of land under the government scheme and under small private irrigation: output per unit cropped area and output per unit command area. The SGVP was used to assess the overall scheme/site output. It is the sum total of the values of all products harvested in the scheme/site, expressed in monetary terms (US$) with reference to a local crop tradable in the international market. The results indicated that a standardized gross value of production (SGVP) of US$ 634,297.90 was achieved by cultivating an area of 938 ha of land in the GIS, while under SPI, US$ 167,083.64 was achieved from 65 ha of land (Table 5). The results of the SGVP for the systems are shown in Table 5. Output per unit cropped area for the GIS was 676.00 US$/ha, while the average output per unit cropped area for the SPI was 2571.00 US$/ha (Fig. 3). While the output per unit command area for the GIS increased to 1113.00 US$/ha, that of the SPI remained 2571.00 US$/ha, as shown in Fig. 3. Water productivity Water productivity was analyzed using two indicators: (1) output per unit irrigation water supplied and (2) output per unit water consumed by plants to meet the crop water requirement. The total water used and the irrigation applied in both systems are compared with the outputs generated in United States dollars. The total quantity of water used in the systems amounted to 11.70 million cubic meters (Mm3) in the GIS for two seasons and 0.59 Mm3 in the small private irrigation sites for the dry season only. The GIS achieved an output per unit irrigation supply and output per unit water consumed of 0.08 US$/m3 and 0.06 US$/m3 respectively. The SPI achieved higher outputs on average compared with the GIS.
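As a quick arithmetic cross-check of the productivity figures just reported, dividing the published SGVP totals by the corresponding cultivated areas and diverted volumes reproduces the stated indicator values to within rounding; the snippet below simply re-divides the figures given above and asserts nothing beyond them.

```python
# Quick arithmetic check of the land and water productivities reported above.
gis_sgvp, gis_cropped_ha, gis_diverted_m3 = 634_297.90, 938.0, 7.73e6
spi_sgvp, spi_cropped_ha, spi_diverted_m3 = 167_083.64, 65.0, 0.5e6

print(round(gis_sgvp / gis_cropped_ha))      # ~676 US$/ha  (GIS, per cropped area)
print(round(spi_sgvp / spi_cropped_ha))      # ~2571 US$/ha (SPI, per cropped area)
print(round(gis_sgvp / gis_diverted_m3, 2))  # ~0.08 US$/m3 (GIS, per unit irrigation supply)
print(round(spi_sgvp / spi_diverted_m3, 2))  # ~0.33 US$/m3 (SPI, per unit irrigation supply)
```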
Output per unit irrigation supply was 0.33 US$/m3, while output per unit water consumed was 0.60 US$/m3, as shown in Fig. 4. Environmental performance Electrical Conductivity (EC) and Biochemical Oxygen Demand (BOD) of irrigation water can be used as parameters for assessing the environmental performance of irrigation systems. The EC of the irrigation water of the Bontanga government irrigation scheme was 63.1 μS per centimetre (μS/cm), while the values at the three sites along the White Volta where small private irrigation is practiced ranged between 54 and 87 μS/cm. These resulted in a salt accumulation of 0.33 ton/ha annually in the government scheme, while that of the small private irrigation sites ranged between 0.27 ton/ha and 0.43 ton/ha annually (Table 6). The BOD for the GIS was 2.3 mg/l, whilst those of the three selected small private irrigation sites were 2.3 mg/l, 1.0 mg/l and 0.65 mg/l respectively for the Kukobilla, Dipale and Walshei sites (Fig. 5). The three small private irrigation sites represent the upstream, midstream and downstream reaches, respectively, of the catchment under study along the White Volta River. Influence of irrigation scheme type on land productivity indicators The area cultivated by the GIS in this study was about 14 times the area cultivated by the SPI. However, output per unit land was higher under the SPI than under the GIS. The higher output per unit cropped area under SPI could be attributed to the high proportion of land under vegetable production, which has a higher value compared with the grain crops that cover a larger share of the cultivated area under the GIS. The entire SPI area was put under vegetable cultivation, whilst under the GIS only 26% of the land was used for vegetable cultivation and 74% for rice production. Vegetable crops are usually more profitable as they attract a good market price [24]. As shown in Table 3, paddy rice had the least average market price of 0.6 Ghana cedis per kg, while pepper had an average price of 3.8 Ghana cedis per kg. Hence, the choice of crop played a significant role in improving the land productivity of the SPI. Government irrigation schemes in Ghana were established to meet the food self-sufficiency of the nation, so the primary crop cultivated in most government schemes is paddy rice. Owusu et al. [25] indicated that the 22 formal irrigation systems under the Ghana Irrigation Development Authority (GIDA) cultivate rice as their main crop. This pattern has persisted for several decades and has become the common practice, so farmers find it difficult to change and follow current market trends. This is not an obligation imposed by the management of the government irrigation schemes; however, such reluctance to change is typical of farmers in Ghana. Another reason why farmers continue to cultivate rice, even though it has a low price per kg compared with vegetables, could be that most farmers in the government scheme are subsistence farmers whose primary aim is to obtain food for the family. Despite the higher prices for vegetables, it is also important to note that vegetable production in the region has been characterized by diseases and price fluctuations, so farmers take a risk when cultivating vegetables. It can therefore be said that private irrigators take more risk than their colleagues in the government schemes. Furthermore, even though the government scheme also cultivated some high-value crops, the SPI still achieved comparatively higher yields in crops such as pepper than the GIS, further enhancing the productivity under SPI.
The results of this study are comparable to other studies in the sub-region. In a similar study conducted by Molden et al. [9] which includes two sub-Saharan African countries i.e. Burkina Faso and Niger, the output per unit cropped area ranged between 771 US$/ha and 3085 US$/ha. In the same study, output per unit command area for irrigation schemes in Burkina Faso and Niger ranged between 679 US$/ha and 2652 US$/ha. Comparing their study to the current study, the GIS had low output per unit cultivated area and fairly good output per unit command area while the private irrigators had higher in terms of output per unit command area. Similarly, Dejen et al. [26] conducted a study of three selected irrigation schemes in Ethiopia and reported that output per unit cropped area ranged between 1650 US$/ha and 2660 US$/ha whilst output per unit command area ranged between 2000 US$/ha and 6000 US$/ha. Influence of irrigation scheme type on water productivity indicators Water productivity indicators were also higher under SPI than in GIS. Under GIS there was very low output per unit irrigation supply and output per unit water supply compared with that of the SPI as shown in Fig. 4. The low outputs per unit water used were influenced by the high production of paddy rice in the government scheme which is a high-water consumption crop. Crop water and irrigation requirements of rice are very high due to the inclusion of percolation losses into the water requirement of rice. This increased the average water use per ha in the government scheme compared to that of the small private irrigators. Also, SGVP is influenced by factors such as price, crop yield and area under cultivation. The higher the yield and price of a crop, the higher the SGVP, and vice versa. Meanwhile, high SGVP leads to higher productivity, hence, the low price of rice in the market coupled with low yields led to lower SGVP and eventually lower water productivities at the GIS. Furthermore, irrigation water pumped per ha in the small private irrigation was less due to farmers adoption of crops with less water consumption. This choice of crop is probably because of high cost of fuel for running water pumps. These results are in line with similar studies in other areas of Africa. Molden et al. [9] indicated that output per unit irrigation supply for Burkina Faso and Niger ranged between 0.05 US$/m3 and 0.37 US$/m3 whilst output per unit water consumed ranged between 0.11 US$/m 3 and 0.91 US$/m 3 . In Ethiopia, Dejen et al. [26] showed that output per unit irrigation supply ranged between 0.11 US$/m 3 and 0.33 US$/m 3 whilst output per unit water consumed ranged between 0.33 US$/m 3 and 0.48 US$/m 3 . Impact of irrigation scheme type on water quality The quality of water used for irrigation was assessed to measure the environmental performance as poor water quality will result in pollution of the environment and may also pose risk to human health. According to Malano and Burton [10], Electrical Conductivity (EC) and Biochemical Oxygen Demand (BOD) of irrigation water can be used as parameters for assessing environmental performance of irrigation systems. The EC of water sources of both the government scheme and the small private irrigation sites were less than 100 μS/cm. This is a safe level of irrigation water quality [27]. The EC is a measure of salt concentration of irrigation water. The higher the EC, the higher the salt concentration. 
In this study, the EC values obtained implied that there is no significant risk of salinity, since the EC values were less than 100 μS/cm. The BOD value shows the amount of dissolved oxygen required by aerobic microbial organisms in the water body to break down organic matter in the water at a specific temperature over a specified period. It is also a measure of surface water quality for irrigation, domestic use, animal watering and aquatic life. A high BOD value means a high concentration of organic matter in the water, which will lead to increased demand for dissolved oxygen for microbial decomposition and can cause a general shortage of oxygen for aquatic species. Increased BOD could also affect soil quality by reducing the oxygen content in the soil, which can cause yield decline in the long run. The BOD values were between 0.7 and 2.3 mg/l for both the GIS and SPI, which falls within WHO safe standards for water quality. According to the WHO, BOD values in the range of 3.0-5.0 mg/l are classified as moderately clean, which can be used for both drinking and irrigation purposes, while a BOD greater than 5 mg/l indicates that the water is undergoing pollution by a neighboring source [28]. Runoff from agricultural fields could wash nitrogen and phosphorus into nearby water bodies; however, the low BOD values in this study implied that anthropogenic activities such as agriculture have not had much impact on the quality of water in the river. By inference, irrigation activities along the river banks have not impacted much on the river water quality. This trend may, however, change should agricultural activities intensify in the catchment. Conclusion The results of this study show that the land and water productivities of small private irrigation (SPI) systems in the Nawuni catchment along the White Volta sub-basin were significantly higher compared to the government irrigation scheme (GIS). This could be attributed to the choice of cultivating crops of relatively higher economic value. Farmers under SPI are business-oriented, as their cultivation targets the prevailing market demand, whilst farmers under GIS are conservative and do not seem to consider market factors in their production decisions. In order to improve the productivities of the government irrigation schemes in Northern Ghana, the cultivation of high-value crops should be encouraged, especially in the dry season, in order to achieve "value for money". The state of small private irrigation at the time of this study did not show any significant environmental threat, and likewise the GIS; hence the sector could be a major source of food and fibre, employment and livelihood for surrounding communities. However, sustainable water management practices should be encouraged in order to protect the water bodies from pollution in case of future intensification in the basin. For the sustainability of both SPI and GIS in the catchment, we recommend training programmes and bye-laws on sustainable water management. Author contribution statement Abdul-Rauf Malimanga Alhassan and Andrew Manoba Limantol: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Wrote the paper. Isaac Larbi: Conceived and designed the experiments; Analyzed and interpreted the data; Wrote the paper. Rosemary Anderson Akolaa and Gilbert Ayine Akolgo: Analyzed and interpreted the data; Wrote the paper. Data availability statement Data associated with this study has been deposited at Figshare. The doi is 10.6084/m9.figshare.21960077.
Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Appendix A. Supplementary data Supplementary data to this article can be found online at https://doi.org/10.1016/j.heliyon.2023.e19181.
v3-fos-license
2021-09-01T15:10:39.636Z
2021-06-24T00:00:00.000
237875066
{ "extfieldsofstudy": [ "Physics" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10483-021-2750-8.pdf", "pdf_hash": "566d98a75a58c4d7bf3a9dc704761d2be6670e97", "pdf_src": "SpringerNature", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42261", "s2fieldsofstudy": [ "Engineering" ], "sha1": "c1391055b0a8ab10e453ab9836aea1db8758a395", "year": 2021 }
pes2o/s2orc
On well-posedness of two-phase nonlocal integral models for higher-order refined shear deformation beams Due to the conflict between equilibrium and constitutive requirements, Eringen’s strain-driven nonlocal integral model is not applicable to nanostructures of engineering interest. As an alternative, the stress-driven model has been recently developed. In this paper, for higher-order shear deformation beams, the ill-posed issue (i.e., excessive mandatory boundary conditions (BCs) cannot be met simultaneously) exists not only in strain-driven nonlocal models but also in stress-driven ones. The well-posedness of both the strain- and stress-driven two-phase nonlocal (TPN-StrainD and TPN-StressD) models is pertinently evidenced by formulating the static bending of curved beams made of functionally graded (FG) materials. The two-phase nonlocal integral constitutive relation is equivalent to a differential law equipped with two restriction conditions. By using the generalized differential quadrature method (GDQM), the coupling governing equations are solved numerically. The results show that the two-phase models can predict consistent scale-effects under different supported and loading conditions. Introduction Beam-like micro-/nano-structures are essential parts and are often utilized in the engineering design of micro-/nano-electromechanical systems (MEMSs/NEMSs). In such applications, both experiments and atomistic simulations show that the size-effect must be comprehensively and rigorously considered. Owing to the lack of any inherent length characteristic parameter, the classical elasticity theory fails to address such size-dependent problems. Molecular dynamics (MD) simulations require high computational costs, and experiments are often difficult to implement at a micro-/nano-scale. Therefore, several non-classical continuum theories are suggested to account for the size-effects, thereby overcoming the breakdown of classical elasticity. Among the higher-order continuum theories, the nonlocal elasticity model [1][2][3] is a popular tool. Unlike the classical local elasticity theory, the nonlocal stress is expressed as a convolution integral of a decaying kernel and the strain fields at each point, which is the so-called averaging kernel. At the same time, a length-scale parameter is introduced to evaluate the size-effect. In practical applications [4][5][6] , integral constitutive relations are usually converted into a differential form that is easier to solve. However, some paradoxes appear when applying the differential form of Eringen's strain-driven model to bounded beam structures. In particular, Refs. [7] and [8] have shown that the results based on Eringen's model coincided with the local one in the case of a cantilever beam subjected to tip forces, which are inconsistent with the results under other supported conditions. As for a reason, Romano et al. [9] systematically studied Eringen's nonlocal differential model over the bounded continuous domains and recently pointed out that such inconsistencies are induced by the neglect of the two constraint conditions related to the differentiation process of the originally integral equation, and these two constraint conditions happen to conflict with the equilibrium requirement in the case of cantilever beams under point-loading conditions. Alternatively, the two-phase formulation of the nonlocal model suggested by Eringen is used to bypass such ill-posed issues. Wang et al. 
[10][11] employed such a model to study the Euler-Bernoulli and Timoshenko beams, in which the integral constitutive equation is utterly equivalent to a differential equation with two higher-order constitutive boundary conditions (BCs). Zhang et al. [12] investigated the scale-dependent bending response of circular beams. The two-phase nonlocal integral equations are solved directly via the Laplace transform method. Another option for removing the defects mentioned above is the so-called stress-driven nonlocal integral elasticity theory [13][14] . The philosophy of such a novel theory is similar to Eringen's strain-driven nonlocal model, but by exchanging the positions of stress and strain terms, the integral constitutive relation is expressed as an integral convolution of the stress field instead of the strain field. Therefore, the constitutive BCs are converted from force-described ones to displacement-described ones, thereby eliminating the conflicts in BCs. So far, the stress-driven nonlocal model has been successfully used to investigate static and dynamic mechanical behaviors of beam structures, in which a consistently stiffening effect is captured [15][16] . However, the research objects of these studies focus on the classical or first-order shear beam models (i.e., the Euler-Bernoulli beam theory and the Timoshenko beam theory, respectively). In fact, the Euler-Bernoulli beam theory is the most straightforward beam theory and is inaccurate for estimating the deformation of non-slender beams owing to the neglect of the shear-deformable effect. Although the Timoshenko beam theory can overcome the Euler-Bernoulli beam theory's limitations by taking into account the shear-deformable effect, it does not satisfy free shear stress on the upper and lower surfaces of the beam. As a result, a shear correction factor needs to be introduced to compensate for the difference between the actual stress state and the assumed one. To remove these defects and better predict beam behaviors, various types of higher-order shear deformation beam theory are developed, such as the third-order one, the sinusoidal one, the hyperbolic one, the exponential one, and the exponential one [17] . The shear strain distribution across the beam section is governed by a shear shape function, which meets the free traction conditions on surfaces, and thus no shear correction factor is required. Furthermore, Vo and Thai [18] presented a refined shear deformation theory, which divided the components of transverse displacement into bending components and shear components. In this model, the bending and shear forces are only determined by the bending component and the shear component of the beams, respectively. Also, the bending component is similar to the expressions in the Euler-Bernoulli beam theory, and the shear component is expressed as a specific high-order variation of shear strain [18] . Therefore, the shear stress will also vanish at the top and bottom surfaces. After that, various higher-order refined shear deformation beam theories (HORSDBTs) have been proposed [19][20][21] . Up to now, although several authors have studied the mechanical behavior of nanostructures based on the higher-order shear deformation theories [22][23][24] , most of the existing literature is still based on Eringen's ill-posed constitutive relation, that is, the inherent constitutive BCs are rarely considered. 
Motivated by these reasons mentioned above, this work aims to extend those well-posedness nonlocal models to the application of higher-order refined shear deformation beams. The main novelty of this paper is a discovery that the stress-driven model is also ill-posed in higher-order refined shear deformation beams. As a remedy, the well-posedness and the consistency of both the strain-and stress-driven two-phase local/nonlocal mixed models are pertinently evidenced by studying the size-dependent bending of various high-order refine shear deformation beams. The content of this article is arranged as follows. In Section 2, the mathematical formulation of the bending problem of a functionally graded (FG) curved beam is established by using different versions of nonlocal elasticity. Such a beam model can degenerate into a homogeneous beam or a straight beam by adjusting related parameters. In Section 3, the ill-posedness of pure nonlocal elasticity of both the stress-and strain-driven types is pointed out, and the necessity to use the two-phase nonlocal models is illustrated. In Section 4, an effective method named as the generalized differential quadrature method (GDQM) is introduced to solve the coupling governing equations of curved beams. In Section 5, several numerical examples are considered to validate the consistency of the predicted scale-effects. Finally, the conclusions are given in Section 6. 2 Problem formulation 2.1 Higher-order refined shear deformation curved beam made of FG materials A slightly curved FG beam having the length L, the width b, the thickness h, and the constant curvature radius R together with the coordinate system is shown in Fig. 1. The properties of the beam are assumed to vary smoothly through the thickness of the beam by a power-law. Therefore, the useful material properties of the beam, including Young's modulus E, Poisson's ratio ν, and the shear modulus G are given by in which subscripts 1 and 2 denote the properties of materials 1 and 2, respectively. α is a non-negative power-law index, which governs the material distribution along the beam section. It is easy to know that α = 0 indicates homogenous cases (i.e., the full material 2), while α → ∞ indicates the full material 1. According to the assumption of the HORSDBTs, the general forms of the circumferential displacement field u x and the radial displacement field u z of a curved beam can be expressed as follows [25] : in which u(x) is the tangential displacement of any point on the central axis, and w b (x) and w s (x) stand for the bending and shear components of radial displacement, respectively. Also, a shape function f (z) is taken into account to determine the shear strain and stress distributions across the beam's thickness. In this paper, the following various shape functions [26][27] are adopted. (I) The third-order polynomial type of the HORSDBT (HORSDBT-T) (II) The sinusoidal type of the HORSDBT (HORSDBT-S) (III) The hyperbolic type of the HORSDBT (HORSDBT-H) (IV) The exponential type of the HORSDBT (HORSDBT-E) Moreover, a limit case f (z) = z corresponds to the Euler-Bernoulli beam theory. The non-zero strains of the curved beam can be given by [28]      where the prime symbol represents the differential with respect to the coordinate x. It can be seen that the shape function form depends on the satisfaction of stress-free BCs on the top and bottom surfaces of the beam without utilizing a shear correction coefficient. 
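To make the traction-free property of these shape functions concrete, the short numerical check below evaluates the transverse shear distribution g(z) = 1 - f'(z) implied by the refined-theory kinematics (straight-beam limit, curvature terms omitted) for one commonly used third-order choice, f(z) = 4z^3/(3h^2), and confirms that it vanishes at z = +/-h/2 while peaking at the mid-plane. This particular f(z) is an illustrative assumption on our part; the paper's own shape-function expressions for types (I)-(IV) are given in its display equations and are not reproduced here. With f(z) = z, the same expression gives g(z) = 0 everywhere, which recovers the Euler-Bernoulli limit noted above.

```python
# Numerical check that a candidate shape function yields zero transverse shear
# strain at the beam surfaces z = +/- h/2 (so no shear correction factor is needed).
# Assumed third-order choice for illustration: f(z) = 4 z^3 / (3 h^2), with the
# refined-theory shear distribution g(z) = 1 - f'(z) in the straight-beam limit.
import numpy as np

h = 1.0  # beam thickness (arbitrary units)

def f(z):
    return 4.0 * z**3 / (3.0 * h**2)

def g(z, dz=1e-6):
    # g(z) = 1 - f'(z), with f'(z) evaluated by a central difference
    return 1.0 - (f(z + dz) - f(z - dz)) / (2.0 * dz)

z = np.linspace(-h / 2, h / 2, 5)
print(np.round(g(z), 6))                       # surfaces ~0, mid-plane 1: parabolic-like profile
print(np.round(g(np.array([-h / 2, h / 2])), 6))  # traction-free surfaces: both ~0
```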
The variational principle of virtual work is used to derive the governing equations and BCs, which states where the variation of strain energy is On well-posedness of two-phase nonlocal integral models 935 and the virtual work done by the distributed external force q 0 and the concentrated force at beam ends is given by By inserting Eqs. (9) and (10) into Eq. (8) and setting the coefficients of δu, δw b , and δw s to zero, one can obtain the following governing equations: and the corresponding BCs at Those resultants in the above equations are defined as where A stands for the cross-sectional area of the beam. Different versions of nonlocal constitutive relation 2.2.1 Strain-driven type Eringen's strain-driven nonlocal model assumes that the nonlocal stress σ ij at any point is assumed to be a convolution integral of the strain ε ij at each point in the domain and a decaying kernel ψ. In this framework, the constitutive relation for a one-dimensional case is given by in which the so-called nonlocal parameter χ is used to depict long-range interactions in the region [x 1 , x 2 ]. However, some ill-posed issues appear when studying bounded structures. As a remedy, the strain-driven two-phase nonlocal (TPN-StrainD) model is proposed initially by Eringen, in which the stress-strain relation is expressed as a combination of classical local elasticity and nonlocal elasticity through a volume fraction ξ, In this paper, the classical Helmholtz kernel is adopted for all the nonlocal models mentioned, in which κ = χl e = e 0 a is a nonlocal length-scale parameter, and a and l e denote the internal and external characteristic lengths, respectively. e 0 is the non-negative nonlocal material constant. A combination of Eqs. (7), (13), (15), and (16) gives the strain-driven nonlocal constitutive equations of the FG curved beams, where the sectional rigidities are determined by Such a formula can revert to the strain-driven pure nonlocal and classical local theories by setting ξ = 1 and ξ = 0, respectively. Stress-driven type As another innovative strategy, a stress-driven nonlocal integral model was presented in Refs. [9], [13], and [14], which expresses the strain field at a certain point as a convolution integral of the stress field and the kernel at all points in the region, Following the successful application of the stress-driven nonlocal integral model to various size-dependent static and dynamic problems of simple beam models, Barretta et al. [29] also developed the stress-driven type of two-phase local/nonlocal mixed formulation, Such a two-phase strategy seems more applicable than the pure nonlocal model, because two control parameters can be used to reflect the nonlocal effect of different materials and structures. Using the TPN-StressD strategy with the Helmholtz kernel (see Eq. (16)), the constitutive relations of the one-dimensional FG curved beams become On well-posedness of two-phase nonlocal integral models 937 This formulation corresponds to the stress-driven pure nonlocal theory and the classical local theory as ξ = 1 and ξ = 0, respectively. Non-dimensional problem formulation By introducing the following non-dimensional quantities: one can obtain the equilibrium equations in non-dimensional forms as (hereafter, dropping the asterisks for convenience) and the corresponding BCs at x = 0, 1 are Besides, the non-dimensional constitutive equations under the two different types of nonlocal models are, respectively, as follows. (i) TPN-StrainD model According to Ref. 
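As an illustration of how the two-phase mixture acts on a one-dimensional field, the sketch below evaluates the strain-driven combination of local and convolved contributions with the Helmholtz kernel by simple quadrature. The scalar modulus, grid, trapezoidal rule, and sample strain field are illustrative choices of ours, not the paper's discretization.

```python
import numpy as np

def helmholtz_kernel(x, xp, kappa):
    """Bi-exponential (Helmholtz) averaging kernel with length scale kappa."""
    return np.exp(-np.abs(x - xp) / kappa) / (2.0 * kappa)

def two_phase_stress(x, strain, E, xi, kappa):
    """Strain-driven two-phase mixture: (1 - xi)*local + xi*nonlocal convolution,
    evaluated with trapezoidal quadrature (illustrative only)."""
    sigma = np.empty_like(strain)
    for i, xc in enumerate(x):
        kern = helmholtz_kernel(xc, x, kappa)
        sigma[i] = (1.0 - xi) * E * strain[i] + xi * E * np.trapz(kern * strain, x)
    return sigma

x = np.linspace(0.0, 1.0, 401)
strain = 1e-3 * np.sin(np.pi * x)        # sample strain field on a unit domain
sigma = two_phase_stress(x, strain, E=1.0, xi=0.3, kappa=0.15)
# xi -> 0 recovers the classical local law; xi -> 1 the pure nonlocal (strain-driven) law
```

Because the kernel is only normalized on an unbounded domain, the nonlocal term is attenuated near the ends of [0, 1]; this boundary effect is precisely what the constitutive BCs of the equivalent differential formulation encode.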
[29], one can derive the following equivalent differential TPN-StrainD constitutive equations: with the following constitutive BCs: 938 Pei ZHANG and Hai QING (ii) TPN-StressD model According to Ref. [29], the TPN-StressD constitutive equations can be equivalent to with the following constitutive BCs: 3 Ill-posedness of pure nonlocal models Strain-driven type In the frame of Eringen's strain-driven pure nonlocal model, the static bending of higher-order refined shear deformation curved beams is governed by the equilibrium equation (23) and the constitutive law (25) with ξ = 1. At the same time, the natural BCs (24) and the constitutive BCs (26)-(29) with ξ = 1 are mandatory. Now, assuming ξ = 1 and substituting first three equations of Eq. (25) into Eq. (23), one can easily find that the paradox existing in classical simple beam theories is still unavoidable; that is, the problem has more BCs than those required (i.e., 18 mandatory BCs and 12 required BCs). Since all BCs cannot be satisfied simultaneously, the problem may be over-constrained and usually cannot be solved. Besides, the influence of nonlocal parameter on the deformation of the cantilever beams under end-point load is inconsistent with that of other supported boundaries, which is mainly due to the conflict between the natural BCs (see the second equation to the fifth one of Eq. (24)) and the constitutive BCs (see Eqs. (27)-(28)). In the frame of Euler-Bernoulli and Timoshenko beam theories, such paradoxes of Eringen's strain-driven pure nonlocal model can be entirely overcome by using the stress-driven model. By exchanging stress and strain terms in the integral constitutive relation, the inherent constitutive BCs are transformed from stress-described ones into strain-described ones, thereby eliminating the conflict between the natural BCs and the constitutive BCs. Also, swapping the positions of the strain field and the stress field in the constitutive equation increases the number of required BCs, and thus the problem becomes solvable [16,30] . However, for higher-order shear deformation beam models, this is not the case. Stress-driven type In the frame of the stress-driven pure nonlocal model, the equilibrium equation and constitutive laws for the problem can be obtained from Eqs. (23) and (30) All in all, the two commonly used pure nonlocal elasticity models are both ill-posed when analyzing nonlocal higher-order refined beam models. Therefore, it is necessary and mandatory to try another strategy, i.e., two-phase nonlocal formulation. 4 Solution procedure for two-phase nonlocal models 4 .1 Equations in terms of displacements Since the derived equations with two-phase nonlocal models are lengthy and complex, it is hard to obtain their analytical solutions accurately. As an alternative, a numerical solution method, named as the GDQM, is adopted in this paper. Before this process, the governing equations and the corresponding BCs should be expressed in terms of displacements u, w b , w s and variable Q. 
With TPN-StrainD model Using the TPN-StrainD model, the governing equations can be re-expressed in terms of the displacement components, g 11 u + g 12 u + g 13 w b + g 14 w b + g 15 w b + g 16 w s + g 17 w s + g 18 w s = 0, (35) g 21 u + g 22 u + g 23 w b + g 24 w b + g 25 w b + g 26 w b + g 27 w s + g 28 w s + g 29 w s + g 210 w s − R 2 q 0 /(R 2 + κ 2 ) = 0, (36) g 31 u + g 32 u + g 33 w b + g 34 w b + g 35 w b + g 36 w b + g 37 w s + g 38 w s and the constitutive BCs become The natural BCs for different supported types can also be expressed in terms of displacements, which are not listed here for simplicity. Moreover, the explicit expressions for the coefficients n ij , m ij , g ij , and c ij are not listed here owing to the limited space of the article. With TPN-StressD model For the TPN-StressD model, the governing equations can also be expressed in terms of displacements, namely, and the constitutive BCs become On well-posedness of two-phase nonlocal integral models Similarly, the natural BCs in displacement forms are not listed here for brevity. Moreover, the explicit expressions for the coefficients N ij , M ij , G ij , and C ij are not listed here owing to the limited space of the article. Remark 1 According to the above formulation, it can be found that, when using the TPN-StrainD or TPN-StressD model, the orders of the unknown variables, i.e., u, w b , w s , and Q, are 4, 6, 6, and 2, respectively, which means that 18 BCs are required. This number is exactly equal to the number of mandatory BCs for the problem (i.e., 10 natural BCs and 8 constitutive BCs). That is to say, the ill-posedness in the pure nonlocal models is utterly avoided by using two-phase nonlocal strategies. Therefore, when solving the size-dependent problems of the higher-order refined beam models, a two-phase nonlocal strategy must be used to replace the pure nonlocal model, regardless of the strain-and stress-driven nonlocal types. Solution procedure By using the GDQM [31] , the coupled differential equations with arbitrary BCs can be solved. First, for faster convergence and better calculation accuracy, the grid points are considered as Thereafter, the following vectors are defined: , · · · , w bn , w b1 , w bn , w b1 , w bn ) , d ws = (w s1 , w s2 , · · · , w sn , w s1 , w sn , w s1 , w sn ) , while the approximation forms of the unknown variables can be expressed by using different interpolation functions, where n is the number of the grid points, ϕ u/w (x), ψ u/w (x), and ϑ w (x) are the Hermite interpolation functions [31][32] , and l j (x) denotes the Lagrange interpolation function. Based on these assumptions, the governing equations and BCs of the problem can be expressed in matrix forms as below, through which the unknown variables can be carried out. Validation Since the ill-posedness of the pure nonlocal models exists, and there is no relevant research on higher-order refined beams related to the two-phase nonlocal models, the efficiency and accuracy of the present solution procedure are validated by comparing the results of a degradation case (i.e., ξ = 0.000 1, κ = 0.001) with those of the local elasticity theory. In calculations, the number of grid points is set as n = 21. For comparison, the FG parameter is set as α = 0, and the material properties of the beam are assumed to be the same as in Ref. 
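A compact sketch of the GDQ machinery is given below: Chebyshev-Gauss-Lobatto points on [0, 1] and the first-order weighting matrix built from Lagrange polynomials (Shu's formula), tested with n = 21 grid points as in the validation above. The paper additionally uses Hermite interpolation for the variables that carry extra boundary degrees of freedom, which is not reproduced here; the grid formula and test function are illustrative.

```python
import numpy as np

def cgl_points(n):
    """Chebyshev-Gauss-Lobatto points mapped to [0, 1] (a common GDQM grid;
    the paper's grid expression may differ slightly)."""
    return 0.5 * (1.0 - np.cos(np.pi * np.arange(n) / (n - 1)))

def dq_weights_first(x):
    """First-order GDQ weighting matrix from Lagrange interpolation (Shu's formula)."""
    n = len(x)
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, 1.0)
    M = np.prod(diff, axis=1)              # M(x_i) = prod_{k != i} (x_i - x_k)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                A[i, j] = M[i] / ((x[i] - x[j]) * M[j])
        A[i, i] = -np.sum(A[i, :])         # row sum of the derivative operator is zero
    return A

n = 21
x = cgl_points(n)
A = dq_weights_first(x)
D2 = A @ A                                  # second-derivative operator
u = np.sin(np.pi * x)
err = np.max(np.abs(A @ u - np.pi * np.cos(np.pi * x)))
print(err)                                  # tiny (spectral accuracy)
```

Higher-order derivative operators needed by the coupled governing equations can be formed either by matrix products, as with D2 above, or by the standard GDQ recurrence relations.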
[26], while the following non-dimensional deflections are defined [26] : From Table 1, one can see that, for different values of the length to height ratio L/h, as the curvature radius R = 10 000 (i.e., approximately regarded as infinite), the difference between the present solutions and the results of those straight beams in Ref. [26] can be negligible, regardless of TPN-StrainD and TPN-StressD models. Table 1 Comparison of non-dimensional midpoint deflections w b (0.5) + ws(0.5) of simply-supported (SS) curved beams (R = 10 000) subjected to uniformly distributed loads, predicted by TPN models (with ξ = 0.000 1, κ = 0.001) and local theory Beam theory TPN-StrainD TPN-StressD Local theory [26] L/h = 5 L/h = 20 Moreover, a comparison between a limit case (f (z) = z, g(z) = 0) of present models and the two-phase nonlocal Euler-Bernoulli beam model [29] is made in Table 2. Also, good agreement is obtained for both the TPN-StrainD and TPN-StressD models. These comparative results show that the present solution procedure is effective and accurate. Consistency of nonlocal effects In this subsection, the numerical examples are considered to study the consistency of current two-phase nonlocal integral models for addressing the size-effect in bending behavior of various higher-order refined shear deformation curved beams (i.e., based on all of the shear functions HORSDBT-T, HORSDBT-S, HORSDBT-H, and HORSDBT-E) made of FG materials. Different boundary edges and loading conditions are considered, including SS, clamped-clamped (CC), and CF beams subjected to a uniformly distributed load q 0 , as well as CF beams under an end-point load Q. Moreover, it is worth mentioning that, unless otherwise stated, the definitions (see Eqs. (55) and (56)) in Subsection 5.1 will continue to be used in subsequent sections. In Figs. 2-5, the subfigure (a) depicts the variation of the dimensionless midpoint or tip deflections versus the volume fraction parameter ξ with a fixed dimensionless nonlocal parameter κ = 0.15, and the subfigure (b) presents the dimensionless midpoint or tip deflections versus the dimensionless nonlocal parameter κ with a fixed volume fraction parameter ξ = 0.3. In calculations, the FG power-law index, the dimensionless curvature radius, and the length to height ratio of the beam are assumed to be α = 0, R = 10, and L/h = 10. It can be seen from these pictures that, for all boundary types, the TPN-StainD nonlocal integral model shows a consistent softening impact on beam deformations. Increasing the value of the dimensionless length-scale nonlocal parameter κ or the volume fraction of nonlocal part ξ leads to an increase in dimensionless deflections of the beam. On the contrary, there is a stiffening effect on the beams for the stress-driven strategy as the nonlocal length-scale parameter increases. Moreover, the larger the volume fraction of the nonlocal stress part, the more significant the stiffening effect of the nonlocal length-scale parameter on beams. That is to say, both types of two-phase nonlocal models can predict consistent size-dependent responses. Therefore, the two-phase nonlocal integral strategies are suitable for addressing the is considered in Fig. 6. One can find that the dimensionless deflections of SS and CC beams decrease significantly with an increased length to height ratio. When the slenderness ratio is greater than 10, the deflection change of the CF beams is relatively tiny. 
Figure 7 illustrates the variation of the non-dimensional midpoint or tip deflections versus the curvature radius of different supported curved beams based on both TPN-StrainD and TPN-StressD models with different volume fractions ξ = 0.25, 0.5, and 0.75. The HORSDBT-S is utilized. In contrast to the stress-driven pure nonlocal model [30] , increasing the curvature radius leads to the increased beam deflections for all boundary types. Moreover, with the increase in the curvature radius, the nonlocal effects on SS and CC curved beams are enhanced by increasing the volume fraction ξ. However, for cantilever beams, the effect of nonlocal volume fraction seems to have nothing to do with the radius of curvature of the curved beams, because the shapes of the deflection curves are almost the same at different values of the radius of curvature. volume fractions ξ = 0.25, 0.5, and 0.75. It can be seen that the non-dimensional deflections of all beams consistently increase with the increase in the FG parameter. As the FG index α increases, the proportion of the material 1 increases, making the sectional stiffness of the beams decrease. As the index α continues to increase, the FG influence gradually stabilizes. Conclusions In this paper, the ill-posedness of both the strain-and stress-driven models for higher-order refined beam models is uncovered. As a remedy, the well-posedness of the two-phase strategy of nonlocal integral models is illustrated by analyzing the scale-effected bending of a curved beam made of FG materials. The governing equations and the corresponding BCs are established by invoking the variational principle of virtual work. The two-phase nonlocal integral constitutive law is converted to an equivalent differential one with two constraint BCs. With the GDQM, the coupling governing equations are solved numerically. Then, several numerical examples are given for investigating the consistency of the two-phase nonlocal models. From the above results, the conclusions are given below. (i) Unlike the case based on the classical simple beam theories, the strain-and stress-driven pure nonlocal models are both ill-posed for higher-order refined shear deformation beam models. There are too many BCs in such problems to be met simultaneously, thereby leading to ill-posed issues. (ii) With the utilization of the two-phase nonlcoal strategy, the number of mandatory BCs for the problem happens to equal the number of BCs required, and the consistent size-effects can be predicted for different boundary and loading conditions. Therefore, it is necessary and mandatory to adopt the two-phase local/nonlocal formulation when analyzing the higher-order shear deformation beam models, regardless of the strain-and stress-driven types. (iii) Furthermore, the effects of the length to height ratio, the curvature radius, and FG materials are investigated extensively. It is found that the dimensionless deflections decrease with the increase in the length to height ratio. Increasing the curvature radius leads to an increase in beam deflections. With the increase in the curvature radius, the nonlocal effect of the volume fraction ξ on SS and CC curved beams is significantly enhanced. The non-dimensional deflections of curved beams under all supported conditions consistently increase with the increase in the FG parameter, and as the index α continues to increase, the FG effect on the beam gradually stabilizes. 
This study reveals the ill-posedness of both the strain- and stress-driven pure nonlocal models for various higher-order refined beam models. The conclusions and the present solution methods are also applicable to other structural problems that involve higher-order shear deformation assumptions.
Performance of MODIS Deep Blue Collection 6.1 Aerosol Optical Depth Products Over Indonesia: Spatiotemporal Variations and Aerosol Types This study aims to evaluate the performance of the long-term Terra Moderate Resolution Imaging Spectroradiometer (MODIS) Deep Blue (DB) Collection 6.1 (C6.1) in determining the spatiotemporal variation of aerosol optical depth (AOD) and aerosol types over Indonesia. For this purpose, monthly MODIS DB AOD datasets are directly compared with Aerosol Robotic Network (AERONET) Version 3 Level 2.0 (cloud-screened and quality-assured) monthly measurements at 8 sites throughout Indonesia. The results indicate that MODIS DB AOD retrievals and AERONET AOD measurements have a high correlation in Sumatra Island (i.e., Kototabang ( r (cid:31) 0.88) and Jambi ( r (cid:31) 0.9)) and Kalimantan Island (i.e., Palangkaraya ( r (cid:31) 0.89) and Pontianak ( r (cid:31) 0.92)). However, the correlations are low in Bandung, Palu, and Sorong. In general, MODIS DB AOD tends to overestimate AERONET AOD at all sites by 16 to 61% and can detect extreme fire events in Sumatra and Kalimantan Islands quite well. Aerosol types in Indonesia mostly consist of clean continental, followed by biomass burning/urban industrial and mixed aerosols. Palu and Sorong had the highest clean continental aerosol contribution (90%), while Bandung had the highest biomass burning/urban-industrial aerosol contribution to atmospheric composition (93.7%). For mixed aerosols, the highest contribution was found in Pontianak, with a proportion of 48.4%. Spatially, the annual mean AOD in the western part of Indonesia is higher than in the eastern part. Seasonally, the highest AOD is observed during the period of September–November, which is associated with the emergence of fire events. Introduction Aerosol is a collection of liquid and solid particles measuring 0.001-100 microns that are suspended in the atmosphere, except for hydrometeors (raindrops, cloud droplets, ice crystals, and snow akes) [1]. Based on the source, aerosols consist of natural sources and anthropogenic sources [2,3]. Natural sources include sea spray, mineral dust, vegetation res, and volcanic ash. Anthropogenic sources, for example, are the combustion of fossil fuels, biofuels, or vegetation res caused by humans [4]. Aerosols can act as solar and terrestrial radiation absorbers and scatterers, as well as condensation nuclei in water droplets and ice crystals, potentially a ecting climate change [5,6], human health [7,8], and air quality [9]. As solar radiation scatterers, aerosols (e.g., sulfate aerosols) play the opposite role to greenhouse gases in the atmosphere, causing a direct e ect such as cooling the Earth's surface and also having an indirect effect by altering cloud formation and their properties [10,11]. However, some aerosols (e.g., black carbon) can act as solar radiation absorbers, causing warming in the troposphere and affecting atmospheric stability and cloud microphysics [5,12]. Indonesia is an archipelagic country that has approximately 17,000 islands, with five major islands, namely Sumatra, Java, Kalimantan, Sulawesi, and Papua. Currently, Indonesia's total population reaches more than 270 million people and ranks as the fourth most populous country in the world. Only about 30% of Indonesia's territory is land, and it has a complex topography with vegetation cover dominated by forestland. Naturally, Indonesia produces aerosols derived from organic components of vegetation, forest fires, sea salt, and volcanic ash. 
Furthermore, man-made aerosols are also generated by urban/industrial activities such as burning fossil fuels and burning biomass. Aerosol optical depth (AOD) is a parameter used to determine the quantity of aerosol in the atmosphere. AOD is obtained by calculating the amount of light absorbed or scattered in an atmospheric column [13]. AOD can be obtained from direct sunlight measurements on the Earth's surface using a sun photometer and indirectly from reflected radiation from the Earth's surface captured by satellite sensors [14]. Ground-based AOD measurements provide aerosol properties at specific locations that have a high temporal and spectral resolution but have a weakness in spatial resolution. In contrast, satellite-based AOD retrievals provide aerosol information with high spatial resolution but low accuracy [15]. Aerosol Robotic Network (AERONET) is a global ground-based remote sensing network established by NASA (National Aeronautics and Space Administration) and PHOTONS (Photométrie pour le Traitement Opérationnel de Normalization Satellitaire) that aims to conduct longterm aerosol observations and analyze local aerosol optical properties. Additionally, AERONET data can be used to validate satellite remote sensing data [16,17]. Although ground-based aerosol measurements have a high temporal resolution, global-scale AOD data from satellites are required for a better understanding of the distribution and influence of aerosols on a larger scale. Remote sensing can acquire aerosol properties on a wider scale. e moderate resolution imaging spectroradiometer (MODIS) instrument on Aqua and Terra satellites can provide aerosol information spatially and temporally at global and regional scales [18]. MODIS has a spectral range of 36 bands at a wavelength of 0.4-1.44 nm. is satellite is a polar orbital satellite that operates at an altitude of 705 km with a width of view of 2230 km and a temporal scale of 1-2 days. e Terra spacecraft crossed the equator at 10:30 am local standard time (LST), and the Aqua spacecraft crossed the equator at 13:30 LST [19]. Many studies have validated AOD between satellite-based and ground-based measurements in various parts of the world and found a high correlation [20][21][22][23]. e MODIS collection 6.1 (C6.1) AOD dataset is the most recent version in which the aerosol data collection process has been improved. ere are two well-known official aerosol retrieval algorithms, including the dark target (DT) algorithm over land and ocean and the deep blue (DB) algorithm over land. In this study, we used MODIS DB C6.1 AOD products with the following considerations, and the DB algorithm has been developed to have a good performance on bright surfaces such as deserts and snowy areas but also be good at interpreting surfaces that have high vegetation, such as those in the tropics [24]. In addition, the DB product is superior at the site scale [25]. Several studies examining the performance of MODIS in conducting AOD retrieval in Indonesia, especially Kalimantan forest fires in 2015, show that the MODIS satellite is good at capturing fire events [26]. ere is no study that has been conducted to investigate the performance of the MODIS DB C6.1 satellite in Indonesia. is study aims to examine the performance of the Terra MODIS DB C6.1 AOD retrievals over Indonesia by comparing them with ground-based AERONET measurements over a long-term period. Previous studies have also utilized AOD and its properties to detect aerosol types over the Middle East [27]. 
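For reference, pairing a level-3 1° × 1° product with a ground site reduces to selecting the grid cell whose center is nearest to the site, as sketched below. The site coordinates are approximate illustrative values, and the variable `aod_grid` is a placeholder; reading the actual SDS from the M*08_M3 files (e.g., with a HDF reader) is not shown.

```python
import numpy as np

# Centers of a global 1 x 1 degree level-3 grid (180 x 360 cells)
lat_centers = np.arange(89.5, -90.0, -1.0)
lon_centers = np.arange(-179.5, 180.0, 1.0)

def nearest_cell(site_lat, site_lon):
    """Index of the 1 x 1 degree grid cell whose center is closest to the site."""
    i = np.argmin(np.abs(lat_centers - site_lat))
    j = np.argmin(np.abs(lon_centers - site_lon))
    return i, j

# Example: approximate coordinates of the Pontianak site (illustrative values)
i, j = nearest_cell(0.0, 109.3)
# aod_site = aod_grid[month, i, j]   # aod_grid read beforehand from the monthly SDS
```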
erefore, MODIS DB C6.1 AOD datasets were analyzed to classify aerosol types and assess their contribution to aerosol composition at AERONET sites in Indonesia. Finally, spatial and seasonal variations of aerosols over Indonesia were discussed. Methods AERONET measures aerosols on the ground using a Cimel sun photometer, which is a multichannel, automatic sunand-sky scanning radiometer that measures the direct solar irradiance and sky radiance at the Earth's surface. e instrument serves to measure direct sun and diffuse sky radiances at wavelengths of 340, 380, 440, 500, 675, 870, 1020, and 1640 nm where these measurements will produce AOD andÅngström exponent (AE) [28]. AE is often used as a qualitative indicator of aerosol particle size. e greater the AE value, the smaller the aerosol particle size and vice versa [29]. ere are three levels of data on AERONET, namely data level 1.0 (unscreened), level 1.5 (cloud-screened and quality controlled), and level 2.0 (cloud-screened and quality-assured). AERONET data can be downloaded on the AERONET website (https://aeronet.gsfc.nasa.gov). ere are ten AERONET sites in Indonesia, but only eight of them provide level 2.0 data. e eight AERONET sites used in this study include GAW Kototabang, Jambi, Bandung, Pontianak, Palangkaraya, Makassar, GAW Palu, and Sorong ( Figure 1). is study uses monthly AERONET AOD data level 2.0 version 3.0 from 2009 to 2019 (11 years). However, at several sites, the installation of sun photometers started in 2012 and 2015, so the length of the available AERONET AOD data is limited. e Terra MODIS DB C6.1 level 3 AOD monthly data (M x 08_M3) with 1°× 1°horizontal resolution were derived from level 1 and atmosphere archive & distribution system (LAADS) (https://ladsweb.nascom.nasa.gov) from 2009 to 2019 (11 years) [30]. MODIS DB C6.1 has better spatial coverage, including vegetated and bright surfaces [31]. e MODIS DB C6.1 AOD at 550 nm was obtained by interpolation at 470 nm and 670 nm wavelengths. In global climate modeling, the 550 nm wavelength is very important because it is the most scattered in the atmosphere and is widely used in various chemistry models [32]. In this study, the monthly MODIS AOD retrievals were derived from the Scientific Data Set (SDS) "Deep_-Blue_Aerosol_Optical_Depth_550_Land_Mean_Mean" and defined by centering the nearest pixel on the AERONET site. e corresponding monthly AERONET AOD measurement was regarded as the true value. e MODIS AE was obtained from SDS "Deep_Blue_Aerosol_Optical_Depth_Land_ Mean_Mean". Since the SDS provides only 3 visible wavelengths (412 nm, 470 nm, and 660 nm), then the MODIS AE value is calculated using equation (1). In addition, the MODIS Terra active fire products were derived from NASA Fire Information for Resource Management System (FIRMS) (https://firms.modaps.eosdis.nasa. gov/active_fire) [33] with an 80% confidence level to investigate the effect of fire events (e.g., forest fires and agricultural residues) on AOD in Indonesia. Furthermore, since AERONET does not directly measure AOD at 550 nm, then AERONET AOD at 550 nm wavelength is interpolated using the power law given in equation (2). α in equation (2) represents the value of AERONET AE at 440-870 nm. e performances of the MODIS AOD retrievals are evaluated by calculating relative mean bias (RMB) (Equation (3)), root mean-square-error (RMSE) (Equation (4)), mean absolute error (MAE) (Equation (5)), and Pearson correlation coefficient (r). 
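A short sketch of these quantities is given below: the two-wavelength Ångström exponent, the power-law interpolation of AOD to 550 nm, and the evaluation statistics. Since Equations (1)-(5) are not reproduced in the text above, the expressions here are standard forms; in particular, the RMB below is one common definition of a relative mean bias and stands in for Equation (3).

```python
import numpy as np

def angstrom_exponent(tau1, lam1, tau2, lam2):
    """Two-wavelength Angstrom exponent (standard power-law form)."""
    return -np.log(tau1 / tau2) / np.log(lam1 / lam2)

def aod_to_550(tau_ref, lam_ref, alpha):
    """Interpolate AOD from a reference wavelength (nm) to 550 nm with the power law."""
    return tau_ref * (550.0 / lam_ref) ** (-alpha)

def evaluation_metrics(modis, aeronet):
    """RMB (%), RMSE, MAE, and Pearson r for collocated monthly AOD pairs."""
    diff = modis - aeronet
    rmb = 100.0 * diff.mean() / aeronet.mean()   # >0: overestimation, <0: underestimation
    rmse = np.sqrt(np.mean(diff ** 2))
    mae = np.mean(np.abs(diff))
    r = np.corrcoef(modis, aeronet)[0, 1]
    return rmb, rmse, mae, r

# e.g., AERONET AOD at 500 nm moved to 550 nm with the 440-870 nm Angstrom exponent:
# aod550 = aod_to_550(aod500, 500.0, ae_440_870)
```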
Quantitative evaluation of the AOD retrieval uncertainty is described using the expected error (EE) envelope that encompasses the sum of absolute and relative errors as shown in Equation (6a) [34,35]. e slope and intercept between collocated MODIS AOD and AERONET AOD were calculated using the reduced major axis (RMA) method, which incorporates errors in both independent (AERONET) and dependent (MODIS) variables [36]. Advances in Meteorology 3 where |EE| is the absolute value of EE. RMB > 0 and RMB < 0 represent over-and under-estimation of MODIS AOD retrievals compared to AERONET AOD, respectively. RMSE � 0 represents the collocated points on the 1 : 1 (x � y) line, and RMSE > 0 represents the collocated points scattered away from the 1 : 1 line. Several studies have shown that a relationship between AOD 550 nm and AE can be utilized to determine aerosol types as shown in Table 1. In this study, aerosols have been classified into (1) clean continental, (2) biomass burning/ urban industrial, (3) clean marine, (4) desert dust, and (5) mixed type aerosols. is classification method is based on previous studies [37,38]. Low correlations between MODIS AODs and AERO-NET AODs were found in Bandung (r � 0.30, n � 90) with 52.81% of retrievals falling within EE, GAW Palu (r � 0.23, n � 24) with 20.83% of retrievals falling above EE, and Sorong (r � 0.35) with 10% of retrievals falling above EE. While in Makassar, the correlation is a bit high (r � 0.64, n � 15), but only 26.67% of retrievals fall within EE. Poor performance of MODIS AODs at GAW Palu, Makassar, and Sorong may be caused by the small number of available observations, but it is not the case in Bandung. However, if we look in detail, the similarity of the four sites was having low AOD variations, and AERONET AOD values are less than 1. is may suggest that MODIS was unable to capture low AOD variations at that site, which is probably due to coarse spatial resolution. e RMB values are always more than 0 at all AERONET sites, meaning overestimation of MODIS AOD retrievals compared to AERONET AOD. In general, MODIS AOD tends to overestimate AERONET AOD by 16.28% (Sorong) to 61.11% (GAW Palu). e time series plot of monthly MODIS AOD and AERONET AOD is depicted in Figure 3. It is shown that MODIS AOD can capture the peak AOD, which represents extreme events from the AERONET observation data. An extreme event could cause AOD to increase significantly in Indonesia, such as forest fires. In Indonesia, forest fires are rarely caused by nature but mainly by local communities clearing agricultural or plantation land. Forest fires often occur in Sumatra (Jambi) and Kalimantan (Pontianak and Palangkaraya), which cause regional air pollution [39]. e seasonality of Indonesia is mainly driven by the Asian monsoon (wet season) and the Australian monsoon (dry season) [40]. During the dry season (June-November), AOD has increased in several areas in Indonesia, such as Jambi, Pontianak, and Palangkaraya ( Figure 3). is is likely related to forest fires that occur in the area since it is favourable to trigger forest fires during the dry season than during the wet season. In addition, the highest peak of AOD with a value of 3-4 occurred in September-October 2015 recorded in Jambi, Pontianak, and Palangkaraya. is extreme event was closely related to forest fires which were exacerbated by the strong El Niño event in 2015/2016. Results and Discussions Low AOD variations were found in Kototabang, GAW Palu, Sorong, Bandung, and Makassar. 
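The fraction of retrievals falling within or above the EE envelope and the AOD-AE aerosol typing can be computed as in the sketch below. The envelope coefficients ±(0.05 + 0.2·AOD) are typical Deep Blue over-land values used as stand-ins for Equation (6a), and the classification thresholds are placeholders for Table 1, which is not reproduced above; they should be replaced with the study's actual cut-offs.

```python
import numpy as np

def fraction_within_ee(modis, aeronet, a=0.05, b=0.20):
    """Fraction of collocations inside an expected-error envelope +/-(a + b*AOD_AERONET);
    a and b are typical Deep Blue land values standing in for Eq. (6a)."""
    ee = a + b * aeronet
    return np.mean(np.abs(modis - aeronet) <= ee)

def classify_aerosol(aod550, ae):
    """Illustrative AOD-AE thresholds (placeholders for the paper's Table 1)."""
    if aod550 < 0.2:
        return "clean marine" if ae < 0.9 else "clean continental"
    if aod550 > 0.3 and ae < 0.7:
        return "desert dust"
    if aod550 > 0.3 and ae > 1.0:
        return "biomass burning / urban industrial"
    return "mixed"
```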
e first three locations (Kototabang, Palu, and Sorong) are the locations of global atmospheric watch (GAW) stations, which are located in remote areas. Although Bandung is one of the big cities in Indonesia, Bandung still has a low AOD variation. is condition may be influenced by the humid and cool highland climate to prevent the spread of pollutants. Similar to Bandung, Makassar is also a big city in Indonesia, but the AERONET AOD measurement in this city is still very limited. High AOD variations in Indonesia are generally caused by forest fires [41]. e increase in AOD value at Kototabang in September 2019 was influenced by forest fires in Sumatra that occurred during that period [42]. Despite the lack of time series data, Figure 3 shows that MODIS is generally able to capture the temporal pattern of AOD in Indonesia, especially since MODIS is quite good at detecting extreme values at the observation site. In order to classify the aerosol types according to Table 1, Figure 4 shows the relationship between MODIS AOD and MODIS AE from 2009 to 2019. e x-axis is the AOD at 550 nm obtained from the MODIS DB C6.1, while the y-axis is theÅngström exponent (AE) value of the MODIS DB C6.1. e contribution of each aerosol type at Advances in Meteorology each site was then calculated as a percentage and shown in Figure 5. e contribution of aerosol types at AERONET sites in Indonesia is depicted in Figure 5. Most of the aerosols are clean continental (CC), followed by biomass burning/ urban-industrial (BB/UI). e highest contribution of CC aerosol was found in GAW Palu and Sorong, with a contribution of more than 90%, while the highest contribution of BB/UI aerosol was found in Bandung, with a contribution of 93.7%. For mixed aerosols, the highest contribution was in Pontianak, with a contribution of 48.4%, while CM and DD aerosols were not found at all observation sites. CC aerosols are natural aerosols that originate from areas that still have a lot of forests or urban areas that have large power plants or petrochemical refining [43,44]. e observation stations at GAW Palu and Sorong have a dominant contribution from CC aerosols. Both sites are located in remote areas surrounded by tropical forests where there are fewer human activities related to fossil fuel combustion, such as industry and motor vehicles, that produce air pollution. Advances in Meteorology BB/UI aerosols are aerosols that come from fossil fuels burning in industrial areas [45,46]. ese aerosols enter into an energy balance that is useful either for scattering solar radiation directly into space (direct effect) or by increasing cloud albedo through microphysical processes (indirect effect) [47][48][49]. ese aerosols also have an indirect effect on the radiative and microphysical properties of clouds, which together influence the formation of precipitation [50]. Atmospheres containing a Table 1. Advances in Meteorology 7 high concentration of aerosols are associated with reduced light precipitation and increased moderate and heavy rainfall [51]. GAW Palu, Sorong, and Makassar are the three locations that have the lowest BB/UI contribution, namely 1.6%, 2%, and 11.4%, respectively. is means that industrial or fossil fuel burning activities are still minimal in these areas. In the meantime, GAW Kototabang has a BB/UI contribution of 16.7%, indicating the area has started to be affected by the impact of fossil fuel burning or industrial activities. 
On the other hand, capital cities like Palangkaraya, Bandung, Pontianak, and Jambi have BB/UI contributions of 52.9%, 93.7%, 34.4%, and 82.6%, respectively, which have been affected by industrial activities or the burning of fossil fuels. Based on the annual mean MODIS AOD at 550 nm from 2009 to 2019, Figure 6 illustrates that the western region of Indonesia, which includes Sumatra, Kalimantan, and parts of western Java, has a higher AOD value than other parts of Indonesia, which reaching an AOD value of 0.6. Meanwhile, the central and eastern parts of Java, Nusa Tenggara, Sulawesi, Maluku, and Papua (eastern region of Indonesia) have an AOD value that is relatively lower, only in the range of 0-0.2. is is likely because of two main factors. First, Sumatra and Kalimantan are home to seasonal forest fire events in Indonesia that can increase the AOD significantly. Second, urban and industrial development has been concentrated in the western part of Indonesia for the last few decades, so the AOD value is higher than in the eastern part of Indonesia. e spatial and seasonal variation of MODIS AOD at 550 nm is depicted in Figure 7 for December-February (DJF), March-May (MAM), June-August (JJA), and September-November (SON). e AODs were low and evenly distributed over Indonesian land during DJF, which is associated with the rainy season in most of Indonesia's regions. Previous studies showed that light precipitation decreases air quality while heavy rainfall improves the air quality [52,53]. Meanwhile, the highest AOD values were observed during SON (the transition period from the dry to wet season), especially for Sumatra, Kalimantan, and most parts of Java. is condition was related to the emergence of forest fire events that caused an increase in AOD in Sumatra and Kalimantan during August, September, and October ( Figure 8). In Java, where urban and industrial development has been established, the spatial average of AOD is consistently high during all seasons, but it seems that the spatial average of AOD is higher during the transition Table 1. It is also worth noting that a bit of AOD during JJA and SON in the southern part of Papua Island may also be related to forest and land fires. Figure 8 shows the seasonality of fire events derived from MODIS Terra active fire products in Indonesia from the period 2009 to 2019. e highest active fire count occurred in September, followed by November. e months with the lowest active fire counts were December-January and April-May. is figure clarifies the positive relationship between the number of fire events and AOD in Indonesia. During June-September, an increase in active fire events is likely to induce an increase in AOD values and AOD variations, with the dominant contribution coming from fire events in Sumatra and Kalimantan. is supports the results of a previous study that found high AOD variations in Indonesia are generally caused by forest fires [41]. Conclusions e objective of this study is to investigate the performance of Terra MODIS Deep Blue (DB) Collection 6.1 (C6.1) AOD over Indonesia from the period of 2009-2019. For this purpose, monthly MODIS DB AOD retrievals were collected and compared against ground-based monthly AERONET AOD measurements from 8 AER-ONET sites in Indonesia during the same period. Performance of these monthly AOD retrievals at site scales and determination of the annual mean AOD spatial distributions and seasonal variations as well as aerosol types are carried out for the first time. 
The results illustrated that MODIS DB AOD retrievals and AERONET AOD measurements have a high correlation in Sumatra Island (i.e., Kototabang (r = 0.88) and Jambi (r = 0.9)) and Kalimantan Island (i.e., Palangkaraya (r = 0.89) and Pontianak (r = 0.92)). However, the correlations are low in Bandung, Palu, and Sorong, which is likely due to low AOD variations and a lack of observation data. Generally, MODIS DB AOD tends to overestimate AERONET AOD at all sites by 16 to 61% and can detect extreme fire events in Sumatra and Kalimantan Islands quite well. For spatial distributions, the annual mean AOD in the western part of Indonesia is higher than in the eastern part. Furthermore, for seasonal variations, the highest AOD is observed during the period of September-November, which is associated with the emergence of fire events, especially the ones that occurred in Sumatra and Kalimantan. Aerosol types in Indonesia mostly consist of clean continental, followed by biomass burning/urban industrial and mixed aerosols. The highest clean continental aerosol contribution (90%) was identified in Palu and Sorong, which are located in remote areas, while the highest biomass burning/urban-industrial aerosol contribution (93.7%) was found in Bandung, one of the big cities in Indonesia. Conflicts of Interest The authors declare no conflicts of interest.
Stroke risk study based on deep learning-based magnetic resonance imaging carotid plaque automatic segmentation algorithm Introduction The primary factor for cardiovascular disease and upcoming cardiovascular events is atherosclerosis. Recently, carotid plaque texture, as observed on ultrasonography, is varied and difficult to classify with the human eye due to substantial inter-observer variability. High-resolution magnetic resonance (MR) plaque imaging offers naturally superior soft tissue contrasts to computed tomography (CT) and ultrasonography, and combining different contrast weightings may provide more useful information. Radiation freeness and operator independence are two additional benefits of M RI. However, other than preliminary research on MR texture analysis of basilar artery plaque, there is currently no information addressing MR radiomics on the carotid plaque. Methods For the automatic segmentation of MRI scans to detect carotid plaque for stroke risk assessment, there is a need for a computer-aided autonomous framework to classify MRI scans automatically. We used to detect carotid plaque from MRI scans for stroke risk assessment pre-trained models, fine-tuned them, and adjusted hyperparameters according to our problem. Results Our trained YOLO V3 model achieved 94.81% accuracy, RCNN achieved 92.53% accuracy, and MobileNet achieved 90.23% in identifying carotid plaque from MRI scans for stroke risk assessment. Our approach will prevent incorrect diagnoses brought on by poor image quality and personal experience. Conclusion The evaluations in this work have demonstrated that this methodology produces acceptable results for classifying magnetic resonance imaging (MRI) data. Introduction Global mortality and morbidity are primarily caused by cardiovascular disease (CVD), and 17.9 million fatalities each year globally are attributable to CVD, or 31% of all deaths (1). The primary factor for CVD and upcoming cardiovascular events is atherosclerosis. Atherosclerosis development and plaque formation in the vasculature, including the coronary and carotid arteries, are the primary causes of CVD (2). Plaque rupture or ulceration frequently leads to the development of a thrombus, which may embolize or occlude the lumen, blocking blood flow and resulting in myocardial infarction or stroke (3). The plaque is seen and screened using a variety of medical imaging techniques, the most popular of which are magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound (US). Recently, because of significant inter-observer variability, the texture of carotid plaques, as seen on ultrasonography, is variable and challenging to classify with the human eye. In order to determine the mechanical qualities caused by the influence of the lipid core and calcification within a plaque, numerical simulation is also employed to define the distribution and components of the plaque structure (4). Compared to CT and ultrasonography, high-resolution MR plaque imaging provides naturally superior soft tissue contrasts, and a combination of various contrast weightings may yield more insightful data. Two further advantages of MRI include operator independence and the absence of radiation. However, other than preliminary research on MR texture analysis of basilar artery plaque, there is currently no information addressing MR radiomics on carotid plaque (5). 
Since medical images contain a plethora of information, many automatic segmentation and registration approaches have been investigated and proposed for use in clinical settings. Deep learning technology has lately been used in various industries to evaluate medical images, and it is particularly good at tasks like segmentation and registration. Several CNN architectures have been suggested that feed whole images with increased image resolution (6). Fully CNN (fCNN) was developed for segmenting images and was first introduced by Long et al. (7). However, fCNNs produce segmentations with lower resolution than the input images. That was brought about by the later deployment of convolutional and pooling layers, both of which reduce the dimensionality. For multiple sclerosis lesion segmentation, Brosch et al. (8) suggested using a 3-layer convolutional encoder network to anticipate segmentation of the same resolution as the input pictures. Kamnitsas et al. (9) used a deep learning technique to categorize ischemic strokes. Roy and Bandyopadhyay (10) examined Adaptive Network-based Fuzzy Inference System (ANFIS), a suggested method for categorizing cancers into five groups. The Gray-Level Co-Occurrence Matrix (GLCM) was used to obtain characteristics that were used to categorize and segment tumors using pre-trained AlexNet (11). In this research, we used various pre-trained deep learning models to the automatic segmentation of MRI carotid plaque for Stroke risk assessment. Deep learning networks have recently been repeatedly suggested for enhancing segmentation performance in medical imaging. Segmentation performance can be improved by combining segmentation and classification, regression, or registration tasks (12). Proposed methodology For automatic segmentation of MRI scans to detect carotid plaque for stroke risk assessment, there is a need for a computer-aided autonomous framework to classify MRI scans automatically. Deep learning technology has recently permeated several areas of medical study and has taken center stage in modern science and technology (13). Deep learning technology can fully utilize vast amounts of data, automatically learn the features in the data, accurately and rapidly support clinicians in diagnosis, and increase medical efficiency (14). In this research, we proposed a deep learning framework based on transfer learning to detect carotid plaque from MRI scans for stroke risk assessment. We used YOLO V3, Mobile Net, and RCNN pre-trained models, fine-tuned them and adjusted hyperparameters according to our dataset. All experiments in this paper are conducted on Intel(R) Celeron(R) CPU N3150 @ 1.60 GHz. The operating system is Windows 64-bit, Python 3.6.6, TensorFlow deep Learning framework 1.8.0, and CUDA 10.1. The proposed framework to address the mentioned research problem is shown in Figure 1. Data collection and statistics The data of 265 patients were collected from the Second Affiliated Hospital of Fujian Medical University, in which 116 patients have a high risk of plaques, and the remaining 149 patients have a stable condition and have a low chance of plaques. The detailed process and parameters for the data collection are described in the following subsections. 
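A minimal transfer-learning sketch in the spirit of this framework is shown below. It is written against the current tf.keras API rather than the TensorFlow 1.8.0 setup reported above, and the backbone choice (MobileNetV2), input size, frozen layers, dropout rate, and optimizer settings are illustrative assumptions, not the authors' exact configuration.

```python
import tensorflow as tf

def build_plaque_classifier(input_shape=(224, 224, 3), num_classes=2):
    """Fine-tune an ImageNet-pretrained MobileNetV2 for high-/low-risk plaque
    classification (hyperparameters here are illustrative)."""
    base = tf.keras.applications.MobileNetV2(
        include_top=False, weights="imagenet", input_shape=input_shape)
    base.trainable = False                       # freeze the pretrained backbone
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    x = tf.keras.layers.Dropout(0.5)(x)          # dropout layer used against over-fitting
    out = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

# model = build_plaque_classifier()
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```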
Inclusion criteria Carotid artery stenosis detected by ultrasound, CTA, MRA, and other numerical simulations (15) methods needs to be identified; ultrasound and CTA indicate plaque formation on the wall, regardless of whether the patient has clinical symptoms; carotid artery is not found by other imaging examinations Significant stenosis, but clinical symptoms: TIA and cerebral infarction of unknown cause. Magnetic resonance carotid artery scans were performed. Scanning parameters Philips 3.0 T MRI with 8-channel phased array surface coil dedicated for carotid artery assessment. Instruct the patient to lie down, keep calm during the scanning process, avoid swallowing, and place the jaw and neck in the center of the 8-channel phased array surface coil. First, the bilateral carotid arteries were scanned by coronal thin slice T2WI scanning, and the images were reconstructed to obtain the shape and stenosis position of the carotid arteries. The sequence and imaging parameters are as follows: patients' T1WI, T2WI, and 3DTOF sequences were kept consistent, and the images of patients with carotid plaques were selected for further study. The images were post-processed by the MRI-VPD system, and Plaque View software was used to analyze the properties and components of carotid plaques. All analysis and measurement steps were performed independently by three senior radiologists. The above examinations were obtained with the consent of the patients and their families and signed informed consent. The sample dataset is shown in Figure 2. Furthermore, we also show the dataset statistics in the table for better understanding as shown in Table 1. YOLO V3 A deep learning network called YOLO identifies and categorizes objects in the input photos. The object detection task entails locating Frontiers in Cardiovascular Medicine 03 frontiersin.org each object on the input image and classifying it according to the bounding box that surrounds it (16). A single Convolutional Neural Networks (CNNs) architectural model is used in the YOLO deep learning network to concurrently localize the bounding boxes of objects and classify their class labels from all images. The YOLO loss for each box prediction comprises coordinate loss due to the box prediction not covering an object as described in Eq. 1. Where o i is the output value, and t i is the target value. BC Eloss The primary addition here is that YOLO V3 is able to extract more valuable semantic data from the up-sampled features during training. MobileNet The MobileNet model is the first mobile computer vision model for TensorFlow and is designed for mobile applications, as its name suggests. MobileNet uses depth-wise separable convolutions and features filters/ kernels that are D D k ḱ´1 . It significantly lowers the number of parameters when compared to a network with conventional convolutions of the same depth in the nets, and the convolution operation is represented in Eq. 2 The result of this is lightweight deep neural networks. The new architecture requires fewer operations and parameters to accomplish the same filtering and combination procedure as a typical convolution. The entire model architecture and hyperparameter details are displayed in Table 3, where each line represents a sequence of one or more identical layers (modulo stride) repeated n times and an expansion factor of t. Both layers share the output channel number c for the identical sequence. Every sequence starts with a stride, and all subsequent layers also employ a stride. 
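The parameter saving behind the depthwise separable convolution of Eq. (2) is easy to verify with the small sketch below; the layer arrangement and channel sizes are illustrative rather than the exact MobileNet configuration of Table 3.

```python
import tensorflow as tf

def separable_block(x, filters, stride=1):
    """Depthwise 3x3 convolution followed by a pointwise 1x1 convolution --
    the building block MobileNet uses in place of a standard convolution."""
    x = tf.keras.layers.DepthwiseConv2D(3, strides=stride, padding="same", use_bias=False)(x)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.ReLU()(x)
    x = tf.keras.layers.Conv2D(filters, 1, padding="same", use_bias=False)(x)
    x = tf.keras.layers.BatchNormalization()(x)
    return tf.keras.layers.ReLU()(x)

# Parameter count: a standard 3x3 convolution with C_in = 64, C_out = 128 needs
# 3*3*64*128 = 73,728 weights; the separable version needs 3*3*64 + 64*128 = 8,768,
# roughly 8x fewer, which is the lightweight property exploited by MobileNet.
inp = tf.keras.Input((56, 56, 64))
model = tf.keras.Model(inp, separable_block(inp, 128))
```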
All spatial convolutions employ 3×3 kernels. R-CNN The sliding-window paradigm is the foundation of the previous localization strategy for CNN, however, it struggles to achieve acceptable localization precision when working with more convolutional layers. The, the authors suggested using the region paradigm to address the CNN localization issue (17). Three modules make up the R-CNN design principle (1). The first module aims to produce a set of category-independent region recommendations using selective search (18), a search method that combines the best aspects of exhaustive search and segmentation intuitions. One of the best techniques for reducing overfit is increasing the training dataset's size. The training images were automatically resized using an augmented image dataset. Our pre-trained deep learning model avoids over-fitting by using the dropout layer. Results and discussion Global mortality and morbidity are primarily caused by cardiovascular disease (CVD), and 17.9 million fatalities each year globally are attributable to CVD, or 31% of all deaths. Atherosclerosis is the primary factor for CVD and upcoming cardiovascular events (19). The main causes of CVD are atherosclerosis development and plaque production in the vasculature, including the coronary and carotid arteries. Since medical images contain a plethora of information, many automatic segmentation and registration approaches have been investigated and proposed for use in clinical settings. Recently, deep learning technology has been used in various industries to analyze medical images. In this research, we proposed a deep learning framework based on transfer learning to detect MRI scans into a carotid plaque for stroke risk assessment. We used YOLO, Mobile Net, and RCNN pre-trained models, fine-tuned them and adjusted hyperparameters according to our problem. The data of 265 patients were collected from the Second Affiliated Hospital of Fujian Medical University, in which 116 patients have a high risk of plaques, and the remaining 149 patients have a stable condition and have a low risk of plaques. Then, using a random selection approach, we divide the data in the ratio of 70% for training and 30% for the testing set. Our trained YOLO model achieved 94.81% accuracy, RCNN achieved 92.53% accuracy, and Mobile Net achieved 90.23% in identifying carotid plaque from MRI scans for stroke risk assessment. We used accuracy and loss graphs to evaluate the performance of our model. According to our dataset, Figures 3, 4A,B respectively, show the training and validation accuracy and training and validation loss for the YOLO V3 Mobile Net models. Similar to Figure 5, which uses the RCNN model to identify carotid plaque from MRI scans for stroke risk assessment, Figure 5A shows the training loss and training accuracy, and Figure 5B shows the validation loss and validation accuracy. Table 4 lists the classification accuracies in terms of sensitivity and specificity for each pixel in the testing set. Both blinded manual and automated segmentation yield similar results, showing high specificities for all tissue categories and great sensitivity for fibrous tissue. In contrast to the loose matrix, which has very poor sensitivity, necrotic core and calcifications sensitivity is good. This metric is pessimistic for small locations, like the majority of calcifications and confusion matrix, which can mainly cause a slightly lower sensitivity. The segmentation result in Figure 6 serves as an illustration of this observation. 
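The 70/30 random split and the sensitivity/specificity of Table 4 can be reproduced with standard tooling, as sketched below. The label vector is only a placeholder mirroring the 116/149 high-/low-risk patient counts; the real split and evaluation are performed on the patients' images and segmented pixels.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (recall of the positive class) and specificity from a
    binary confusion matrix, the two measures reported in Table 4."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return tp / (tp + fn), tn / (tn + fp)

# Placeholder labels standing in for the 265-patient dataset
# (1 = high-risk plaque, 0 = low-risk/stable).
y = np.array([1] * 116 + [0] * 149)
idx_train, idx_test = train_test_split(
    np.arange(len(y)), test_size=0.30, random_state=42, stratify=y)
# ... train on idx_train, predict y_pred on idx_test, then:
# sens, spec = sensitivity_specificity(y[idx_test], y_pred)
```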
The relationship or trade-off between clinical sensitivity and specificity for each potential cut-off for a test or set of tests is usually A B FIGURE 4 Accuracy and Loss graph using the RCNN. The following segmentation results are displayed on a T2-weighted image. ROC curves for three models. Whereby (A-C) represents the performance of the YOLO V3, MobileNet, and RCNN, respectively, to detect carotid plaque from MRI scans for stroke risk assessment. Our proposed framework Pre-trained models 94.81 Highlight the accuracy of our model. depicted graphically using ROC curves. The performance of two or more diagnostic tests is compared using the ROC curve (20), which is used to evaluate a test's overall diagnostic performance. It is also used to choose the best cut-off value for assessing whether a disease is present. Figure 7 represents the performance of three models by using the ROC curve. Here (a) illustrates the performance of the YOLO V3 model to detect carotid plaque from MRI scans for stroke risk assessment. Similarly, (b) represents the performance of Mobile Net in terms of the confusion matrix, and (c) illustrates the performance of the RCNN model to detect carotid plaque from MRI scans for stroke risk assessment. In this research, we proposed a deep learning framework based on transfer learning to detect carotid plaque from MRI scans for stroke risk assessment. We used to detect carotid plaque from MRI scans for stroke risk assessment pre-trained models, fine-tuned them, and adjusted hyperparameters according to our problem. The proposed framework assists the radiologist in early and accurate carotid plaque detection from MRI scans for stroke risk assessment. Our proposed framework also improves the diagnosis and addresses other challenges in MRI diagnosis due to various issues. Furthermore, we have compared our proposed framework performance with the previously proposed approach shown in Table 5 (23-26). Conclusion In this study, we concluded that deep learning-based methods for stroke risk assessment are the most promising and successful. Our trained YOLO model achieved 94.81% accuracy, RCNN achieved 92.53% accuracy, and Mobile Net achieved 90.23% in identifying carotid plaque from MRI scans for stroke risk assessment. Using accuracy, loss graphs, and ROC curves, we evaluated the performance of our model and found that the suggested framework performed better. Our approach will prevent incorrect diagnoses brought on by poor image quality and personal experience. The evaluations in this work have demonstrated that this methodology produces acceptable results for classifying MRI Frontiers in Cardiovascular Medicine 07 frontiersin.org data. Future applications may employ extreme learning as a more sophisticated classifier for plaque categorization issues. Limitations and future work Deep learning requires a large amount of data to improve performance and avoid over-fitting. It is difficult to acquire medical imaging data of low-incidence serious diseases in general practice. Due to differences in patients and the appearance of the prostate, future work will focus on testing the model with a more extensive data set. The, even though the results of studies have the potential for deep learning associated with different kinds of images, additional studies may need to be carried out clearly and transparently, with database accessibility and reproducibility, in order to develop valuable tools that aid health professionals. 
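For completeness, the ROC/AUC computation behind Figure 7 follows the standard recipe sketched below; the synthetic scores are purely illustrative stand-ins for the three models' predicted probabilities of the high-risk class.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def roc_for_model(y_true, y_score):
    """ROC curve and AUC for one model's predicted probabilities of the
    'high-risk plaque' class (the quantity plotted in Figure 7)."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    return fpr, tpr, auc(fpr, tpr)

# Synthetic example with scores for three models (illustrative only):
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)
for name in ["YOLO V3", "MobileNet", "RCNN"]:
    scores = np.clip(y_true * 0.6 + rng.normal(0.3, 0.25, size=200), 0.0, 1.0)
    fpr, tpr, roc_auc = roc_for_model(y_true, scores)
    print(name, "AUC ~", round(roc_auc, 3))
```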
Data availability statement

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s.

Ethics statement

The studies involving human participants were reviewed and approved by the Second Affiliated Hospital of Fujian Medical University. The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
v3-fos-license
2023-12-16T16:05:07.386Z
2023-12-14T00:00:00.000
266246943
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/10659129231221486", "pdf_hash": "9a16734814cd461486771761cba9012a887b1ce7", "pdf_src": "Sage", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42267", "s2fieldsofstudy": [ "Political Science" ], "sha1": "a3913d470ff21656d8e12e4bef76d9ce4a26a46b", "year": 2024 }
pes2o/s2orc
Correcting Myopia: Effect of Information Provision on Support for Preparedness Policy

Some scholars argue that the public is generally myopic in their attitudes about disaster preparedness spending, because they prefer to spend money on disaster response rather than preparedness, despite the greater cost effectiveness of the latter. Given voters' general lack of policy information, it is possible that limited support for preparedness comes from lack of information about its efficacy. In this paper, we build on these studies by examining how people respond to new information about the effectiveness of policy initiatives in the context of public health and the COVID-19 pandemic. Through two online survey experiments with over 3400 respondents, we demonstrate that information can lead people to update attitudes about preparedness, illustrating the potential for information campaigns to increase support for preparedness policies. Our results suggest that information about the efficacy of preparedness can increase support for these policies, and the information effect exists even for individuals whose prior beliefs were that public health programs were ineffective. These results suggest that information can make people more supportive of preparedness spending, which could provide electoral incentives for its provision. We conclude by providing some directions for future research to enhance our understanding of public opinion and preparedness spending.

Introduction

Previous research suggests that people support elected officials who provide relief spending in the wake of crisis (Bechtel and Hainmueller 2011). Although preparedness spending is significantly more cost effective than relief spending, existing scholarship suggests there is a widespread absence of electoral incentives for these initiatives (Gailmard and Patty 2019; Healy and Malhotra 2009; Stokes 2016). As a result, it is not surprising that federal spending on preparedness lags behind optimal levels, while relief spending has increased over time (Healy and Malhotra 2009). Current literature primarily addresses these patterns related to the onset of disasters and climate change, but as pandemics increasingly threaten communities around the world, it is vital to also understand the conditions under which voters might support preparedness spending to mitigate the effects of crises in public health.

While some scholars argue the public is generally myopic in their attitudes about preparedness spending (Achen and Bartels 2016; Healy and Malhotra 2009), it is possible that limited support for preparedness comes from lack of information about its efficacy (Andrews, Delton, and Kline 2021; Gailmard and Patty 2019; Sainz-Santamaria and Anderson 2013). This raises the question: if people have better information about preparedness, do they become more supportive of these measures?
There is widespread evidence that voters lack basic policy information, and that people are able and willing to revise their beliefs when exposed to new, persuasive information (Carrieri, Paola, and Gioia 2021;Diamond, Bernauer, and Mayer 2020;Hill 2017;Lupia and McCubbins 1998).Across dozens of policy issues experimental evidence suggests that respondents change their beliefs when exposed to persuasive messages (Coppock 2023;Tappin, Berinsky, and Rand 2023).The effect of information on attitudes extends to people's attitudes about emergency preparedness (Bechtel and Mannino 2021), and other work suggests that these attitudes can translate into voting intentions if local news media draw lessons from distant events for their audience (Jamieson and Van Belle 2018;2022). In this paper, we build on these studies by examining how people respond to new information about the effectiveness of policy initiatives in the context of public health preparedness.Through two online survey experiments, we demonstrate that information can lead people to update attitudes about preparedness, illustrating the potential for information campaigns to increase support for preparedness policies. This paper is structured in four further sections.First, we review prior literature about public opinion and electoral incentives for preparedness, demonstrating the areas of debate surrounding the electoral implications of preparedness and relief spending while highlighting the gap in our knowledge about the conditions under which individuals might support preparedness policy.Second, we describe the experimental design we use to test how information about policy effectiveness leads to increased support for preparedness using a national probability sample of 1595 participants recruited from Qualtrics and another sample of 1819 participants recruited via Amazon Mechanical Turk. Next, we present our results which demonstrate that informing participants that better preparedness could have saved lives during the COVID-19 pandemic generates support for increased public health spending.We also examine how the effect of information varies based on prior beliefs about the effectiveness of preparedness spending.Our results suggest that information about the efficacy of preparedness can increase support for these policies, and the information effect exists for individuals who received the COVID-19 treatment even if their prior beliefs were that such programs are ineffective. Finally, we conclude with a brief discussion of the implications of our results for the understanding of public support for preparedness spending, and we provide some directions for future research to enhance our understanding of public opinion and preparedness spending. Literature Review Prior scholarship clusters around two empirical findings related to voter behavior and crises such as disasters.First, previous research suggests that voters reward politicians for their immediate response (Gasper and Reeves 2011;Reeves 2011) and for relief spending after the onset of crises such as disasters (Bechtel and Hainmueller 2011;Fair et al. 2017;Velez and Martin 2013).For example, President Obama's response to Hurricane Sandy appeared to have boosted his approval ratings ahead of the 2012 Presidential election (Velez and Martin 2013).This type of behavior by voters creates incentives for politicians to spend resources on disaster response even though spending on preparedness might save up to $14 for every dollar spent on preparedness (Healy and Malhotra 2009). 
The electoral effects of crisis relief and response seems to be context dependent (Abney and Hill 1966;Boin et al. 2016), and poor government responses could lead to sustained electoral losses (Eriksson 2016;Montjoy and Chervenak 2018;Olson and Gawronski 2010).The public may also respond to disaster damage by becoming less supportive of democratic institutions in the wake of a poor response (Carlin, Love, and Zechmeister 2014).However, others have argued that it might prompt increased participation in elections and consolidate support for the incumbent government if the response is viewed favorably (Fair et al. 2017).Essentially, the findings in the existing literature suggest that the public rewards relief assistance from elected officials as long as it is provided in a timely and sympathetic manner. One reason that preparation spending may not be prioritized is because it is difficult to observe these activities, and voters may not be sufficiently informed to reward elected officials for this spending, even if voters support these policies.This absence of observability could even lead to perverse outcomes if individuals perceive preparedness and mitigation spending as evidence of corruption (Gailmard and Patty 2019). A second finding in previous scholarship is that voters fail to respond to the relative benefits of relief and preparedness spending, preferring concentrated benefits from relief spending after a disaster to more cost-effective policies to mitigate future crises (Achen and Bartels 2016;Andrews, Delton, and Kline 2021;Gailmard and Patty 2019;Healy and Malhotra 2009;Heersink, Peterson, and Jenkins 2017;Stokes 2016).However, we know little about how voters respond when presented with information about cost effectiveness, especially as voters may be more supportive of preparedness spending when they learn about the benefits of these policies (Bechtel and Mannino 2021). In short, while the standard explanation is that voters punish incumbents for adverse events as a function of their myopia (Achen and Bartels 2016;Healy and Malhotra 2009), there remains a gap in our knowledge about how individuals perceive the relative merits of preparedness and relief spending when they are more informed about the costs and benefits of such spending. The existing findings about voter behavior could also be the result of them lacking information about preparedness.This prompts our research question: Do people change their preferences about preparedness when they learn about its potential benefits? Research Design Prior research demonstrates the paucity of citizens' political knowledge, which can make it difficult to form policy preferences that are consistent with their interests (Delli Carpini and Keeter 1996;Galston 2001).Our theoretical approach to investigating if information affects beliefs about disaster preparation and response builds on two related claims about people's knowledge.First, most people have little knowledge of the actions of governments vis-à-vis disaster relief and/or preparedness.Second, people also know very little about the cost effectiveness of disaster preparation versus relief. Based on these assumptions about individuals' knowledge of policy, we expect that providing them with information has the potential to change beliefs about government spending on pandemic preparedness.The reason is relatively straightforward.If people do not know much about preparedness/relief and its effectiveness, then even a small amount of information may change people's preferences. 
To test if information about preparedness affects beliefs and opinions, we designed a survey experiment and recruited a general population sample using Qualtrics between June 29-July 14, 2021 and a convenience sample using Amazon Mechanical Turk from April 4-6, 2022. We recruited two different samples at two different times to help ensure that our findings were not unique to a particular sample or time period. Our total sample consists of 3414 respondents, comprising 1595 from Qualtrics and 1819 from MTurk. In the analyses we report, we combine the respondents together, because separate analyses do not differ substantively from the combined analysis. 1 In each survey, respondents were randomly assigned to one of three different treatments or the control group, leading to about 900 respondents per group. We preregistered our experimental design through the Open Science Foundation. 2

After consenting to complete the survey, respondents answered a variety of demographic questions before we asked them about their opinions regarding public health spending and disaster preparation. In particular, we examined if respondents believe that: public health is a government responsibility, public health spending is ineffective, public health spending could be better used for other purposes, and/or public health spending is unlikely to benefit the respondents. These pre-treatment questions allow us to examine how attitude change depends on prior beliefs, and if respondents have these beliefs (Wehde and Nowlin 2020), then it is straightforward to understand why they would not support public health spending. In particular, if respondents do not believe that disaster/public health preparation spending is effective, then it is not particularly surprising that respondents would not support preparedness policies. We also asked respondents to report their preferred allocation between disaster relief and preparedness and provided some background about each category as both a pre-treatment and post-treatment question (see below for more information about this question), which allows us to examine how attitude change depends on prior beliefs (Clifford, Sheagley, and Piston 2021). After these introductory questions, all respondents were randomly assigned to either the control group or one of three treatment conditions.

Descriptive statistics demonstrate the importance of understanding people's prior beliefs as a possible cause of their attitudes about preparedness spending. Figure 1 presents histograms that visually demonstrate the distribution of responses to questions relating to responsibility for preparedness spending and the effectiveness of preparedness spending, with responses ranging from 1 (strongly disagree and disagree) to 3 (strongly agree and agree). 3 For ease of interpretation, we combined disagree and agree responses to create a trichotomous variable. In short, we find there is a large amount of variation in attitudes about public health spending.

The histograms demonstrate the variation in beliefs about public health spending. Across the four different questions the answers suggest:
• Most participants agreed that it is the government's responsibility to reduce the impact of public health threats.
• Many respondents were concerned that public health funding could be ineffective.
• Responses varied considerably about whether public health funding could be better used elsewhere.
• People's beliefs about whether public health funding might not benefit them were evenly distributed across the three categories.
These descriptive statistics demonstrate that many people have beliefs about public health funding that might relate to their preferences for public health preparation compared to response.Understanding peoples' prior beliefs about public health spending is essential to understanding both why people might not support preparedness spending, and also the types of information that might change people's opinions.If people do not believe that disaster preparedness is effective, then it is hardly surprising that they would not support such spending.Among those who do not support disaster preparation an important distinction is between those who are uninformed about disaster (public health, in our case) spending and those who are informed but unsupportive.The difference between these two reasons for opposition to spending affects what we infer about the reason for people's resistance to disaster preparedness and the type of information that might change their minds to support preparation or politicians who advocate for preparedness. Treatments We designed our treatment vignettes using information that comes from knowledgeable, trustworthy sources so that the conditions for persuasion are met, and the treatments could be expected to affect respondents' beliefs (Lupia and McCubbins 1998).We describe the three treatments and our rationale for each one below. 4 COVID Preparedness Treatment: A report by the National Center for Disaster Preparedness suggests that better preparedness would have prevented about 200,000 deaths from the COVID-19 pandemic in the United States.These estimates are corroborated by other studies. We expect that this treatment will lead to an increase in support for disaster preparedness, because the treatment provides evidence about the effectiveness of preparation.Our pretest questions suggest that many respondents have concerns about public health effectiveness and likely many respondents lack the specific information about better COVID preparation. The limited research about how information affects beliefs about disaster preparedness led us to design an experimental treatment that would give us a good opportunity to identify if information could cause respondents to change their beliefs about pandemic preparation.Our focus in this experiment was not to understand the scope conditions or moderating factors that might affect whether information influences beliefs.As is normal in an experiment, we designed the COVID-19 treatment to have a causal effect, it was by no means guaranteed to be effective.Even a treatment about saving 200,000 lives could fail to affect beliefs.Respondents might believe that COVID is a special case and not think that pandemic preparedness, in general, is effective or worthwhile.In other words, while our treatment claims that past spending would have been effective it does not inform respondents that future spending would be effective-respondents have to make that inference themselves.As such, our treatment is similar to a statement that a politician could make in advocating for greater pandemic spending in general.For the statement to affect beliefs, respondents must change their beliefs about pandemic preparation spending, and it is possible that people believe the COVID treatment but not that pandemic spending is effective/efficient in general. 
Studying pandemic preparation during a pandemic is clearly an unusual context, but it does provide some useful features for our purposes.First, the COVID example seems like a clear case where the United States was not wellprepared for a pandemic, and it is therefore easy to make the case that better preparation would have saved lives.Second, the COVID pandemic had effects nationwide, and therefore it is unlikely that people oppose preparedness spending because they do not see themselves as potentially affected by another pandemic.For example, people in Arkansas are unlikely to be affected by an earthquake and for that reason they might oppose money being spent to retrofit buildings for earthquake resilience.The experiment represents a balance of a treatment designed to identify a causal effect with sufficient realism such that the results speak to realworld situations.The purpose of this treatment is to identify if greater support for preparedness comes from exposure to information about COVID or whether any information about public health spending leads respondents to support greater spending on public health and/or preparedness.We implemented the experiment in the midst of a global pandemic, and our concern was that any information about public health spending might cause respondents to be generally more supportive of public health spending.The CDC treatment serves as a way to test if information about public health spending primes respondents to support more spending for pandemic preparation or if the treatment needs to be specific to a pandemic to induce support for that type of spending. Opioid Treatment A report by the National Safety Council suggests that better preparedness would have prevented about 200,000 deaths from the Opioid crisis in the United States since 2015.These estimates are corroborated by other studies. This treatment was designed to see if support for preparedness extends to different areas other than COVID.The Opioid epidemic differs from COVID in a lot of ways (for example, it seems to be longer lasting, people's individual choices seem more relevant, and it does not spread directly between people), and these differences are good in our context.If even something as different as the Opioid epidemic leads to increased support for public health spending and/or pandemic preparation, then it suggests that respondents are influenced by information about preparedness across a wide variety of possible conditions. Outcome Variables We used a variety of different measures of respondent preferences to gauge the effect of our treatment vignettes.Our goal is to understand if information affects respondents' preferences for public health spending on preparedness and response.Accordingly, as outcome measures, we look at support for increased public health and preparedness spending, and we ask respondents to identify their preferred allocation of a hypothetical budget between public health response and preparedness to better resemble decisionmaking with a budget constraint.We use multiple measures to help ensure that any treatment effects we identify are not unique to a particular outcome measure. 
5 One question we used as both a pre-treatment measure and a post-treatment measure asked respondents to allocate government spending between preparation and response to pandemics. We provided respondents with a short description of both public health preparedness and response to help ensure they had sufficient information to answer the question. In Table 1, we present the background information given to respondents and the budget allocation question we use.

In addition to the question about the tradeoff between pandemic preparation and response, we also asked the following three outcome questions after respondents were exposed to a randomly assigned treatment. For each question the response category was a reverse-coded 7-point Likert scale that ranged from "Strongly Agree" (7) to "Strongly Disagree" (1).
• The federal government should increase public health spending in 2021.
• The federal government should increase pandemic preparedness spending in 2021.
• The federal government should increase public health spending to reduce Opioid-related deaths in 2021.

In an attempt to understand if the treatments were likely to affect voting, we also asked the following question with more likely (1), less likely (−1), or neither more or less likely (0) as the response categories.
• If your Member of Congress votes to increase public health spending in 2021, how would it affect your vote in the 2022 Congressional elections?

We also collected information about demographic variables and other commonly used individual-level factors that could conceivably relate to variation in our outcomes, including information such as sex, age, party ID, political ideology, their prior beliefs about preparedness, income, race/ethnicity, and whether they have been personally affected by the COVID or Opioid crises. 6 Table 2 presents a summary of our hypotheses outlining the expected effects of the treatments on our outcomes of interest.

Results

We turn now to analyzing the effect of our various treatments on how respondents answer the different outcome questions. In short, we show that our treatments affect beliefs, although we also find results that demonstrate nuance in how people think about preparedness policies, with implications for understanding the puzzle underlying relative underinvestment in preparedness policies compared to response.

The Allocation of Preparedness and Response Spending

We first report the results related to the effect of our treatments on preferences for allocation between spending on preparedness or response. In Figure 2, we display the average treatment effect for each of our three treatments using coefficient plots with 95% confidence intervals. To estimate the treatment effects, we used a linear probability model and included controls for whether the respondent was recruited via MTurk or Qualtrics, measures of the personal impact of the COVID-19 pandemic and the Opioid epidemic, prior attitudes about preparedness spending, political trust, party ID, political ideology, political moderation, political interest, income, sex, age, race and ethnicity. 7
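As a hedged sketch of this estimation strategy (not the authors' code), the snippet below fits a linear model with treatment indicators and covariate adjustment and reports 95% confidence intervals for the treatment coefficients; the file name, column names, and covariate list are illustrative assumptions.

```python
# Minimal sketch of average-treatment-effect estimation with covariate
# adjustment. Data file and variable names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_experiment.csv")  # hypothetical file

# Treatment indicators (control group is the omitted category) plus covariates.
formula = (
    "allocation_preparedness ~ covid_treat + opioid_treat + cdc_treat"
    " + mturk + covid_impact + opioid_impact + prior_prep_attitude"
    " + trust + C(party_id) + ideology + interest + income + female + age"
)
model = smf.ols(formula, data=df).fit(cov_type="HC2")  # heteroskedasticity-robust SEs
print(model.summary())

# 95% confidence intervals for the three treatment coefficients,
# analogous to the coefficient plots described for Figure 2.
print(model.conf_int().loc[["covid_treat", "opioid_treat", "cdc_treat"]])
```

Robust (HC2) standard errors are one common choice for linear models with bounded outcomes; the exact specification used in the paper may differ.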
Consistent with our expectations, information about the benefits of public health spending (i.e., the COVID and Opioid treatments) leads to an increase in support for pandemic preparedness. As expected, the CDC treatment does not lead to a significant increase in support for greater allocation to pandemic preparation. The CDC treatment is insignificant at conventional levels, and this result accords with our expectations that the CDC treatment should not affect the allocation between preparation and response. Given that we fielded our experiment in the midst of a global pandemic, we take this as considerable evidence that respondents are not simply primed to support greater pandemic spending by just any information about public health and spending. Rather, the results suggest that opinion change is specific to the treatments that invoke pandemics and preparation, which is consistent with the idea that respondents are learning from the experimental treatments.

Table 1. Background information and budget allocation question shown to respondents:
Public health preparedness refers to actions undertaken prior to the onset of an epidemic/pandemic to enhance the response capacities of individuals and households, organizations, communities, states, and countries. Public health response refers to actions taken at the time of an epidemic/pandemic that are intended to reduce threats to life and/or safety, to care for victims, and to contain secondary hazards and community losses. Because the federal government has scarce resources, it has to decide how much money to spend on epidemic/pandemic preparedness and response actions. In your opinion, what is the correct balance between federal government spending on epidemic/pandemic preparedness and response actions?
• 100% Preparedness, 0% Response (5)
• 75% Preparedness, 25% Response (4)
• 50% Preparedness, 50% Response (3)
• 25% Preparedness, 75% Response (2)
• 0% Preparedness, 100% Response (1)

Increased Public Health Spending

We expected that all three of our treatments would increase support for greater public health spending. Figure 3 plots the estimated treatment effects for each of the three treatments. The results demonstrate that only the COVID-19 treatment leads to more support for greater public health spending. In contrast, neither the Opioid treatment nor the CDC treatment leads to more support for public health spending, which is inconsistent with our expectations that both treatments would lead to more support for public health spending.

Increased Pandemic Preparedness Spending

We next present results related to the effect of our treatments on preferences for pandemic preparedness spending. We expect that the COVID-19 treatment will affect preferences for pandemic spending, but that neither the Opioid nor CDC treatments would affect these preferences. In Figure 4, we display the estimated treatment effects. As expected, the COVID treatment increases support for pandemic spending and the Opioid treatment does not. Contrary to our expectations, the CDC treatment does lead to more support for pandemic preparedness spending. It seems plausible that this effect occurs because the ongoing COVID pandemic makes people more sensitive to the information about the decline in CDC spending; however, since both samples were recruited during the pandemic, we cannot identify if the effect would have varied under different circumstances.
Increased Public Health Spending to Reduce Opioid-Related Deaths

To help us understand if information about the benefits of public health leads to an increase in support for greater spending, we look at the effect of a treatment about the Opioid epidemic. We expect that the Opioid treatment will affect opinions about spending on Opioid-related deaths, but we do not expect the other treatments to affect this outcome. As displayed in Figure 5, we do not find that the Opioid treatment affects responses, and we do find that the COVID treatment affects support for increased spending on efforts to reduce deaths from Opioids. This result is very anomalous given our predictions, and we lack a good explanation for why the COVID treatment seems to affect preferences for spending on Opioid abuse prevention.

We suspect that one reason why the Opioid treatment may not change respondents' beliefs is that Americans largely assign individual responsibility for the Opioid crisis. Previous polling suggests that the majority of people place blame for the crisis on the individuals taking prescription painkillers or the doctors prescribing these medications, and not the government (Sylvester, Haeder, and Callaghan 2022). These prior beliefs may help explain why people do not respond to information about the effectiveness of preparedness spending by increasing support for these policies (Figure 5).

Voting Intentions

While it is important to understand the drivers of support for preparedness spending, government responsiveness ultimately depends on electoral consequences for politicians' support for or opposition to these initiatives. As a result, our final analyses concern support for incumbent Members of Congress based on their support for increased public health spending.

We find mixed evidence in support of our expectations in Figure 6. We find that respondents assigned to the COVID treatment are more likely to vote for an incumbent who proposes increased public health spending. However, we do not find that the other two treatments affect support for incumbents. Again, we cannot be sure if this effect only occurs because the experiment took place in the midst of the COVID pandemic or not. Identifying the conditions under which information about pandemic preparation affects respondents' beliefs requires more research with different treatments that take place under varying conditions.
For ease of interpretation and to give us more observations at various levels of concern about public health spending, we transformed the original variable about respondents' beliefs into a trichotomous scale as follows: Not Concerned (1 and 2), Neither (3), and Concerned (4 and 5), which also matches Figure 1. We interact this variable with our treatments to estimate how treatment effects vary with beliefs about public health spending and display the results in Figure 7.The estimated treatment effects compare those in the control group to those in the COVID-19 treatment group who have similar prior beliefs about public health efficacy.Positive values on the Y-axis indicate that the average response among Dependent Variable: Agreement that the federal government should increase public health spending to reduce Opioid-related deaths (range from strongly disagree (À3) to strongly agree (3).those in the treatment group with those beliefs is greater than the average for the control group. Figure 7 reports these results with 90% confidence intervals.We only display the results from the COVID-19 treatment because we are most interested in its effects.The results for the other treatments appear in the appendix.Each estimated treatment effect is based on a significantly smaller number of observations, because each depends on the respondents with a given category of the public health effectiveness question causing a significant increase in our confidence intervals. Regardless of prior beliefs, the estimated treatment effects are consistently positive.Respondents assigned to the COVID-19 treatment were more supportive of the different outcomes than respondents assigned to the control condition at almost all levels of prior concern about the effectiveness of public health spending.At the same time, the treatment effects were smaller among those most concerned about the efficacy of public health spending, which suggests that while respondents incorporate the information we provided, their prior beliefs also condition the size of the treatment effect. Collectively, these results suggest that respondents' prior beliefs about the effectiveness of public health spending affect how individuals respond to the COVID-19 treatment, but that information can still shift attitudes among even those with prior beliefs that would seem less supportive of public health spending.This is important, because it suggests that the provision of information can cause respondents to update their prior beliefs, and in this case lead to support for public health spending (Hill 2017).These results provide some reason for optimism about the prospects of "correcting" myopia and that people will support disaster preparation after being presented with information about the utility of these policies.From the evidence presented here, pessimistic accounts of voter myopia might be overstated-if voters are given information about the effectiveness of public policy, our results suggest they may support these policies. 
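A minimal sketch of the moderation analysis described in this section (again, not the authors' code): the 5-point efficacy-concern item is collapsed into three levels and interacted with the COVID-19 treatment indicator; all variable and file names are illustrative assumptions.

```python
# Hedged sketch of the treatment-by-prior-beliefs interaction described above.
# Column names and the data file are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_experiment.csv")  # hypothetical file

# Collapse the 5-point item: 1-2 -> "Not concerned", 3 -> "Neither", 4-5 -> "Concerned".
bins = [0, 2, 3, 5]
labels = ["Not concerned", "Neither", "Concerned"]
df["efficacy_concern3"] = pd.cut(df["efficacy_concern"], bins=bins, labels=labels)

# Interact the COVID-19 treatment with the trichotomized prior-belief measure,
# adjusting for covariates, to see how the treatment effect varies by prior beliefs.
model = smf.ols(
    "support_public_health ~ covid_treat * C(efficacy_concern3)"
    " + mturk + C(party_id) + ideology + income + female + age",
    data=df,
).fit(cov_type="HC2")
print(model.summary())
```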
Treatment Effect Heterogeneity by Party Identification Given the politicization of the COVID-19 pandemic and the differing responses to it by the political parties, we also thought it useful to examine whether the treatment effects are meaningfully different by the political party of the respondent.Theories of motivated reasoning suggest that Democrats and Republicans will interpret the treatment differently (Taber and Lodge 2006;2016); however, recent research suggests that Democrats and Republicans actually respond very similarly to a wide variety of experimental treatments designed to test for motivated reasoning (Coppock 2023).To investigate which pattern is more consistent with the data from our experiment we interacted the COVID-19 treatment with a dummy variable indicating if the respondent identifies as a Democrat, Independent or Republican and included various demographic variables to account for the fact that party identification is not randomly assigned.In the context of COVID's politicization we would expect that if motivated reasoning were at play we would observe a positive treatment effect for Democrats and a negative or no treatment effect for Republicans, but we do not have clear expectations for Independents. In Figure 8, we report the estimated effect of the COVID treatment on each of our dependent variables.For all three types of partisans (Democrats on the left, Independents in the middle, and Republicans on the right), we observe a mix of significant and insignificant treatment effects.Interestingly, no treatment has consistently positive effects across all three groups, and every treatment has a null effect for at least one of the partisan groups.These results do not seem consistent with the general idea of motivated reasoning, because we would have expected Republicans to resist all three treatments and therefore have null or negative effects for each outcome; however, we observe a positive treatment effect for both a greater allocation to pandemic preparedness instead of response and for the probability of voting for a candidate who supports pandemic preparedness. In the context of political campaigns and elections, these results suggest that respondents of all political parties may be willing to support greater spending on pandemic preparation if they are informed about its benefits. Learning vs. Priming A possible interpretation of these results is that they reflect priming rather than learning or changing beliefs.The difference between priming and learning has been discussed by a variety of scholars before, typically in the context of campaigns.In this context, priming is said to occur if the information affects the importance people place on an issue and/or changes the criteria that people use to evaluate policies or politicians.On the other hand, learning is said to occur if individuals change their policy positions to match those of a political candidate. Our domain is slightly different in that we are studying opinion about policy issues rather than candidates.One way scholars have tried to empirically discriminate between learning and priming in studying opinions about policy is to look at the different estimated treatment effects based on respondents' prior knowledge (Lergetporer et al. 2018).The intuition is that if learning is at play, then we should observe that the effect of the information is the greatest for those with the least prior knowledge. 
Unfortunately, we did not ask respondents about their knowledge of pandemic preparation so we cannot use the exact same approach to studying learning.Our best take at addressing the priming versus learning issue is to reconsider the results in Figure 7.The x-axis reflects whether respondents are concerned about the efficacy of public health spending.The clearest sign of learning, we think is whether there is a positive treatment effect for those who are concerned with the efficacy of public health spending, because the COVID treatment provides information that is contrary to this group's prior beliefs.Of the four estimated treatment effects in Figure 7, three are significant for the concerned respondents and the fourth treatment effect is positive, but not quite significant.This suggests that learning explains at least part of the estimated treatment effects for this group. At the same time, those not concerned about public health efficacy also exhibit significant treatment effects in three of the four estimates.These effects could certainly reflect learning as the information may be novel and lead them to increase their support for pandemic preparation.At the same time, the estimated effects for these respondents could also indicate priming, because the treatment makes pandemic preparation more salient leading to more support for spending.One difficulty about differentiating between the two explanations is that the two explanations are not mutually exclusive either across or within subjects.A treatment may lead to increased support for public health spending by priming its importance to some respondents or teaching some respondents new information that changes their mind.It may also be possible that for a given subject some portion of their opinion change is related to priming and some related to learning.If we used well-known information that primed subjects but did not inform them of anything, then it would be easier to eliminate learning as an explanation.However, in this case it seems likely that the information treatment did not provide well-known information and therefore both explanations are possible. Yet, to our minds, there remain a number of other reasons why learning seems more likely to explain the results than priming.First, our experiment took place squarely during the COVID-19 pandemic and it therefore seems likely that most respondents would already be attuned to its existence, even if they were unaware of the positive effect of better preparation.Therefore, if priming operates by raising the salience of an issue, then in this context it would seem relatively unlikely because of the baseline salience of COVID already.The COVID treatment was the only treatment with a consistent effect, and given the existence of the pandemic we would not expect to prime people about COVID.However, the information in the treatment is likely novel to most people, suggesting that the treatment effects likely reflect learning.Second, Tesler (2015, 807) argues that "media and campaign content should tend to prime predispositions and change policy positions."The design of our treatment and outcome questions seems to tap into policy positions rather than predispositions, making learning more likely than priming.Third, we do not estimate a significant causal effect of the other treatments on the respective dependent variables (see appendix), which we might expect if there was a general priming effect to these treatments. 
While it is hard to definitively rule out priming in lieu of respondent learning, we believe the case is strongest for learning as an explanation for our empirical results.However, the results certainly raise the need to better understand if priming or learning is most likely given respondents' beliefs and characteristics. Discussion In this paper we argue that one reason why people prefer disaster response to preparation is that they lack information about the efficacy of preparedness, and therefore are unlikely to support spending money on it.Using an online survey experiment conducted on two different samples, we demonstrate that information about the effectiveness of preparedness policies can lead to increased support for preparedness policies, and that this also can translate into voting intentions towards incumbent Members of Congress. The most consistent treatment effects we estimate relate to the COVID pandemic.This does not seem particularly surprising as the COVID pandemic has affected everyone causing major transformations across society, and therefore, learning that better preparation would have saved lives seems incredibly salient.The differential effects of the treatments, however, raise important questions about what type of information affects beliefs and how context affects whether information changes beliefs.One possibility is that the widespread nature of the COVID pandemic has made respondents generally more willing to support pandemic preparedness, and the relatively narrower effects of Opioids do not create sensitivity to public health preparation. A second contribution of this paper is to examine how individuals' beliefs about preparedness relate to our estimated treatment effects.We find that concerns about the effectiveness of preparedness policies are important in shaping how people respond to information about public health spending, but the COVID-19 treatment still seems to increase support for public health spending among those we might expect to be opposed to it. Collectively, the paper suggests reasons that individuals are responsive to information related to preparedness.Our results suggest that popular opposition to preparedness is not immutable, but information campaigns to inform the electorate could lead to greater support for policies that have the potential to reduce harm from public health threats.Further research should build on the results of this study, our findings suggest voters might be uninformed, but they are not necessarily as myopic, irrational, or short-sighted as commonly believed. Declaration of Conflicting Interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. 4. Full details of our pre-registered hypotheses and the rationale behind them are presented in Appendix A. These treatments were developed in 2021 and reflect knowledge at that time. 5. We refer to questions relating to 2021 in this section, but we updated these questions and asked participants about their responses to actions in 2022 for the MTurk study.6.The full text of the protocols is presented in Appendix B in the Supplementary Materials including coding of variables, and descriptive statistics are reported in Appendix C. 7. 
Full results, including coefficients for treatment-only models and the full models with covariates used in the main body of the paper, are presented in Appendix D in the Supplementary Materials available at https://prq.sagepub.com.
8. All regression models adjust for a host of demographic variables and opinions about the role of government in responding to pandemics and COVID.

CDC Spending Treatment: According to a report by Trust for America's Health, federal spending on the Centers for Disease Control and Prevention (CDC) decreased by about 10% between 2010 and 2019. These estimates are corroborated by other studies.

Figure 1. Histograms of beliefs about public health preparedness spending.
Figure 2. Average treatment effects on the allocation of preparedness and response spending.
Figure 3. Average treatment effects on support for increased public health spending. Dependent Variable: Agreement that the federal government should increase public health spending (range from strongly disagree (−3) to strongly agree (3)).
Figure 4. Average treatment effects on support for increased pandemic preparedness spending. Dependent Variable: Agreement that the federal government should increase pandemic preparedness spending (range from strongly disagree (−3) to strongly agree (3)).
Figure 5. Average treatment effects on support for increased federal public health spending to reduce Opioid-related deaths.
Figure 6. Average treatment effects on voting intentions. Dependent Variable: If your Member of Congress votes to increase public health spending, how would it affect your vote in the 2022 Congressional elections? [Note: −1 = Less likely to vote for the incumbent; 0 = Neither more or less likely to vote for the incumbent; 1 = More likely to vote for the incumbent.]
Figure 7. Relationship between beliefs about public health effectiveness and treatment effects.
Figure 8. Relationship between party ID and estimated treatment effects.
Table 1. Background Information and Budget Allocation Questions.
Table 2. Summary of Our Hypotheses.
v3-fos-license
2022-01-05T14:23:45.837Z
2022-01-05T00:00:00.000
245672090
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fnagi.2021.766267/pdf", "pdf_hash": "c126071061d0411c5de2020a94df8160ef0d7c67", "pdf_src": "Frontier", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42270", "s2fieldsofstudy": [ "Medicine", "Biology" ], "sha1": "c126071061d0411c5de2020a94df8160ef0d7c67", "year": 2022 }
pes2o/s2orc
Aged Microglia in Neurodegenerative Diseases: Microglia Lifespan and Culture Methods Microglia have been recognized as macrophages of the central nervous system (CNS) that are regarded as a culprit of neuroinflammation in neurodegenerative diseases. Thus, microglia have been considered as a cell that should be suppressed for maintaining a homeostatic CNS environment. However, microglia ontogeny, fate, heterogeneity, and their function in health and disease have been defined better with advances in single-cell and imaging technologies, and how to maintain homeostatic microglial function has become an emerging issue for targeting neurodegenerative diseases. Microglia are long-lived cells of yolk sac origin and have limited repopulating capacity. So, microglial perturbation in their lifespan is associated with not only neurodevelopmental disorders but also neurodegenerative diseases with aging. Considering that microglia are long-lived cells and may lose their functional capacity as they age, we can expect that aged microglia contribute to various neurodegenerative diseases. Thus, understanding microglial development and aging may represent an opportunity for clarifying CNS disease mechanisms and developing novel therapies. INTRODUCTION Microglia were recognized as a type of connective tissue or passive bystander of the central nervous system (CNS) physiology for a century since their discovery by Pio del Rio Hortega in 1919. Nowadays, microglia are defined as multifunctional cells that communicate with the peripheral system as well as other CNS cells, such as neurons, astrocytes, and oligodendrocytes, in physiological states. In addition, microglia are not considered as just spectators in CNS pathologies and have been found to play roles as determinants of diseases. Microglia Development and Specific New Markers Microglia are the primary innate immune cells located within CNS parenchyma and have different unique signature genes from other CNS macrophages, such as perivascular macrophage, meningeal macrophage, and choroid plexus macrophage (Li and Barres, 2018). In addition, microglia develop in a stepwise fashion (Matcovitch-Natan et al., 2016), indicating that prenatal and postnatal microglia are different from adult microglia. One of the reasons microglia can have unique signature genes is associated with their origin and developmental process. Owing to phenotypic similarities to dendritic cells (DCs) and peripheral monocytes/macrophages, the origin of microglia was presumed to be of hematopoietic origin. In fact, many studies have supported this speculation, showing that irradiationinduced myeloablation facilitates infiltration of Ly-6C hi CCR2 + monocytes into CNS (Mildner et al., 2007;Varvel et al., 2012). These data suggest that peripheral monocytes can migrate to CNS parenchyma and settle down with morphological similarities to microglia. However, this occurred only under factitious conditions, and there was no way to confirm whether the engrafted monocytes are truly resident microglia. Meanwhile, Ginhoux et al. (2010) clearly proved that microglia were derived from embryonic yolk sac during development by using in vivo fate mapping approach for yolk-sac-derived cells. This observation was confirmed again by Schulz et al. (2012), who showed that Myb −/− mice had a normal number of microglia but were deficient in hematopoietic-derived monocytes/macrophages. 
Conclusively, microglia are derived from the first wave of hematopoiesis in the yolk sac and not from postnatal hematopoiesis (Ajami et al., 2007). To summarize the overall development of microglia, microglia precursors derived from yolk sac migrate to the brain parenchyma at embryonic day 8.5 (E8.5) in mice (Nayak et al., 2014) and gestational week 12-13 (GW12-13) (Lloyd et al., 2017) in humans and then proliferate and acquire the ramified form through their developmental program, resulting in having their signature genes (developing microglia column in Figure 1). The mature form of microglia contributes to CNS homeostasis by interacting with almost all CNS components as well as the peripheral immune system. In healthy state, microglia dynamically survey the surrounding environment and maintain steady, region-specific densities by self-renewal. After confirming microglial origin, essential factors for microglia development and maintenance have been suggested in various mutant mice (Li and Barres, 2018). Macrophage colony-stimulating factor (M-CSF or CSF1) is a hematopoietic growth factor produced by endothelial cells, microglia, oligodendrocytes, and astrocytes in CNS and induces differentiation, proliferation, and maturation of macrophages. CSF1 receptor (CSF1R), a receptor tyrosine kinase with two cognate ligands [CSF-1 and interleukin-34 (IL-34)], regulates tissue macrophage homeostasis (Chitu et al., 2016). Recent studies in mice have revealed that CSF1R signaling contributes to the development and maintenance of the microglial population. In CSF1R-mutant mice, yolk-sac macrophages were absent and microglia colonization failed to occur (Oosterhof et al., 2018). Similarly, PLX3397 (CSF1R inhibitor) administration for 7 days eliminates >90% of microglia in adult mice (Elmore et al., 2014). Consistently, microglia-specific Csf1r knockout (KO) mice also showed loss of microglia (Buttgereit et al., 2016). Interleukin-34 ablation in neuronal progenitors led to the loss of gray matter microglia in a selective, dose-dependent manner (Badimon et al., 2020). In addition, reports have shown that the development of microglia also relies on the transcription factors interferon regulatory factor 8 (IRF8) and PU.1 (Kierdorf et al., 2013). The cytokine transforming growth factor-β (TGF-β), known as an anti-inflammatory cytokine, is another important factor in the development of microglia and in maintaining the homeostatic function of microglia (Butovsky et al., 2014). The clarified microglial origin indicates that microglia can have their unique characters distinct from the neuroectodermal origin of other CNS cells. Nevertheless, researchers had no means to detect resident microglia, except for markers such as ionized calcium-binding adaptor molecule (Iba-1), fractalkine receptor [CX3C chemokine receptor 1 (CX3CR1)], and CD11b that are also expressed by CNS-engrafted monocytes/macrophages (Kettenmann et al., 2011). Thus, the absence of microglia-specific markers made it difficult to interpret the role of microglia under the complex neuroinflammatory environment formed by a mixture of microglia and CNS-engrafted monocytes/macrophages. For example, it was almost impossible to distinguish whether core cells of the neuroinflammatory response in multiple sclerosis were resident microglia or engrafted immune cells that permit infiltration of peripheral immune cells into CNS parenchyma. 
Considering that neuroinflammation is not uniform and has a diverse status, the aforementioned conundrum also applies to other neurological disorders, such as stroke, Alzheimer's disease (AD), Parkinson's disease, amyotrophic lateral sclerosis (ALS), and psychiatric disorders (Rezai-Zadeh et al., 2009; Takahashi et al., 2016; Yang et al., 2020). The abovementioned concerns have been resolved by the discovery of purinergic receptor P2Y12 (P2RY12) in 2014 (Butovsky et al., 2014) and transmembrane protein 119 (TMEM119) in 2016 (Bennett et al., 2016). Before the discovery of these markers, morphological distinctions, relative marker expression (of the common leukocyte antigen CD45 hi/lo) by flow cytometry, or generating bone marrow (BM) chimeras were used to distinguish microglia from engrafted CNS macrophages and peripheral monocytes (Lassmann et al., 1993; Ford et al., 1995). These techniques presented inherent limitations, as these are not unique markers and the chimera model leads to partial chimerism, requiring much effort and time. Meanwhile, with advanced techniques, such as genetics, imaging, mass spectrometry, single-cell technologies, and transcriptome analysis, it has become possible to elucidate the heterogeneity and functional role of microglia in mice and humans. Examples include the establishment of CX3CR1CreER mouse lines to target microglia (Yona et al., 2013), the identification of a microglia-specific signature using transcriptome analysis (Beutner et al., 2013; Butovsky et al., 2014), and the identification of microglial heterogeneity and subpopulations depending on brain region (Grabert et al., 2016), sex (Guneykaya et al., 2018; Villa et al., 2018), and neurodegenerative disease using single-cell analysis of murine and human microglia. Regarding validation of microglia-specific antibodies, P2RY12 immunoreactivity (IR) for resident microglia was not colocalized with green fluorescent protein (GFP)-tagged infiltrated monocytes in the experimental autoimmune encephalomyelitis (EAE) model, which features demyelinating pathology driven by infiltrated immune cells (Butovsky et al., 2014). In mutant superoxide dismutase 1 (mSOD1) mice, P2RY12 IR was rarely detected in the end stage of disease, although many Iba-1-positive cells were co-localized with increased inducible nitric oxide synthase (iNOS) in the spinal cord. Another microglia-specific marker, TMEM119, plays a key role in the validation of microglial cell models as a signature gene that is expressed only in adult microglia (Bennett et al., 2016).

FIGURE 1 | Microglial development, repopulation, and aging. Microglia progenitors originate from the yolk sac and migrate to the brain parenchyma through the head neuroepithelial layer, and then proliferate and acquire the ramified form with adult microglial signatures. TMEM119, P2RY12, CX3CR1, and HEXB are signature genes of adult microglia, and TGF-β plays a crucial role in their maintenance. With aging, microglial loss can occur, and this loss might be replaced by microglial proliferation or by infiltrated macrophages distinct from the yolk-sac origin of homeostatic microglia. However, owing to limited repopulating capacity, it is speculated that aged microglia accumulate in the aged brain, leading to an overall work overload due to relatively dysfunctional aged microglia. Epigenetics might be involved in the alteration of homeostatic microglial genes in aged microglia.
After confirming microglia-specific markers, cell models closer to adult microglia, which can also be known as "microglia-like cells, " have been reported (adult microglia column in Figure 1). This issue is going to be introduced in section "in vitro Methods for Aged Microglia Study." Collectively, these newly derived markers raise the fundamental question about the role of resident microglia during neuroinflammation and disease progression, and this issue is going to be discussed in section "Aged Microglia in Neurodegenerative Diseases." The Origin of Repopulated Microglia As mentioned above, microglia originate from the yolk sac and not the neuroectoderm. Therefore, we have no choice but to ask the next question. How are microglia replaced if microglial cells are depleted by aging or other stimuli? The possible hypothesis that microglia loss would be supplemented by BM-derived monocytes seems to be reasonable. Acute microglia depletion by pharmacological inhibition via CSF1R antagonist induced peripheral monocytes infiltration into CNS without blood-brain barrier (BBB) breakdown, and the infiltrated monocytes showed functional behavior like resident microglia, although transcriptome analysis revealed that the replaced cells did not have the same signature genes as that of resident microglia (Cronk et al., 2018). However, there is a report that acute depletion of microglia could be repopulated by the proliferation of residual microglia rather than de novo microglia progenitor differentiation, nestin-positive cells, or peripheral monocytes/macrophages (Huang et al., 2018). However, this study also indicated that transcriptomes of the repopulated microglia were distinct from resident microglia. The debate on the origin of repopulated microglia is still ongoing, but two major studies have a common finding that the repopulated microglia are not the same as yolk-sac-derived resident microglia. This finding is very important since microglia aging may be associated with various neurological disorders (aged microglia column in Figure 1) (Spittau, 2017;Angelova and Brown, 2019). In addition, a heterogeneous population of aged microglia might require a stratified targeting for microglia rejuvenation strategy. Microglia Lifespan and Limited Repopulation Microglia are long-lived cells, and their activities may be dysregulated as they age. In addition, microglia are not replenished by circulating monocytes under homeostatic conditions (Mildner et al., 2007). As mentioned above, microglia can be replenished by repopulation when depleted, but the repopulated microglia are not the same transcriptionally as previous resident microglia. Thus, microglia lifespan is a crucial point in understanding the pathophysiology of neurological disorders. Previously, an indirect study to establish chimerism in circulating BM-derived precursors suggested that microglia lived long in healthy CNS through much of the lifespan of an animal (Mildner et al., 2007). More recently, by in vivo single-cell imaging, it was found that the median lifetime of neocorticalresident microglia was over 15 months, and approximately half of total microglia survived the entire mouse lifespan, suggesting that microglia are long-lived cells and microglial replenishment may be less required relatively than other CNS cells (Fuger et al., 2017). In humans, an indirect method referring to the 14 C atmospheric curve was used to analyze the lifespan and turnover of microglia. 
Human microglia were renewed at a median rate of 28% per year, and their average lifespan was 4.2 years. Most of the microglial population (96%) was renewed throughout life, suggesting that the microglial population in the human brain is maintained by persistent slow turnover throughout adult life (Reu et al., 2017). Thus, the persistence of individual microglia throughout life helps explain how microglial aging may contribute to age-related neurodegenerative diseases. Is endless repopulation of adult microglia possible? Adult microglia can be depleted by about 90% with a CSF1R inhibitor in mice. Once microglia are acutely depleted, withdrawal of the inhibitor promotes repopulation of microglia throughout the CNS, and greater depletion results in more rapid repopulation (Najafi et al., 2018). Interestingly, this study found that the recovery time was gradually extended as the depletion was repeated, indicating a possibly limited capacity for microglial repopulation. Thus, maintaining yolk-sac-derived microglia in a healthy state for a long time could be a good strategy to prevent age-related neurodegenerative diseases. Aged Microglia in Neurodegenerative Diseases Aging is associated with an altered inflammatory status in the brain as well as systemically. As the CNS ages, microglial morphology and number also change. Aged human microglia demonstrate dystrophic morphologies relative to young microglia, including fragmentation of residual processes, less branching, deramified dendritic arbors, and cytoplasmic beading, depending on the region observed (Streit et al., 2004). Dystrophic microglia contrast morphologically and functionally with dark microglia, which show condensation of the cytoplasm and nucleoplasm, cytoplasmic shrinkage, dilation of the Golgi apparatus and endoplasmic reticulum, highly ramified morphology, and increased phagocytosis (Bisht et al., 2016). Along with these morphological changes, homeostatic microglial functions decline with aging. Homeostasis is defined as relative constancy of a set point formed under certain conditions, and maintaining homeostatic microglial function means working to restore a set point that deviates as the CNS environment ages (Deczkowska et al., 2018). Homeostatic microglial function thus denotes a timely, appropriate response at each stage of life. Because an excessive or tolerant microglial response can interrupt tissue restoration after CNS damage, the transition from homeostatic function in the steady state to an immune-modulating mode under pathological conditions must be tightly regulated. Hence, microglial immune checkpoints, a set of control mechanisms that prevent uncontrolled microglial responses, have been proposed (Deczkowska et al., 2018). CX3CR1, also known as the fractalkine receptor, is a transmembrane chemokine receptor involved in leukocyte migration and is expressed on monocytes, DCs, and microglia (Harrison et al., 1998). CX3CL1, the ligand of CX3CR1, is expressed on the neuronal surface or released as an active soluble form from specific neurons. Tight regulation between neuronal CX3CL1 and microglial CX3CR1 controls the microglial functional phenotype and prevents hyperactivation in an inflammatory environment. For example, CX3CR1-deficient mice showed an exaggerated response to lipopolysaccharide (LPS) stimulation in the CNS, with microglial neurotoxicity and enhanced neuronal death (Cardona et al., 2006b).
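Returning to the human turnover figures quoted at the start of this passage (28% renewal per year, an average lifespan of about 4.2 years, and roughly 96% of the population renewed over a lifetime), the short sketch below is a back-of-the-envelope consistency check. It assumes a simple constant-rate (exponential) turnover model and a 60-year adult span; both are our own simplifications for intuition, not the 14C-based method used in the cited study.

```python
# Back-of-the-envelope check under an assumed constant-rate (exponential) turnover model.
# The 28%/year renewal rate comes from the text; everything else is an illustrative assumption.
import math

renewal_rate = 0.28                      # fraction of the microglial pool renewed per year
mean_lifespan = 1 / renewal_rate         # expected cell lifespan under constant-rate turnover
print(f"implied mean lifespan: {mean_lifespan:.1f} years (reported: ~4.2 years)")

adult_years = 60                         # assumed adult lifespan, for illustration only
surviving_fraction = math.exp(-renewal_rate * adult_years)
print(f"original cells remaining after {adult_years} years: {surviving_fraction:.2e}")
# Under this crude model essentially the whole pool turns over across adult life,
# in line with the report that most (~96%) of human microglia are renewed throughout life.
```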
CD200 receptor (CD200R) on microglia also interacts with neighboring cells, including neurons, astrocytes, oligodendrocytes, and endothelial cells, through their CD200 ligand; this has likewise been suggested as a mechanism for attenuating microglial activation, primarily under inflammatory conditions (Walker and Lue, 2013). Another homeostatic transcriptional regulator of microglia is TGF-β, produced by astrocytes and microglia at high levels in the healthy CNS (Butovsky and Weiner, 2018). TGF-β knockout microglia show an aberrant immune-activated signature, increased neuronal death, reduced synaptic plasticity, and late-onset motor deficits (Brionne et al., 2003). Transcription factors (MafB, Mef2C, and Sall1) and the methylated-DNA-binding repressor MeCP2 are also involved in controlling microglial immune activity. Congenital disruption of the MafB gene in microglia induced enhanced inflammation in adult mice (Matcovitch-Natan et al., 2016). Mef2C, which is expressed in microglia, limited microglial immune activation in response to pro-inflammatory perturbations (Potthoff and Olson, 2007). MeCP2 also regulates the immune response to tumor necrosis factor (TNF) (Cronk et al., 2015). Sall1, which controls the transcriptional signature of microglia, regulates microglial identity and physiological features in the CNS (Buttgereit et al., 2016). In that study, Sall1 deficiency in microglia induced their activation and disturbed adult hippocampal neurogenesis. As described above, changes in these immune checkpoints can affect the microglial homeostatic function that such checkpoint mechanisms orchestrate throughout life. Interestingly, microglial immune checkpoints are distorted with aging, indicating dysregulation of homeostatic microglial function (Deczkowska et al., 2018). Aged microglia display increased immune vigilance (high expression of immunoreceptors and an inflammatory secretome) along with dysregulated phagocytosis (Grabert et al., 2016). The increased release of neurotoxic substances and the reduced ability to phagocytose debris and toxic protein aggregates in dystrophic microglia leave neurons vulnerable. Insufficient phagocytic activity of aged microglia toward apoptotic bodies, misfolded protein aggregates, and myelin might result in the gradual accumulation of potentially toxic compounds, a hallmark of age-related neurodegenerative diseases (Safaiyan et al., 2016; Galloway et al., 2019; Damisah et al., 2020). The cause of this phenotypic shift in aged microglia appears to be related to changes in the microglial homeostatic gene profile. Indeed, microglia directly isolated from aged human brain also support the observation that aged human microglia exhibit downregulated TGF-β signaling in Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis (Olah et al., 2018). This report suggests that diminished TGF-β signaling highlights the perturbation of homeostatic programs as microglia activate reactive pathways in response to aging-related changes such as the accumulation of amyloid pathology. Another study using cortical microglia purified from postmortem human samples clearly demonstrated aged-microglia-associated gene profiles involving, for example, the cell surface receptor P2RY12 and cell adhesion molecules (Galatro et al., 2017). The microglial functional phenotype can be regulated by TGF-β produced by astrocytes and neurons, among other cells (von Bernhardi and Ramirez, 2001; Chen et al., 2002; Herrera-Molina and von Bernhardi, 2005).
TGF-β promotes phagocytosis and neuronal protection through a Smad3-mediated mechanism in microglia (Tichauer et al., 2014; von Bernhardi et al., 2015). Thus, aging or loss of TGF-β-releasing cells might affect microglial TGF-β signaling and homeostatic gene expression. Changes in gene expression in aged microglia may also reflect epigenetic changes. Microglial plasticity can be controlled epigenetically (Cheray and Joseph, 2018), and aged microglia show upregulated IL-1β gene expression caused by hypomethylation of CpG sites on the IL-1β proximal promoter (Matt et al., 2016). Similarly, a unique epigenome and transcriptome can define the phenotype of microglia in aging, including changes in homeostatic microglial genes such as TGF-β, CX3CR1, and P2RY12. Given the extreme longevity (Fuger et al., 2017) and limited repopulation capacity (Najafi et al., 2018) of microglia, turnover of aged microglia does not reset the pro-inflammatory phenotype in the aged CNS microenvironment (O'Neil et al., 2018). In addition, the homeostatic microglial population gradually decreases with aging (Niraula et al., 2017), leading to work overload for the remaining microglia. Thus, the intrinsic dysfunction of aged microglia is closely related to neurodegenerative disease. "Dystrophic microglia" refers to microglia showing age-related morphological changes (Streit and Xue, 2014); they have been detected at the periphery of tau and amyloid pathology in the brains of patients with AD and, likewise, near Lewy bodies in the brains of patients with dementia with Lewy bodies (Streit and Xue, 2016; Shahidehpour et al., 2021). In particular, microglial activation occurs at the early stages of AD and then subsides; at late stages, microglia become senescent/dystrophic and less responsive to stimuli (Graeber and Streit, 2010). Histopathological findings from 19 cases with AD pathology indicate that aging-related microglial degeneration, rather than microglial activation, might contribute to the onset of AD (Streit et al., 2009). In fact, factors released by aged microglia disturb the clearance of apoptotic bodies and of aggregated α-synuclein, thus aggravating disease progression (Angelova and Brown, 2019). In mSOD1 mice, a familial ALS mouse model, microglia were involved in inflammatory reactions at the early stage but exhibited a tolerant, dystrophic form that does not function properly at the end stage of disease progression, with disappearance of P2RY12 IR despite increased Iba-1 IR in the spinal cord (Butovsky et al., 2015). Similarly, mSOD1 microglia acutely isolated during the symptomatic period showed β-galactosidase activity as well as elevation of p16, matrix metalloproteinase-1 (MMP-1), p53, and nitrotyrosine with a large, flat morphology, suggesting a senescence-associated secretory phenotype (SASP) (Trias et al., 2019). Chronic amyloid β exposure induced microglial impairment with immune tolerance, which was associated with microglial metabolic defects (downregulation of mTOR-glycolysis pathways) (Baik et al., 2019). Chronic stress, an aggravating factor in AD and a risk factor for mood disorders, also sensitized microglia toward a primed phenotype in the acute stage and subsequently led to dystrophic morphology depending on stress duration in mice, suggesting that chronic depression may be associated with dystrophic microglia (Kreisel et al., 2014).
At this point, one question is why so many studies have suggested that inflammatory activation of microglia is the main culprit in neurodegenerative diseases, even though microglia adopt a dystrophic morphology and lose their homeostatic genes. One of the main reasons involves older microglial markers such as Iba-1 and CD11b, which cannot discriminate resident microglia from infiltrated monocytes/macrophages; microglial signature genes, including P2RY12 and TMEM119, were only established after 2014, as mentioned above. Thus, papers published before 2014 may have attributed the neuroinflammatory response to resident microglia when in fact the Iba-1-positive (or otherwise marker-positive) cells they detected included both infiltrated monocytes/macrophages and resident microglia in neurodegenerative diseases with BBB breakdown. Another possible reason is related to the immature features of fetal or neonatal microglial cells, which are widely used as in vitro surrogates. Single-cell analysis across developmental states clearly showed that fetal/neonatal microglial cells have a different signature from acutely isolated adult microglia in mice (Matcovitch-Natan et al., 2016), and microglial cell lines, as well as fetal/neonatal microglial cells, rarely express adult microglial signature genes (Butovsky et al., 2014). Regarding functional character, acutely isolated microglia from post-mortem human brain tissue showed a more tightly regulated phenotypic change in response to an inflammatory environment of LPS and interferon-γ (IFN-γ) than neonatal/fetal microglia did (Melief et al., 2012). Thus, we cannot exclude the possibility that immature microglial cells present more dynamic inflammatory reactions to inflammatory stimuli than actual adult microglia do. Based on these reports, responsibility for neuroinflammation in neurodegenerative diseases cannot be assigned solely to yolk-sac-derived homeostatic microglia rather than to infiltrated monocytes/macrophages, because dystrophic and tolerant microglia are also observed in most neurodegenerative diseases (Streit et al., 2009; Varvel et al., 2016; Karlen et al., 2018; Sevenich, 2018). FUNCTIONAL CHANGE IN AGED MICROGLIA Along with the change in the microglial transcriptome with aging, microglial functions such as phagocytosis, synaptic pruning, migration, and cytokine release in response to stimuli can also decline or become dysregulated, reducing supportive and protective capacity. Microglia are remarkably versatile, and their functions collectively maintain a homeostatic environment; microglial dysfunction has been linked to neurodegenerative diseases. Live imaging of retinal microglia in young and aged mice revealed that aged microglia showed slower process motility in the homeostatic state and a slower migratory response to laser-induced focal tissue damage (Damani et al., 2011). In addition, aged retinal microglia exhibited a sustained inflammatory response and defective phagocytosis (Damani et al., 2011). Aged microglia exhibited a heightened and prolonged response to inflammatory stimuli and showed a blunted response to IL-4, suggesting a reduced repair mechanism (Fenn et al., 2014). Furthermore, defective phagocytosis of myelin debris by aged microglia led to impaired remyelination (Rawji et al., 2020).
Ex vivo cultured microglia isolated from the brains of aged mice constitutively secreted greater amounts of pro-inflammatory cytokines, such as TNF-α and IL-6, and exhibited less Aβ phagocytosis, leading to a higher amyloid burden (Njie et al., 2012). Proteomic analysis of aged microglia isolated with CD11b magnetic beads showed that aged microglia exhibit disrupted chromatin remodeling, loss of nuclear architecture, and impaired RNA processing (Flowers et al., 2017). In this study, aged microglia showed a bioenergetic shift from glucose to fatty acid utilization, consistent with the finding that restoring defective glycolytic metabolism could be a target for boosting the tolerant microglia induced by chronic amyloid β exposure (Baik et al., 2019). A recent study demonstrated that aged microglia are not uniform throughout the brain but show region-dependent transcriptomic diversity, indicating differential susceptibility to aging factors (Grabert et al., 2016). Considering that microglial phagocytosis contributes to the clearance of aberrant proteins (amyloid β, apolipoprotein E, and α-synuclein) and damaged neuronal debris, as well as to synaptic stripping and remodeling for CNS homeostasis (Wake et al., 2013), the decrease in phagocytic function with aging potentially has a direct link to increased susceptibility to the progression of neurodegenerative diseases. Signaling between CX3CL1 and its receptor CX3CR1 is critical for microglial synaptic pruning, phagocytosis, and migration in the adult brain; however, in the aged brain, their expression levels are decreased (Wynne et al., 2010; Deczkowska et al., 2018). In contrast, hallmarks of microglial activation, such as major histocompatibility complex II (MHC II), CD86, Toll-like receptors (TLRs), and nucleotide oligomerization domain (NOD)-like receptors (NLRs), increase with age (Patterson, 2015). Age-dependent microglial dysfunction might be enhanced by the loss of endogenous TGF-β1, which helps maintain mitochondrial homeostasis. TGF-β1 induces microglial phagocytosis of apoptotic cells via Mfge8 expression (Spittau et al., 2015). Microglial priming refers to a stronger response to a second inflammatory stimulus than that of stimulus-naïve microglia (Perry and Holmes, 2014). The exaggerated response to toxic stimuli, such as LPS, has been considered a "primed state" of microglia, with overproduction of pro-inflammatory cytokines or decreased anti-inflammatory factors. The "primed state" indicates a phenotypic shift of microglial cells toward greater sensitization, responding to an additional stimulus more rapidly, for longer, and to a greater degree than would be expected if non-primed (Harry, 2013). This exaggerated inflammatory response can compromise processes critical for optimal cognitive functioning. For example, IL-1β production in the aged brain interfered with hippocampus-dependent memory systems and synaptic plasticity processes via disruption of brain-derived neurotrophic factor (BDNF) function (Norden and Godbout, 2013; Patterson, 2015). In addition, when aged mice received an intraperitoneal injection of LPS or Escherichia coli, IL-1β production was significantly higher and lasted longer than in young mice (Godbout et al., 2005; Barrientos et al., 2009). In summary, aged microglia are in a primed state and show an exaggerated response to inflammatory stimuli. In addition, aged microglia respond slowly to toxic stimuli, lose dynamic surveillance features, and exhibit reduced phagocytic function.
These results were derived from live imaging of CX3CR1-driven GFP-tagged microglia; tissue staining with microglial markers, such as Iba-1 or CD11b, in aged mice; and microglia acutely isolated with CD11b magnetic beads or Percoll gradients. However, these methods cannot isolate perfectly pure resident microglia distinct from infiltrated monocytes/macrophages or from CNS macrophages located in the choroid plexus, meninges, and perivascular space, and the amount of contamination cannot be determined. In addition, an advanced dynamic contrast-enhanced magnetic resonance imaging protocol with high spatial and temporal resolution quantified regional BBB permeability in the living human brain and found age-dependent BBB breakdown in the CA1 and dentate gyrus subdivisions of the hippocampus, supporting infiltration of macrophages and monocytes into the CNS parenchyma (Montagne et al., 2015). Furthermore, we cannot discriminate whether the heightened inflammatory response to peripheral LPS injection is due to aged microglia or to infiltrated macrophages/monocytes, because peripheral LPS injection induces BBB disruption (Banks et al., 2015). Indeed, highly pure microglia (CD11b-high/CD45-intermediate) isolated from the human parietal cortex by fluorescence-activated cell sorting (FACS), with elimination of meningeal macrophages, indicated that microglia of physiologically aged mice do not recapitulate the effect of aging on human microglia, and the top 100 differentially expressed genes in aged human microglia were related more to actin cytoskeleton-associated genes, sensome cell surface receptors, and cell adhesion molecules than to inflammatory cytokines. These results suggest that a decline in fine microglial processes, such as surveillance motility, together with perturbed microglial migration and reduced phagocytic efficiency, may be associated with age-related neurodegeneration (Galatro et al., 2017). In a mouse model of telomere shortening (mTerc−/−), peripheral LPS injection appeared to enhance the pro-inflammatory response of mTerc−/− microglia, but the enhanced inflammatory response was not accompanied by aged-microglia-related gene expression and correlated closely with infiltration of immune cells (Raj et al., 2015). Thus, the primed state of aged microglia might need to be re-evaluated using more purely isolated microglia identified by a core marker stably expressed during homeostasis and disease, especially across the course of neurodegenerative disease progression. In vitro Methods to Study Aged Microglia Microglia are widely involved in homeostatic maintenance of the CNS, and age-associated microglial dysfunction is closely related to CNS diseases. Proper use of in vitro methods that recapitulate adult microglia is required to study microglia; however, it has been difficult to recapitulate adult microglial cells perfectly because of the complexity of their origin and developmental process. In this section, we introduce the currently used in vitro methods for an accurate understanding of microglia. The features and limitations of each method are discussed, briefly referring to well-organized previous review papers (Timmerman et al., 2018; Angelova and Brown, 2019). A brief description of the in vitro methods is illustrated in Figure 2.
FIGURE 2 | In vitro microglial culture. The methods used to obtain microglial cells are described, and the strengths of each technique, followed by its weaknesses, are listed.
The description "aged" microglia may encompass several distinct phenotypes, and the term remains imprecise (Koellhoffer et al., 2017). Because a senescent or aged-like phenotype is not sufficient to cover all features of aged microglia, we have tried to use these terms as distinctly as possible. SASP, as an alternative way to characterize aged microglia, has been described separately (Streit and Xue, 2014). Indeed, aged microglia seem to have features distinct from those of in vitro senescent microglia, although both show dysfunctional phenotypes such as impaired phagocytosis, slow migration, and slow responses to stimuli. Thus, in vitro senescent microglia do not yet recapitulate aged microglia perfectly. Microglial Cell Lines Initially, cell lines were suggested as a solution to the problem of not being able to secure enough microglial cells for detailed studies (Blasi et al., 1990; Nagai et al., 2001). Microglial cell lines were established by immortalization. Such methods include viral transduction with oncogenes (e.g., v-raf, v-myc, v-mil) or the SV40 T antigen and cancerization (e.g., p53-deficient cells), using cells derived from various species, including mouse, rat, macaque, and human (Timmerman et al., 2018). The infinite growth capacity conferred by immortalization enables passage culture and is useful for research methods such as high-throughput screening assays that require large numbers of cells, owing to the relatively high growth rate of these lines (Dello Russo et al., 2018). However, immortalization is a double-edged sword: it makes it easy to obtain tremendous numbers of cells but distorts the properties of microglia through artificial manipulation. Thus, such lines differ from adult microglia in genetic and functional respects (Butovsky et al., 2014; Das et al., 2016; Melief et al., 2016).
FIGURE 3 | Hypothesis on aged microglia heterogeneity. Based on the microglial origin and their limited repopulation, aged microglia might be composed of yolk-sac-originated (homeostatic) microglia, microglia repopulated from infiltrated monocytes, and cells arising from proliferation of homeostatic microglia.
Immortalized cells may also be unsuitable for studying long-lived adult microglia, which show very low proliferative capacity in the healthy state (Fuger et al., 2017; Haenseler et al., 2017). For studying microglial senescence, there is a report that repeated LPS stimulation (10 ng/ml, every 48 h) can induce cellular senescence in BV2 cells (Yu et al., 2012). In this study, BV2 senescence was evaluated by β-galactosidase staining, p53, and cell cycle arrest in G0/G1 phase, suggesting that multiple inflammatory stimuli may induce microglial senescence. Primary Fetal/Neonatal Microglial Culture Rodent primary microglia are commonly obtained from neonatal/fetal animals (Giulian and Baker, 1986), and human primary microglia may also be obtained from embryonic or fetal tissues (Satoh and Kim, 1994). After tissue collection, it is necessary to isolate a sufficient amount of the desired microglia at high purity. There are several enzymatic and mechanical separation methods. One is density gradient centrifugation, which can isolate microglia at more than 99% purity (Cardona et al., 2006a; Zuiderwijk-Sick et al., 2007).
Other approaches are magnetic-activated cell sorting (MACS) (Nikodemova and Watters, 2012; Mizee et al., 2017) and FACS (Olah et al., 2012; Bennett et al., 2016), which use microglial antibodies coupled to magnetic beads or fluorescent labels, respectively; a further option is the shaking procedure (Tamashiro et al., 2012). Rodent primary microglia alone are insufficient for studying human microglia because of interspecies differences in adhesion, proliferation rates, and expression of key receptors (Smith and Dragunow, 2014). Because no artificial treatment such as genetic modification is applied, fetal/neonatal microglial culture has the advantage of being closer to resident microglia than cell lines are, but passage culture is difficult and many animals are required to obtain large numbers of cells. Notably, early fetal or neonatal microglia differ in many ways, including transcriptome, function, morphology, and physiology, from adult microglia settled in the adult brain after BBB formation (Butovsky et al., 2014; Matcovitch-Natan et al., 2016; Prinz et al., 2019). For the induction of senescent microglia, long-term culture of fetal microglia has been proposed (Caldeira et al., 2014). In this study, microglia cultured for 16 days in vitro (16 DIV) showed a slightly more ramified morphology than the ameboid form seen at 2 DIV and showed reduced migration and phagocytosis compared with 2 DIV. In addition, 16 DIV microglia exhibited enhanced β-galactosidase staining and decreased autophagy, indicating that this method induces senescence in microglia. HIV-1 also induces a senescence-like phenotype in human microglia (Chen et al., 2018). Primary human fetal microglia exposed to single-round infectious HIV-1 pseudotypes had significantly elevated senescence-associated β-galactosidase activity, p21 levels, and production of cytokines (such as IL-6 and IL-8), potentially indicative of a SASP, and showed mitochondrial dysfunction. Another method to induce β-galactosidase activity in microglia is dexamethasone (DEX) treatment (Park et al., 2019). In that study, we found that DEX induced a ramified form but with dysfunctional phagocytosis and a tolerant response (decreased mRNA of pro- and anti-inflammatory cytokines), together with downregulated homeostatic genes such as Cx3cr1, Cd200r, P2ry12, and Trem2. These cells were only partially like aged microglia, because DEX-treated microglia showed increased autophagy and decreased inflammatory cytokines. Given that dystrophic microglia can be identified by high ferritin, manipulating iron content can also generate a microglial model with an aged-like phenotype (Brown, 2009). Direct Isolation of Adult Microglia and ex vivo Microglial Culture As the need for adult microglia has emerged, many studies have directly isolated microglia from adult animals. This is performed in a manner similar to primary fetal/neonatal microglial culture, with the difference that adult microglia are obtained after mechanical and enzymatic dissociation of the rodent brain. Generally, after digestion with a cocktail containing collagenase and dispase, adult microglia are separated using a discontinuous Percoll gradient, MACS, or FACS (Becher and Antel, 1996; Cardona et al., 2006a; Nikodemova and Watters, 2012; Olah et al., 2012; Bennett et al., 2016; Mizee et al., 2017). Microglia obtained in this way can be used for (single-cell) transcriptome, high-density microarray, proteomic, or cytometric analysis.
The main advantage of this method is the ability to obtain adult microglia with the specific features mentioned above. Namely, it can reflect the state of microglia present in the adult brain environment, although acutely isolated adult microglia rapidly lose TMEM119 expression in culture media (Bohlen et al., 2017). Although it has limitations, such as inefficient passage culture and low cell yield requiring large numbers of animals (Timmerman et al., 2018), it is undeniably the most reliable way to obtain actual aged microglia from aged rodents or humans. Notably, the actual features of aged microglia were confirmed by these methods (Galatro et al., 2017; Olah et al., 2018; Ximerakis et al., 2019). However, direct isolation from aged animals requires considerable effort and can be time-consuming. Alternatively, Ercc1 mutant mice, a DNA repair-deficient strain that exhibits characteristics of accelerated aging in the CNS and other tissues, might be used to study aged microglia without long aging periods, but this may not reflect natural aging because of the genetic manipulation (Raj et al., 2014). Murine microglia acutely isolated from aged mice show features different from those of senescent microglia obtained by long-term culture of fetal microglia. Senescent microglia exhibited shortened telomeres with increased telomerase activity, whereas aged microglia showed unaltered telomeres and reduced telomerase activity (Stojiljkovic et al., 2019). In this study, senescent microglia showed increased p16, p21, and p53 expression, whereas aged microglia exhibited only p16 elevation, suggesting that aged microglia show dysfunctional features but do not exhibit key senescence markers. Microglia-Like Cells From Human Induced Pluripotent Stem Cells (iPSCs) Extracting living brain cells, including microglia, directly from the animal or human brain presents technical and/or ethical problems. To overcome these issues, the in vitro models described below have been proposed. iPSC- or monocyte-derived microglia-like cells were established along with the new adult microglial markers. Induced pluripotent stem cells are reprogrammed adult cells, such as fibroblasts, generated by introducing four transcription factors (Oct3/4, Sox2, c-Myc, and Klf4) (Takahashi and Yamanaka, 2006). Existing in vitro microglia models, such as primary cultures, present difficulty in obtaining sufficient normal and disease-associated microglial cell sources. In addition, microglial identity is very sensitive to the environment, so microglia quickly lose their characteristics when separated from the brain microenvironment (Butovsky et al., 2014; Bohlen et al., 2017). To resolve these limitations and reflect microglial development in vitro as much as possible, many studies have developed protocols for in vitro differentiation of iPSCs into microglia-like cells. A first robust protocol for differentiating human iPSCs and embryonic stem cells (ESCs) into microglia-like cells using the embryoid body (EB) was proposed by Muffat et al. (2016). The microglia-like cells were cultured in serum-free conditioned media to reflect the developmental environment of actual microglia. These microglia-like cells show characteristics of human primary fetal and adult microglia in gene expression, signature markers, and microglial function (e.g., phagocytosis). In addition, they expressed the markers P2RY12 and TMEM119 and progressively acquired a ramified form.
Unlike most other iPSC-derived microglia protocols, this approach showed that iPSC-derived microglia-like cells have features of adult microglia as well as of human primary fetal microglia. Another protocol differentiated iPSCs into human microglia-like cells through exposure to defined factors followed by an astrocyte co-culture protocol, with factors involved in proliferation, such as IL-3, M-CSF, and granulocyte-macrophage CSF (GM-CSF), included in the medium (Pandya et al., 2017). Before final differentiation into microglia-like cells, an intermediate step of differentiation into hematopoietic progenitor-like cells (iPSC-HPCs) is performed, which analogously reflects the ontogeny of microglia. iPSC-HPCs show marked expression of CD34 and CD43, markers of hematopoietic cells. Subsequently, as differentiation into human microglia-like cells progresses, more microglia-related markers, such as CD11b, Iba-1, HLA-DR, TREM2, and CX3CR1, are expressed. In a similar vein to the study of Muffat et al. (2016), Abud et al. (2017) published a fully defined serum-free protocol that ensures high purity (>97%) and large quantities. Initially, iPSCs differentiate into CD43+ myeloid progenitors through exposure to a defined medium and transiently low oxygen levels (5%). After 10 days, the medium is replaced with serum-free media containing M-CSF, IL-34, TGF-β1, and insulin. Thereafter, the microglia-like cells are exposed to CD200 and CX3CL1 and continue to mature, showing increasingly ramified forms. Gene expression analysis and functional assessment demonstrated that these microglia-like cells closely resemble human fetal and adult primary microglia. Furthermore, Abud et al. (2017) demonstrated the effect of co-culture with other neural cells on microglial morphology and function as well as gene expression. Guided by microglial origin, Douvaras et al. (2017) described a reproducible protocol that uses PSC-derived myeloid progenitors, considered to correspond to in vivo primitive yolk-sac myeloid progenitors, under chemically defined conditions. PSCs, including ESCs and iPSCs, are stimulated with a myeloid inductive medium and treated with microglia-promoting cytokines. As a result, KDR+CD235a+ primitive hemangioblasts are generated, which then become CD45+CX3CR1+ microglial progenitors in vitro. Subsequently, the addition of IL-34 and GM-CSF differentiates the plated microglial progenitors into iPSC-derived microglia-like cells (Ohgidani et al., 2014), which ramify with highly motile processes and monitor the microenvironment like in vivo microglia (Davalos et al., 2005). iPSC-derived microglia-like cells express not only typical microglial markers, such as IBA1, CD11c, TMEM119, P2RY12, CD11b, and CX3CR1, but also signature genes of human primary microglia, such as C1QA, GAS6, GPR34, MERTK, P2RY12, and PROS1 (Butovsky et al., 2014). Furthermore, they show phagocytosis and intracellular Ca2+ transients in response to ADP. To recapitulate the ontogeny of microglia, Haenseler et al. (2017) established a very efficient human iPSC-derived microglia model analogous to the microglial ontogenetic development process. Microglia originate from yolk-sac-derived macrophages, which are MYB-independent and PU.1- and Irf8-dependent (Schulz et al., 2012; Kierdorf et al., 2013).
To account for this, embryonic MYB-independent iPSC-derived macrophages, harvested from EBs cultured with BMP4, VEGF, SCF, IL-3, and M-CSF, were co-cultured with iPSC-derived cortical neurons for 2 weeks. The resulting iPSC-derived microglia-like cells express major microglia-specific markers, form a highly dynamic ramified morphology, and perform phagocytosis. In addition, their transcriptome is similar to that of human fetal primary microglia. In particular, the resulting co-cultures upregulate homeostasis-related functional pathways, downregulate pathogen response pathways, and show an enhanced anti-inflammatory response compared with the corresponding monocultures. This protocol avoids repetitive cell sorting or replating, resulting in relative simplicity, high efficiency, and high yield. The above-mentioned iPSC-derived microglia-like cells have the advantage of providing a sufficient cell source. Another advantage is that iPSC-derived cells from normal donors can be compared with those from patients with neurological disorders, and the genetic background of the patient can be considered. However, despite these advantages, research using iPSC-derived microglia-like cells has limitations to overcome. First, there are many models with different protocols; to identify the most reliable approaches, comparison and integration of the different approaches are necessary (Timmerman et al., 2018). In addition, most in vitro microglia models using iPSCs are inefficient owing to low yields, long culture times, and cost (Li and Barres, 2018). The effects of the CNS microenvironment cannot be reflected, and most of the iPSC-derived microglia-like cells studied so far have the characteristics of primary rather than adult microglia. Therefore, developing methods to differentiate microglia-like cells that resemble adult microglia may be better suited to neurodegenerative disease studies. Above all, because iPSC technology entails cellular rejuvenation, iPSCs cannot reflect the aging features of the original cells obtained from aged humans (Mertens et al., 2015). To the best of our knowledge, there is as yet no report of an in vitro aging method utilizing microglia-like cells derived from iPSCs. Monocyte-Derived Microglia-Like Cells There are other methods to obtain microglia-like cells using monocytes. iPSC-derived microglia-like cells do not reflect the donor's current physiological and pathological status because of rejuvenation (Mertens et al., 2015), whereas monocyte-derived microglia-like cells have the advantage of mirroring the state of the donor (Ohgidani et al., 2015; Ryan et al., 2017; Sellgren et al., 2017). Thus, microglia-like cells induced from the monocytes of aged humans may reflect aged microglia; however, this needs to be validated. It has previously been shown that rat monocytes or macrophages cultured in astrocyte-conditioned medium (ACM) develop into microglia-like cells showing characteristics of microglia, including ramified morphology (Kettenmann and Ilschner, 1993; Schmidtmayer et al., 1994). Based on this, Leone et al. (2006) showed that human monocytes cultured in ACM exhibit microglia-like features in many respects. Later, it was found that GM-CSF and IL-34, cytokines secreted by astrocytes (Guillemin et al., 1996; Noto et al., 2014), are essential for inducing microglia-like cells from human peripheral blood cells (Ohgidani et al., 2014). In particular, IL-34 was found to be a major factor in the proliferation of microglia (Gomez-Nicola et al., 2013).
Within 2 weeks, a cocktail of GM-CSF and IL-34 successfully induced microglia-like cells from human monocytes. These cells displayed various characteristics of microglia, including ramified morphology; marker profiles such as high CD11b and CX3CR1 with low CD45 and CCR2; phagocytosis; and release of inflammation-related cytokines (Ohgidani et al., 2014). In subsequent studies, these induced microglia-like cells were shown to enable various translational approaches to studying microglia in psychiatric disorders, which can be linked with drug efficacy screening and personalized medicine to maximize therapeutic effect (Ohgidani et al., 2015). Gene expression analysis showed that microglia-specific genes involved in microglial function are also expressed in monocyte-derived microglia (Ryan et al., 2017). Among representative microglial genes, TGFBR1 and C1QB are important mediators of synaptic pruning (Bialas and Stevens, 2013), PROS1 is involved in phagocytosis (Fourgeaud et al., 2016), and P2RX7 induces microglial activation and proliferation (Monif et al., 2009). In addition, monocyte-derived microglia-like cells have been applied to translational research (Ohgidani et al., 2015; Ryan et al., 2017; Sellgren et al., 2017). However, because the progenitors of microglia are early erythromyeloid progenitors (eEMPs), whereas monocytes originate from hematopoietic stem cells, there is a fundamental limitation: monocyte-derived microglia-like cells and microglia differ in their origins. Moreover, obtaining enough microglia-like cells from human blood monocytes requires repeated invasive procedures (Beutner et al., 2013). Taken together, each method for culturing microglia in vitro has advantages and disadvantages. Although monocyte-derived microglia-like cells may better recapitulate aging, because they are not rejuvenated during reprogramming as iPSC-derived microglia-like cells are, it has not been confirmed that monocytes obtained from aged humans actually differentiate into microglia-like cells that reflect aged features. In the case of rodent microglia, several methods for inducing senescence have been proposed, as mentioned above, but these in vitro methods require a substantial number of primary microglial cells. To address this issue, our laboratory developed a system to obtain bankable and expandable adult-like microglia (NEL-MG) using the head neuroepithelial layer of the mouse embryo (You et al., 2021). CONCLUSION Microglia are yolk-sac-derived CNS cells with an origin distinct from that of neurons, astrocytes, and oligodendrocytes. They are long-lived cells, and when they die from aging or other causes, they might be replaced by proliferation of the remaining microglia or by peripheral immune cells rather than being regenerated from dedicated progenitors as other CNS cells are, suggesting heterogeneity of aged microglia (Figure 3). The functions of microglia are also very extensive, affecting brain homeostasis throughout life, from neurodevelopment to neurodegenerative change. Given the transcriptomic dissimilarity of repopulating microglia and their limited repopulating capacity, keeping the original resident microglia healthy for a long time seems to be another strategy for preventing neurodegenerative diseases. Moreover, as our understanding of microglia evolves, improved aged-microglia models for studying the microglial aging process may provide a crucial key to finding alternative therapeutic strategies for neurodegenerative diseases.
AUTHOR CONTRIBUTIONS H-JY and M-SK wrote the manuscript. M-SK supervised all the processes, determined the direction of the manuscript, and approved the final submission of the manuscript. Both authors critically revised the manuscript and confirmed the author's contribution statement.
Cognitive Neuroscience Meets the Community of Knowledge Cognitive neuroscience seeks to discover the biological foundations of the human mind. One goal is to explain how mental operations are generated by the information processing architecture of the human brain. Our aim is to assess whether this is a well-defined objective. Our contention will be that it is not because the information processing of any given individual is not contained entirely within that individual’s brain. Rather, it typically includes components situated in the heads of others, in addition to being distributed across parts of the individual’s body and physical environment. Our focus here will be on cognition distributed across individuals, or on what we call the “community of knowledge,” the challenge that poses for reduction of cognition to neurobiology and the contribution of cognitive neuroscience to the study of communal processes. Although assumption (a) is typical of theories in the psychological and brain sciences (for reviews, see Gazzaniga et al., 2019;Barbey et al., 2021), it is not universal. Proponents of embodied cognition see knowledge as distributed across the brain, the body, and artifacts used to process information (e.g., Barsalou, 2008) and proponents of cultural psychology sometimes see knowledge as embedded in cultural practices (Duque et al., 2010;Holmes, 2020). But assumptions (b) and (c) are widely shared by disciplines that focus on cognition (for a review, see Boone and Piccinini, 2016). The idea is that what really counts as cognition is mediated by individual processes of reasoning and decision making; that cognitive processing is distinct from interactions with books, the internet, other people, and so on. Moreover, other people are obviously sources of information, but their value for an individual is in the information they transfer. The goal of this manuscript is to question the generality of these assumptions, spell out some of the resulting limitations of the cognitive neuroscience approach, and try to suggest some more constructive directions for the field. Our contention will be that the information processing of any given individual is not contained entirely within that individual's brain (or even their bodies or physical environments). Rather, it typically includes components situated in the heads of others, and that the transfer of information is more the exception than the rule. Assumption (a) as usually understood implies (b). If knowledge is represented in the brain, then it is represented by individuals. Thus standard neuroimaging methods assess brain activity and task performance within the individual (for a review of fMRI methods, see Bandettini, 2012). According to this view, the neural foundations of the human mind can be discovered by studying the individual brain and identifying common patterns of brain activity across individuals. Thus, by averaging data from multiple subjects, cognitive neuroscience seeks to derive general principles of brain function and thereby reveal the mechanisms that drive human cognition. This approach lies at the heart of modern research in cognitive neuroscience, reflecting a disciplinary aim to generalize beyond the individual to characterize fundamental properties of the human mind using widely held methodological conventions, such as averaging data from multiple subjects, to infer general principles of brain function (Gazzaniga et al., 2019). Although assumption (a) implies (b), the converse does not also hold. 
If knowledge is represented by the individual, it need not be represented exclusively within the brain. More importantly, as we will argue, an individual's knowledge not only arises in large part from communal interactions, but also depends on cognitive states of other members of the community. This places limits on the utility of studying individual brains to infer general principles of the collective mind. Our conclusion is decidedly not that cognitive neuroscience makes no contribution to the study of cognition. It is that cognitive neuroscience does not provide a sufficient basis to model cognition. Social neuroscience is an emerging field that addresses part of the problem, as it takes as a central tenet that "brains are not solitary information processing devices" (Cacioppo and Decety, 2011). Nevertheless, the discussions we are aware of within the field of cognitive neuroscience still abide by assumptions (b) and (c). THE COMMUNITY OF KNOWLEDGE AND THE LIMITS OF THE INDIVIDUAL We start with assumption (b). Years of research in psychology, cognitive science, philosophy, and anthropology have shown that human cognition is a collective enterprise and is therefore not to be found within a single individual. Human cognition is an emergent property that reflects communal knowledge and representations that are distributed within a community (Hutchins, 1995;Clark and Chalmers, 1998;Wilson and Keil, 1998;Henrich, 2015;Mercier and Sperber, 2017;Sloman and Fernbach, 2017). By "emergent" property we mean nothing elusive or mysterious, but simply certain well-documented properties of groups that would not exist in the absence of relevant properties of individuals, but are not properties of any individual member of the group, or any aggregation of properties of some or all members of the group. Accumulating evidence indicates that memory, reasoning, decision-making, and other higher-level functions take place across people. The evidence that mental processing is engaged by a community of knowledge is multifaceted (for a review, see Rabb et al., 2019). The claim that the mind is a social entity is an extension of the extended mind hypothesis (Clark and Chalmers, 1998): Cognition extends into the physical world and the brains of others. The point is not that other people know things that I do not; the point is that my knowledge often depends on what others know even in the absence of any knowledge transfer from them to me. I might say, "I know how to get to Montreal, " when what I really mean is that I know how to get to the airport and the team piloting the aircraft can get from the airport to Montreal. Similarly, one might say that "what makes a car go" is the motor: that's why it's called a "motor, " after all. But while a full account will include the engine as a key contributor, the propulsion system is distributed over the engine, drive shaft, the human who turns the key, fuel, a roadway, and more. Changing the boundaries of what has traditionally been considered cognitive processing in an analogous way -from individual brains to interacting communities -perhaps raises questions of who should get credit and who should take responsibility for the effects of an individual's action, but it is nevertheless an accurate description of the mechanisms humans use to process information. 
Furthermore, as the boundaries for what counts as cognitive processing shift, the operational target for studying the human mind moves beyond the scope of methods that examine performance through the lens of the individual. Philosophers analyzing natural language illustrate how cognitive processes are extended into the world. The classic analysis is by Putnam (1975), who points out that we often use words whose reference (or denotation or extension), and therefore, according to Putnam, whose meaning, is determined by factors outside one's brain or mind (i.e., externalism). One could see Humpty Dumpty as an extreme and defiant internalist: "When I use a word, it means precisely what I want it to mean, no more and no less" (Carroll, 1872). Putnam's argument is the subject of vigorous and sophisticated but not entirely conclusive debate (Goldberg, 2016; see also Burge, 1979). Nonetheless it is now widely agreed that some form of externalism is at least a necessary part of an explanation of how our everyday terms have their referents (or denotations) and meanings. The philosopher whom one might call the Godfather of Externalism, Wittgenstein (1973), preferred to draw attention to what he saw as linguistic facts that had been overlooked, above all, that the meaning of words depends on (or is even identical to) their use. Although that bald statement is highly controversial, what matters from our point of view is that the meaning of a word and its correct use depend on collective knowledge that extends beyond the individual, reflecting a social context (Boroditsky and Gaby, 2010). Thus, for a community of knowledge to support meaning and communication, there must be sufficient stability of common usage even as usage typically changes over time. The same holds for sentence meanings, as in, "Zirconium comes after Yttrium in the Periodic Table." The speaker may have long ago forgotten (or never even knew) what exactly Zirconium is and why one thing comes after another in the Periodic Table. Nonetheless the statement has a meaning that has been fixed by the appropriate members of the scientific community and propagated more-or-less successfully to generations of students. The speaker's statement is true and has that communally established meaning, no matter how confused the speaker may be. Some might distinguish the speaker's meaning from the correct, communally ordained meaning. That is important in some contexts (e.g., in teaching and in evaluating students), but the point here is that the sentence has a precise meaning established by chemical science, even if that is not precisely what is in the speaker's head, but only in the heads of others. The same holds of theories. The statement "According to modern chemistry there are more than a hundred elements" is true regardless of how well or poorly the speaker might understand modern chemistry. It is true because "modern chemistry" means the chemical theories agreed upon by socially recognized experts. This holds even if the relevant theories are no longer in the speaker's head, and even if the speaker never understood the theories. These remarks on social meaning converge with recent work in the emerging discipline of "social epistemology" (Goldman, 1999), the study of knowledge as a social entity. We will speak of "knowledge" in an everyday sense, without entering into the labyrinthine and ultimately inconclusive attempts at definition offered by philosophers from the time of Plato, including attempts to say what "really constitutes" social knowledge.
What matters here is that research within social epistemology demonstrates that successful transmission of knowledge clearly does occur and depends on three general conditions (Goldberg, 2016): (i) social norms of assertion; (ii) reliable means of comprehending what is said (which depend on social norms of meaning and usage); and (iii) a reliable way of telling a reliable source of knowledge from an unreliable one. For reasons we elaborate below, we believe that the role of society in epistemology is not only to transmit knowledge from one individual to another, but to retain knowledge even when it is not transmitted. Sloman and Fernbach (2017) extended the externalist project well beyond a concern with the meanings of words, to large swathes of conceptual knowledge. Outside their narrow areas of expertise, individuals are relatively ignorant (Zaller, 1992; Dunning, 2011). In any given domain, they know much less than there is to know, but nonetheless do know certain things that others understand more fully. The extent to which we rely on others in this way is often obscured by the fact that people tend to overestimate how much they know about how things work (Rozenblit and Keil, 2002; Lawson, 2006; Fernbach et al., 2013; Vitriol and Marsh, 2018). They overestimate their ability to reason causally (Sloman and Fernbach, 2017). They also overestimate what they know about concept meanings (Kominsky and Keil, 2014) and their ability to justify an argument (Fisher and Keil, 2014), and they claim to have knowledge of events and concepts that are fabricated (Paulhus et al., 2003). The best explanation for our tendency to overestimate how much we know is that we confuse what others know for what we know (Wilson and Keil, 1998). Others know how things work, and we sometimes fail to distinguish their knowledge from our own. The idea is the converse of the curse of knowledge (Nickerson, 1999). In that case, people tend to believe that others know what they themselves know (this is part of what makes teaching hard). In both cases, people are failing to note the boundary among individuals. Circumstances can produce a rude awakening if things go wrong and we suddenly need to understand how to fix them, or if we are otherwise challenged to produce a full explanation either in a real-world situation or by a psychologist. Nonetheless, as Goldman (1999) observes, even a shallow understanding of a concept, idea, or statement can give us valuable practical information. Fortunately, we can know and make use of a good many truths without ourselves possessing the wherewithal to prove them, so long as our limited understanding is properly anchored elsewhere. We develop multiple examples below. Meanwhile, from a very broad perspective, we note that the conceptual web is tangled and immense, containing far more than a mere mortal could store and make sense of (Sloman and Fernbach, 2017). Thus we are by nature creatures that rely heavily on others to have full understandings of word meanings ("semantic deference" in the philosophical literature) and a fuller and more secure grasp of ideas, statements, or theories than our own incomplete grasp, reflected in our shallow understanding.
This dovetails not only with experimental results (Rozenblit and Keil, 2002; Fernbach et al., 2013; Kominsky and Keil, 2014; Sloman and Rabb, 2016), but also with recent anthropological work on culture-gene coevolution showing that cultural accumulation exerted selective pressure for genetic evolution of our abilities to identify and access reliable sources of information and expertise (e.g., Richerson et al., 2010; Henrich, 2015). At a social level, the fact that knowledge is communal also has a political dimension. As societies develop, group policy and decision-making will depend on the aggregation, coordination, and codification of various sorts of knowledge distributed across many individuals (e.g., experts in the production, storage, distribution, and preparation of food). There is lively debate among political theorists about whether command and control societies, democracies, or something else can best fulfill the needs and aspirations of their members (Anderson, 2006; Ober, 2008). Is decision-making best served by cloistered experts or through information gathered from non-experts as well? Non-experts presumably have greater access to details of local situations, but attempts to utilize widely distributed knowledge pose greater problems of aggregation and coordination. As Hayek (1945) remarked, the aggregation and deployment of widely distributed information is a central issue for theories of government. However, our interest here is not the relative merits of different forms of government. We mention these issues only to illustrate the far-reaching and pervasive importance of information processing in social networks and, by implication, the need for a political level of explanation in the understanding of a community of knowledge. SOCIAL KNOWLEDGE WITHOUT SOCIAL TRANSMISSION: OUTSOURCING Work on collective cognition points to several ways that individual cognition depends on others (Hemmatian and Sloman, 2018). One is collaboration: problem-solving, decision-making, memory, and other cognitive processes involve the joint activity of more than one person and, in many contexts, mutual awareness of a joint intention to perform some task. Work on collaboration has focused on team dynamics (Pentland, 2012) and group intelligence (Woolley et al., 2010). A second form of cognitive dependence on others, and the one that grounds our argument, is outsourcing: the knowledge people use often sits (or sat) in the head of someone else, someone not necessarily present (or even alive). Frequently, outsourcing requires that we have access to outsourced knowledge when the need arises. But often merely knowing we have access is sufficient for practical purposes (e.g., we go to Tahiti assuming we'll find what we need to enjoy ourselves when we're there). On occasion we do access the information, and this requires some type of social transmission. Such transmission comes in the form of social learning of a skill, practice, norm, or theory on the one hand, or in the form of more episodic or ad hoc accessing of information for limited, perhaps one-time, use (Barsalou, 1983). A prime example of the former would be an apprentice learning a trade from a master; of the latter, "googling" to find out who won the 1912 World Series. The transmission of information around a social network is a key determinant of human behavior (Christakis and Fowler, 2009).
A key requirement in using information that is sitting in someone else's head is the possession of what we will call epistemic pointers ("epistemic" meaning having to do with knowledge): the conscious or implicit awareness of where some needed information can be found. Sometimes we can envision many potential pathways to an information source, whether direct or indirect, and sometimes very few. Thus we may envision many potential information sources for how to get to Rome (travel agents, friends who have been there), and various pathways by which we might access a given source (e.g., find the phone number of a friend who said she had a good travel agent) but fewer pathways to find out how to get to the rock shaped like an elephant that someone mentioned in passing. Our representations of pointers, to a source or to a step on a pathway to a source, can be partial and vague, providing little or no practical guidance ("some physics Professor knows it"), or full and precise ("it's in Einstein's manuscript on the special theory of relativity"). If we are completely clueless, we can be said to lack pointers and pathways, and simply have a placeholder for information. The evidence of human ignorance that we review below leads us to suspect that the vast majority of the knowledge that we have access to and use is in the form of placeholders. SETTING THE STAGE: COLLABORATION The centrality of collaboration for human activity derives from the fact that humans are unique in the cognitive tools they have for collaboration. Tomasello and Carpenter (2007) make the case that no other animal can share intentionality in the way that humans can in the sense of establishing common ground to jointly pursue a common goal, and a large body of work describes the unique tools humans have to model the thoughts and feelings, including intentions and motivations, of those around them (e.g., Baron-Cohen, 1991). The role of collaboration in specifically cognitive performance has been most fully studied in memory. Wegner et al. (1991) report some of the early work showing that groups, especially married couples, distribute storage demands according to relative expertise. They call these "transactive memory systems." Theiner (2013) argues that transactive memory systems reflect emergent group-level memories, providing evidence that: (i) members of a transactive memory system are not interchangeable (because each member makes unique contributions to the group); (ii) if members are removed from the group, the system will no longer function (omitting essential components of the grouplevel memory); (iii) the disassembly and reassembly of the group may impair its function (for example, when members of the group no longer understand the distribution of knowledge within the system and what information they are responsible for knowing); and (iv) cooperative and inhibitory actions among members are critical (given the interactive and emergent nature of transactive memories) (for a review, see Meade et al., 2018). Wilson (2005) claims that these properties of a transactive memory system have important political consequences as they affect the commemoration and memorialization of politically relevant events and culturally important origin stories that shape nationalism and attitudes toward human rights and other issues. Memory systems play a critical role in communities. Further evidence for the importance of collaboration in thought comes from naturalistic studies of group behavior. 
The seminal work was conducted by Hutchins (1995). He offered a classic description of navigating a Navy ship to harbor, a complex and risky task. The process involves multiple people contributing to a dynamic representation of the ship's changing location with reference to a target channel while looking out for changing currents and other vessels. Various forms of representation are used, all feeding into performance of a distributed task with a common goal. Sometimes the common goal is known only by leadership (in the case of a secret mission, say). Nevertheless, successful collaboration involves individuals pursuing their goals so as to contribute to the common goal. Many of the tasks we perform everyday have this collaborative nature, from shopping to crossing the street. If a car is coming as we cross, we trust that the driver won't accelerate into us, and the more assertive street crossers among us expect them to slow down in order to obtain the common goal of traffic flow without harm to anyone. Banks and Millward (2000) discuss the nature of distributed representation and review data showing that distributing the components of a task across a group so that each member is a resident expert can lead to better performance than giving everyone the same shared information. Hutchin's nautical example illustrates this, insofar as some essential jobs require multiple types of expertise. Other jobs might not require this, so that crew members may substitute for one another, because all of them have the same basic information or skill level needed for the job. Often in real life there will be a mix, so that the task occupies an intermediate position relative to Banks and Millward's two types of group (i.e., diverse local experts versus all group members having the same knowledge). Work on collective intelligence also provides a good example of emergent group properties, illustrating how collective problem-solving relies more on collaboration and social interconnectedness than on having individual experts on the team (Woolley et al., 2010). COLLABORATION AND NEUROSCIENCE: THE CASE OF NEURAL COUPLING Research in cognitive neuroscience has not ignored these trends in the study of cognition. An emerging area of research investigates the communal nature of brain networks, examining how the coupling of brain-to-brain networks enables pairs of individuals or larger groups to interact (Montague et al., 2002;Schilbach et al., 2013;Hasson and Frith, 2016). These studies deploy a generalization of neuroimaging methods, applying techniques that were once used to assess intra-brain connectivity (i.e., within the individual) to examine inter-subject connectivity (i.e., between different subjects; Simony et al., 2016). This can be achieved through experiments in which brain activity within multiple participants is simultaneously examined (i.e., "hyperscanning;" Montague et al., 2002) or analyzed post hoc (Babiloni and Astolfi, 2014). Such approaches have been applied to assess brain-to-brain communication dynamics underlying natural language (e.g., Schmalzle et al., 2015). Recently, researchers have placed two people face-toface in a single scanner to examine, for example, the neural mechanisms underlying social interaction (e.g., when people make eye contact; for a review, see Servick, 2020). The situation -very noisy and now also very crowded -does not score high on ecological validity. 
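To make the inter-subject analysis strategy concrete, here is a minimal sketch of the kind of computation that inter-subject correlation analyses rest on, in the spirit of the work cited above (e.g., Simony et al., 2016; Hasson and Frith, 2016). This is an illustrative toy example, not code from any of those studies: the function name, array shapes, and simulated signals are our own assumptions, and real analyses add extensive preprocessing, alignment, and statistical testing.

import numpy as np

def intersubject_correlation(ts_a, ts_b):
    # Pearson correlation, per region, between two subjects' time series.
    # ts_a, ts_b: arrays of shape (n_timepoints, n_regions), assumed to be
    # preprocessed signals recorded while both subjects receive the same
    # naturalistic stimulus (e.g., listening to the same spoken story).
    a = (ts_a - ts_a.mean(axis=0)) / ts_a.std(axis=0)
    b = (ts_b - ts_b.mean(axis=0)) / ts_b.std(axis=0)
    return (a * b).mean(axis=0)

# Toy data: a shared stimulus-driven component plus subject-specific noise.
rng = np.random.default_rng(0)
stimulus = rng.standard_normal((200, 10))            # 200 timepoints, 10 regions
subject1 = stimulus + 0.8 * rng.standard_normal((200, 10))
subject2 = stimulus + 0.8 * rng.standard_normal((200, 10))
print(np.round(intersubject_correlation(subject1, subject2), 2))
# Regions driven by the shared stimulus show positive inter-subject correlation.

The point of the sketch is simply that the unit of analysis is a pair of brains responding to a common input, rather than a single brain.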
Also, it is hard to see how one could scale this approach up to study larger groups (big scanners, little participants?). Nonetheless this is a reasonable place to start, and here, as with hyperscanning and retrospective analysis of neuroimaging data, one might well secure suggestive results. So although the examination of brain-to-brain networks is rare in cognitive neuroscience, with only a handful of studies conducted to date (for a review, see Hasson and Frith, 2016), this approach represents a promising framework for extending cognitive neuroscience beyond the study of individuals to an investigation of dyads, groups, and perhaps one day to larger communities. This approach has set the stage for research on the neural foundations of communal knowledge, investigating how cognitive and neural representations are distributed within the community and how information propagates through social networks, for example, based on their composition, structure, and dynamics (for a review, see Falk and Bassett, 2017; for a discussion of hyperscanning methods, see Novembre and Iannetti, 2020;Moreau and Dumas, 2021). Evidence from this literature indicates that the strength of the coupling between the neural representation of communication partners is associated with communication success (i.e., successful comprehension of the transmitted signal; Stephens et al., 2010;Silbert et al., 2014;Hasson and Frith, 2016). For example, the degree of brain-to-brain synchrony within networks associated with learning and memory (e.g., the default mode network) predicts successful comprehension and memory of a story told among communication partners (Stephens et al., 2010). Indeed, evidence indicates that people who are closely related within their social network (i.e., individuals with a social distance of one) demonstrate more similar brain responses to a variety of stimuli (e.g., movie clips) relative to individuals who share only distant relations (Parkinson et al., 2017). Research further suggests that the efficiency of inter-subject brain connectivity increases with the level of interaction between subjects, providing evidence that strong social ties predict the efficiency of brain-to-brain network coupling (Toppi et al., 2015; for a discussion of the timescale of social dynamics, see Flack, 2012). THE MAIN EVENT: OUTSOURCING A community of knowledge involves more than coupling. We do collaborate, and we engage in joint actions involving shared attention, but we also make use of others without coupling: We outsource to knowledge housed in our culture, beyond the small groups we collaborate with. In the best cases, we outsource to experts. A great many people know that the earth revolves around the sun, but only a much smaller number know how to show that. Both sorts of people are part of a typical community of knowledge, and both are, by community standards, said to know that the earth revolves around the sun. This holds even though the non-expert does not know who the experts are, does not remember how she came to have that knowledge, and does not know what observations and reasoning show that our solar system is heliocentric. Outsourcing in some circumstances can make us vulnerable to a lack of valuable knowledge. Henrich (2015) describes how an epidemic that killed off many older and more knowledgeable members of the Polar Inuit tribe resulted in the tribe losing access to much of its technology: Weapons, architectural features of their snow homes, and transportation (e.g., a particular type of kayak). 
Knowledge about how to build and use these tools resided in the heads of those lost members. Without them, the remaining members of the tribe were unable to figure out how to build such tools, and were forced to resort to less effective means of hunting, staying warm, and traveling. The issue here is not collaboration. Tool users were not cognitively coupling with the tool providers. Rather, they were accessing and making use of the latter's knowledge without acquiring it, in this case outsourcing both the expertise and the production of vital artifacts. Assumptions that individuals had been able to rely on (i.e., that they would have access to a tool for obtaining food) no longer held. The problem was that the younger members of the tribe had outsourced their knowledge to others who were no longer available. Anthropologists have documented numerous cases of loss of technology through death of the possessors of a society's specialized knowledge, or through isolation from formerly available knowledge sources (e.g., Henrich and Henrich, 2007). By the same token, a community can add new expertise by admitting (or forcibly adding) new members with special skills (e.g., Weatherford, 2005). Sometimes we are aware that we are outsourcing, for instance when we explicitly decide to let someone else do our cognitive work for us (as one lets an accountant file one's taxes). In such cases, we explicitly build a pointer, a mental representation that indicates the repository of knowledge we do not ourselves fully possess and that anchors the shallow or incomplete knowledge we do possess. We have a pointer to an accountant or tax lawyer (whether to a specific person or just to a "tax preparer to be determined"), just in case we are audited. But often we outsource without full awareness, acting as if we have filled gaps in our knowledge even though no information has been transferred. Our use of words is often licensed by knowledge only others have, our explanations often appeal to causal models that sit in the heads of scientists and engineers, and our political beliefs and values are inherited from our spiritual and political communities. More generally, people's sense of understanding, reasoning, decision-making, and use of words and concepts are often outsourced to others, and often we do not know whom we are outsourcing to, or even that we are doing it. For instance, when we say "they landed on the moon, " most of us have little idea who they refers to, and often lack conscious awareness that we don't know who they were. Or we say, "We know that Pluto is not strictly speaking a planet." We know that much on reliable grounds. What little we know is anchored by the possibility of transmission (direct or perhaps very indirect) from communal experts; specifically, the scientists who set the criteria for planethood, and who know whether Pluto qualifies and on the basis of what evidence. Again, it is highly advantageous to be able to outsource -and in fact necessary -since we can't all master full knowledge of all the crafts, skills, theoretical knowledge, and up-to-date-details of local situations that we need or might need to navigate our environment. Moreover, people believe they understand the basics of helicopters, toilets, and ballpoint pens even when they do not (Rozenblit and Keil, 2002). Fortunately, others do. 
In addition, the knowledge that others do increases our sense of understanding not only of artifacts, but of scientific phenomena and political policies (Sloman and Rabb, 2016;Rabb et al., 2019). In fact, just having access to the Internet also increases our sense of understanding even when we are unable to use it (Fisher et al., 2015). These findings cannot be attributed to memory failures because, in the vast majority of cases, the relevant mechanisms were never understood. And the studies include control conditions to rule out alternative explanations based on self-presentation effects and task demands. What they show is that mere access to information increases our sense of understanding. This suggests our sense of understanding reflects our roles as members of a community of knowledge, and suggests that we maintain pointers to or placeholders for information that others retain. The fact that access causes us to attribute greater understanding to ourselves implies that our sense of understanding is inflated. This in turn implies that we fail to distinguish those pointers or placeholders from actual possession of information; we don't know that we do not really know how artifacts like toilets work, but the awareness that others do leads us to think we ourselves do, at least until we are challenged or we land in a situation demanding genuine expertise (Call the plumber now!). More evidence for this kind of implicit outsourcing comes from work on what makes an explanation satisfying. People find explanations of value even if they provide no information, as long as the explanations use words that are entrenched in a community. For example, Hemmatian and Sloman (2018) gave subjects a label for a phenomenon (e.g., "Carimaeric") and told them that the label referred to instances with a specific defining feature (e.g., stars whose size and brightness varied over time). Then the label was used as an explanation for the defining property (someone asked why a particular star's size and brightness varied over time and was told that it's because the star is Carimaeric). Subjects were asked to what extent the explanation answered the question. They answered more positively if the label was entrenched within a community than if it was not. Similar findings have been obtained using mental health terms, even among mental health professionals (Hemmatian et al., 2019). In these cases, there is no coupling between the unidentified community members who use the explanation and the agent. There is merely the heuristic that the fact that others know increases my sense of understanding. This heuristic is so powerful that it operates even when others' knowledge has no informational content. Some of the clearest evidence for this heuristic comes from the political domain. We often take strong stances on issues that we are ignorant about. These authors believe strongly in anthropogenic climate change despite being relatively ignorant of both the full range of evidence and the mechanism for it. We rely on those scientists who study such things. Political issues tend to be complex and we need to rely on others, at least in part, to form and justify our opinions. In a representative democracy, for instance, we try to be informed on key issues, but rely on specialized committees to investigate matters more thoroughly. 
For better or for worse, individual support for policies, positions, and leaders comes largely from partisan cues rather than nonpartisan weighing of evidence (Cohen, 2003;Hawkins and Nosek, 2012;Anduiza et al., 2013;Han and Federico, 2017;Van Boven et al., 2018). A growing body of evidence indicates that partisan cues determine how we understand events (Jacobson, 2010;Frenda et al., 2013; but see Bullock et al., 2015) and even whether we take steps to protect ourselves from infectious disease (Geana et al., 2021) 1 . Marks et al. (2019) show that people use partisan cues to decide whose advice to follow in a competitive game even when they have objective evidence about who the better players are. When evaluating data, we are often more concerned with being perceived as good community citizens by acceding to our community's mores than we are with making accurate judgments (Kahan et al., 2011). Such a bias has a rationale if it maintains community membership, and membership is deemed more important than being correct. Outsourcing knowledge, including the choice of whom to outsource to, is a risky affair. One must estimate what the source does and does not know, their ability to transmit information, and whether their interests align with yours. One must determine how much to trust potential sources of information. Outsourcing, whether influenced by partisan bias or not, is a direct consequence of the human need and tendency to construct pointers to knowledge that other people store. The basic features of how a community holds knowledgerelative ignorance associated with epistemic pointers to expertise-apply to both social information and disinformation, to well-grounded knowledge, as well as fervently held nonsense perpetrated by unreliable sources. Community norms about what counts as knowledge, and as a reliable pathway of knowledge transmission, may vary greatly: One subculture will require, for some subject matters, scientific expertise on the part of an ultimate source, along with reliable paths of transmission of scientific knowledge, paths often institutionalized, as with schools or trade unions and their certifications. Another subculture will consider God the ultimate source of understanding in important areas, and divine revelation, or the word of officially ordained spokespersons, as appropriate paths of dissemination. Thus the role of our social networks goes beyond actively sharing information. We use them to represent and process information, such that the network itself serves as an external processor and storage site. We trust others to maintain accurate statistics, to distil news, to total our grocery bill, help us fill out our tax forms, and to tell us what position to take on complex policy. In all such tasks, representation and processing of essential information does not in general occur in individual brains. They do not occur in individual brains even if we allow that those brains are coupled within a social network. Representation and processing occur over a larger portion of an encompassing network, and potentially over the entire network, branching out to include our sources, our sources' sources, and any intermediaries such as books, the internet, or other people, along the paths of transmission. OUTSOURCING IN COGNITIVE NEUROSCIENCE: CONSTRUCTING EPISTEMIC POINTERS To explain phenomena associated with outsourcing, we cannot appeal to coupling, because coupling requires specification of who is coupling with whom. 
To explain outsourcing, cognitive neuroscientists must appeal to a different theoretical construct: Neural pointers or placeholders, representations in the brain that act as pointers to knowledge held elsewhere. The work in cognitive neuroscience that most directly addresses the mechanisms of outsourcing concerns how the representation of knowledge relates to affiliation, on whom we trust to retain reliable knowledge. Putting aside the role of trust in institutions, social neuroscience research examining trust in more personal contexts indicates that trust and cooperation are mediated by a network of brain regions that support core social skills, such as the capacity to infer and reason about the mental states of others (for reviews, see Adolphs, 2009;Rilling and Sanfey, 2011). This work provides the basis for future research investigating how the neurobiology of trust contributes to the representation and use of outsourcing in collective cognition. To do so, however, the field will need to move beyond the use of "isolation paradigms" in which subjects observe others whom they might or might not then trust (Becchio et al., 2010). In such cases, subjects neither participate in direct social interaction with potential objects of trust nor outsource their own reasoning to others (Schilbach et al., 2013). Such observation is seldom the sole basis of epistemic pointers, and often is not involved at all. Instead, pointers typically depend on cues that reflect how third parties or the community as a whole regard a potential source. This can involve informal gossip or more institutionalized "rating systems" and reviews, where the latter will bring us back to social institutions. So there is a vast arena, virtually unexplored by social neuroscience, starting with the origin and nature of the neural mechanisms that serve as pointers to communal knowledge. THE IRREDUCIBILITY OF THE COMMUNITY OF KNOWLEDGE The implication of our discussion is that many activities that seem solitary-like writing a scientific paper-require a cultural community as well as the physical world now including the Internet (to ground language, to support claims, to provide inspiration and an audience, etc.). Does this mean there is no solely neurobiological representation for performing such tasks? Perhaps neurobiological reduction can be accomplished by giving up on the idea of reduction to a single brain, and instead appeal to reduction to a network of brains (Falk and Bassett, 2017). Perhaps a broader view of cognitive neuroscience as the study of information processing in a social network of neural networks can overcome the challenge posed for cognitive neuroscience by the community of knowledge. Can networks of individuals processing together be reduced to networks of brains interconnected by some common resource, perhaps some form of neural synchrony? We believe the answer is "no." For one thing, the relevant social network is frequently changing, as is membership in groups addressing different problems (for climate change, it involves climate scientists but for predicting football scores, it involves football fans). So there are no fixed neurobiological media to appeal to. This might seem to be irrelevant, as the goal of cognitive neuroscience is not to reduce cognition to a group of specific brains. Rather, one studies specific brains in order to find general patterns of activity that occur in different brains. 
But this is precisely the problem; namely, the general pattern may not capture specific properties exhibited by the individual. Generalization from the group to the individual depends on equivalence of the mean and variance at each level; an equivalence that has increasingly been called into question (Fisher et al., 2018). The same problem will almost certainly arise with generalizations about multiple groups' performance of a given task. Indeed the problem may be much worse, as changing group membership may introduce even greater variation across groups of the patterns of interaction that produce a group's performance. Changes in membership will not just mean changes in the attributes and resources the members bring to the group, but also -and more strikingly -potentially very large differences in the way members interact, even if they happen to produce the same result (e.g., if they forecast the same football score as another group whose members interacted in their own, different way in arriving at that prediction). Studies of group dynamics and organizational behavior recognize that many factors affect the efficiency and result of group collaboration: the relative dominance of discussion by some particular member(s), the timidity of others, the motivations of members, the level of experience and expertise of the members, the level of relevant knowledge about the particular teams involved, the stakes involved in making a good prediction, time limitations, the degree of synergy among team members, size of the group, form of discussion used (Hirst and Manier, 1996;Cuc et al., 2006), demographic makeup of the members, and so on. Different fans, or even the same fans on different occasions, can arrive at the same score forecasts for the same game by an unlimited number of patterns of interaction. This not only produces the problem of multiple realization (of a type of group performance on a given task) on a grand scale, but indicates that there will be no tolerably definite and generalizable pattern of group dynamics that applies to particular groups addressing the same given task. Hence there is no one general pattern, or even manageable number of patterns, to be reduced to neuroscience. On a more positive note, research in group dynamics and organizational behavior has, as just noted, identified numerous factors that enter into group performance. So cognitive neuroscience (social and individual) can, by drawing on that research, investigate the neural underpinnings of types of factors such as trust, mind-reading capacities, and many others that drive different forms of group interaction, and this will be essential for an account of group cognition if such an account is ever to be had. But that is a far cry from reducing group behavior to any variety of neuroscience. GROUP INTELLIGENCE AND INVENTIVENESS Anthropological and psychological research, in the lab and in the field, strongly reinforces the point: group intelligence and group inventiveness are not just the properties of an individual (such as the smartest or most inventive member of the group), or an average of the members' properties, or an aggregate of the members' individual cognitive properties (Woolley et al., 2010). They are sometimes quite surprising properties that emerge from interactions among members of the group, in some cases as a matter of learning, sometimes just from a repeated exchange of ideas, sometimes from a group of initially equal members, sometimes from a group with one or two initial stand outs. 
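The group-to-individual generalization worry raised above (Fisher et al., 2018) can be illustrated with a small simulation. The sketch below is purely illustrative and uses invented numbers: within every simulated person the two variables are negatively related, yet the pooled, group-level correlation comes out positive, so the group-level pattern misdescribes each individual.

import numpy as np

rng = np.random.default_rng(1)
n_people, n_obs = 20, 50
xs, ys = [], []
for _ in range(n_people):
    base = rng.normal(0, 5)                        # stable between-person difference
    x = base + rng.standard_normal(n_obs)          # within-person fluctuation
    y = base - 0.8 * (x - base) + 0.3 * rng.standard_normal(n_obs)
    xs.append(x)
    ys.append(y)

within = np.mean([np.corrcoef(x, y)[0, 1] for x, y in zip(xs, ys)])
pooled = np.corrcoef(np.concatenate(xs), np.concatenate(ys))[0, 1]
print(f"mean within-person correlation: {within:.2f}")   # strongly negative
print(f"pooled group-level correlation: {pooled:.2f}")   # strongly positive

If group membership and interaction patterns shift from task to task, as argued above, the mismatch between aggregate descriptions and the particular case only gets worse.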
The effect of group interaction can be positive or negative depending on the motivations, personal traits, group camaraderie and various situational constraints (e.g., time limitations, availability of paper and pencil, food, and rest). The moral is that examination of the brains of group members will not reveal or predict precisely how the group as a whole will perform, nor through what complex pattern of interaction or mechanisms it arrived at a given result. Even in a relatively small group there will be an enormous number of interactions that might produce any given result, and that number will increase exponentially with any increase in group size, not to mention the introduction of other potentially influential factors. Thus there is no way to identify any particular neurobiological pattern (or manageably small number of patterns) across brains as the way(s) in which groups produce new knowledge, or even the way the same group functions on different occasions or with regard to different sorts of cognitive tasks. Put another way, even if we could find out through observation, self-report, or fMRI conducted in everyone, that specific members of a given group engaged in certain specific types of interaction with other specific members, and we were able to reduce that to neurobiological terms, we would not be able to say more than that this is one of innumerable ways a particular group result might be realized in a particular social and physical context. An open-ended list of possible realizations at the psychological or behavioral level does not support a reduction of this bit of psychological description to cognitive neuroscience even if it tells us a lot about what goes into that performance. Note once again that we need functional descriptions, which will themselves be complex and predictive of behavior in only a limited way. Functional descriptions will, as with individual psychology and neuroscience, provide essential guidance and support for social neuroscience, and potentially draw on insights from neuroscience. JUSTIFICATION AND COMMUNAL NORMS We saw earlier that within a community of knowledge most of what we know is anchored in the heads of people doing scientific, technical, and other sorts of intellectual work, or in the knowhow of expert mechanics, electricians, potters, and so on. Thus, most of an individual's knowledge is just more or less shallow understanding or very limited practical knowhow, along with a more-or-less precise pointer to expert knowledge (Rabb et al., 2019). For instance, we know that "smoking causes lung cancer" but most of us are not sure why. So the neurobiological representations under study are really mostly pointers to knowledge that experts have or to pathways of transmission by which we can reliably access that information. Hence, the network that anchors much of our knowledge about the causal structure of the world is actually a network that sits across brains, not within a brain: It is not an aggregate of brain contents, but a pattern of interactions among brains with certain contents. Because it is the contents that are important, and not the specific brains, there are an unlimited number of patterns of interactions that would generate and maintain the same causal beliefs. But the actual justification for those beliefs is more systematic than that. We have seen that it depends on community norms for attributing knowledge and associated institutions of knowledge certification. 
Within a given community, whatever complies with those norms qualifies as knowledge. Some communities may have rather eccentric norms, and regard some things as general knowledge that another community regards as wildeyed conspiracy theory (issues of fake news and slander come to mind). Accordingly, an account of most of our knowledge will need to include the role of such social institutions and norms. I can legitimately claim to know that the sun does not revolve around the earth, that anthropocentric climate change is real, that the Pythagorean Theorem is true, and a great many other things I "learned in school, " even if I cannot myself produce proofs for any of them, or even say precisely what they amount to (Note that this is different from the case in which I could produce a proof if I sat down and tried to work one out). I know these things because they are known by recognized knowledge sources and I got them from socially recognized reliable transmitters of knowledge. This holds even if I can't now remember where I learned it and am not capable of coming up with the evidence or proofs that sit in the heads of others. My indirect and usually very superficial knowledge is anchored in the social network of experts and paths of transmission. Similarly, even the knowledge of experts is typically anchored in large part in that of other experts, as architects rely on results in materials science, industrial design, designers and manufacturers of drafting tables and instruments, and so on. Again, an enormous amount of anyone's knowledge exists only by way of a larger community of cognizers and their interactions. These aspects of knowledge-including knowledge worked out in the privacy of my study or laboratory-are "knowledge" only by virtue of being anchored in a larger social network, independently of the particular neurobiology they are grounded in. Consider a team of researchers writing a manuscript together. A complete account of collaboration and outsourcing involved in joint manuscript writing would have to include not only the brains of the authors, but also those whose evidence or testimony provides the support for claims made in the manuscript. If the manuscript presents findings summarizing a report, then the network would have to include the brains of everybody who wrote the report, or perhaps only those who contributed relevant parts. But how would you decide whose brain is relevant? It would depend on whether relevant knowledge was referenced in the manuscript. In other words, the structure of the knowledge is necessary to determine the relevant source and corresponding neural network to represent that knowledge. The knowledge would therefore not be reducible to a neural network, because identifying the network would depend on the knowledge. Anyone attempting to describe the cross-brain neural network involved in writing a given manuscript, in the relevant processing and transmission (or lack thereof) of various sorts of information from multiple diverse sources, would not know which brains to look at, or what to look for in different brains, without already being able to identify how each bit of information in the manuscript is grounded. But even if we could identify a posteriori the network of brains or profiles of brain activity pertinent to a given piece of collaborative writing, we would be no further in explaining how or why the article came to be written. 
The reason that some ideas enter into a representation is because they elaborate on or integrate the representation in a more or less coherent way. One reason a report gets cited in a manuscript is that it supports or illustrates some informational point. If there is resonance among neural networks, it is because the information they represent is resonant; the neural networks are secondary. The knowledge held by the community is driving; any emergent neural networks are coming along for the ride.

At the beginning of this essay, we stated three widely-held assumptions in cognitive neuroscience that are inconsistent with facts about what and how people know. Our aim is not to diminish the important contributions of cognitive neuroscience. The assumptions we stated do hold for a variety of critical functions: Procedural knowledge is held in individual brains (or at least individual nervous systems in interaction with the world), and people obviously retain some symbolic knowledge in their individual brains. Moreover, common sense is enough to indicate that knowledge at a basic level (Rosch, 1978) is regularly transferred between individuals. But far more symbolic knowledge than people are aware of is held by others, outside the individual's brain. Thus, the purpose of much of cognitive neuroscience, to reduce knowledge to the neural level, is a pipe dream. The fact of communal knowledge creates a key limitation, or boundary condition, for cognitive neuroscience.

BOX 1 | Cognitive neuroscience meets the community of knowledge.

Our understanding of how the world works is limited and we often rely on experts for knowledge and advice. One way that we rely on others is by outsourcing the cognitive work and task of reasoning to experts in our community. For example, we believe that "smoking causes lung cancer" even though many of us have little understanding of why this is the case. Here, we simply appeal to knowledge and expertise that scientists within our community hold. And we behave in a manner that is consistent with knowing this information. We believe that smoking would elevate the risk of lung cancer; if a person were diagnosed with lung cancer, we would suppose they were a smoker; and we choose not to smoke because of the perceived cancer risk. But, again, an explanation for why "smoking causes lung cancer" is something that most of us do not know or understand. Our limited understanding simply relies on experts in the community who have this knowledge; we outsource the cognitive task of knowing and rely on experts for advice. It may appear that this example is a special case and that we rarely outsource our knowledge to others. But, in fact, we do this all the time. Think of how well people understand principles of science, medicine, philosophy, history, and politics, or how modern technology works. We often have very little knowledge ourselves and instead rely on others to understand, think, reason, and decide. This reliance reflects how our individual beliefs are grounded in a community of knowledge. By appealing to the community, we can ground our limited understanding in expert knowledge, scientific conventions, and normative social practices. Thus, the community justifies and gives meaning to our shallow knowledge and beliefs. Without relying on the community, our beliefs would become untethered from the social conventions and scientific evidence that are necessary to support them. It would become unclear, for example, whether "smoking causes lung cancer," bringing into question the truth of our beliefs and the motivation for our actions, and no longer supporting the function that this knowledge serves in guiding our thought and behavior. Thus, to understand the role that knowledge serves in human intelligence, it is necessary to look beyond the individual and to study the community. In this article, we explore the implications of outsourcing for the field of cognitive neuroscience: To what extent is cognitive neuroscience able to study the communal nature of knowledge? How would standard neuroscience methods, such as fMRI or EEG, capture knowledge that is distributed within the community? In the case of outsourcing, knowledge is not represented by the individual and knowledge is not transferred between individuals (i.e., it is the expert(s) who hold the knowledge). Thus, to study outsourcing, cognitive neuroscience would need to establish methods to identify the source of knowledge (i.e., who has the relevant information within the community?) and characterize the socially distributed nature of brain network function (e.g., what is the neural basis of outsourcing and the capacity to refer to knowledge held in the community?). In this article, we identify the challenges this poses for cognitive neuroscience. One challenge is that representing the source of expertise for a given belief is not straightforward because expertise is time and context dependent, may rely on multiple members of the community, and may even depend on experts who are no longer alive. Another challenge is that outsourcing may reflect emergent knowledge that is distributed across the community rather than located within a given expert (e.g., knowledge of how to operate a navy ship is distributed across several critical roles; Hutchins, 1995). Standard methods in cognitive neuroscience, such as fMRI or EEG, are unable to directly assess knowledge distributed in the community because they are limited to examining the brains of individuals (or, at most, very small groups). Thus, we argue that the outsourcing of knowledge to the community cannot be captured by methods in cognitive neuroscience that attempt to localize knowledge within the brain of an individual. We conclude that outsourcing is a central feature of human intelligence that appears to be beyond the reach of cognitive neuroscience.

SUMMARY AND IMPLICATIONS

We have elaborated a theory of the community of knowledge, identifying as primary components outsourcing and collaboration, along with an hypothesis about how we construct epistemic pointers to potential sources of knowledge, whether those sources be people to whom we outsource knowledge or with whom we might collaborate. Our hypothesis places limits on the power of cognitive neuroscience to explain mental functioning (Text Box 1). Cognitive neuroscience has often focused on tasks that, at least on their face, are performed by individuals (cf., Becchio et al., 2010; Schilbach et al., 2013). But the limited predictive power of these tasks for human behavior may reflect the fact that these tasks and methods do not capture normal human thinking, and may explain some of the limited replicability and generalization of fMRI findings (Turner et al., 2019).
People devote themselves to tasks that involve artifacts and representational media designed by other people, to issues created by other people, to ideas developed by and with other people, to actions that involve other people, and of course to learning from sources outside themselves. None of these tasks are amenable to a full accounting from cognitive neuroscience. Furthermore, our appeal to collective knowledge serves to reinforce the multiple realizability problem (Marr, 1982), allowing functional states to operate over complex and dynamic social networks. Whatever neural representations correspond to a bit of knowledge, they are tied to my belief by virtue of a functional relation (a placeholder in my brain that expresses the equivalent of "experts believe this!"), along with the existence of a reliable pedigree for that belief, not simply because my brain is part of a larger neural network. Functional states reflect communal knowledge. Because the human knowledge system is distributed across people, the parts of it that are anchored in others' knowledge are beyond the reach of cognitive neuroscience. In sum, the community of knowledge hypothesis implies that it's a mistake to think of neurobiology as sitting beneath and potentially explaining the cognition that constitutes the emergent thinking in which groups and communities engage. And that's most thinking. It also implies that components of that socially distributed cognitive system cannot in principle be defined in terms of or eliminated in favor of neurobiology. Notice that our argument against reductionism has nothing to do with the nature of consciousness, the target of many such arguments (Searle, 2000;Dennett, 2018). In our view, this is a virtue because consciousness has escaped serious scientific analysis and therefore provides little ground for a serious scientific argument. The representations entailed by collective cognition, in contrast, can be analyzed. In principle, the representations involved in (say) designing a complex object may be abstract in the sense that they reflect interactions among knowledge stored in multiple brains, as well as the physical and virtual worlds, but they are describable nonetheless. As such, the emergent features of human cognition that we are advocating are well-documented and well-established as subjects of fruitful scientific research. Our argument does have positive implications about how to make progress in cognitive neuroscience. To mention only some of the most basic of these, it suggests that our models of information processing for most tasks should focus on communal, not individual, representations. Because most of what we know and reason about is stored outside our heads, our models should not be exclusively about how we represent content, but also about how we represent pointers toward knowledge that is housed elsewhere. Because our actions are joint with others, models of information processing require not only a notion of intention, but a notion of shared intention (Tomasello et al., 2005). Finally, models of judgment that apply to objects of any complexity need to address how we outsource information, not just how we aggregate beliefs and evidence. CONCLUSION The goal of this article is to focus cognitive neuroscientists on important facts about cognitive processing that have been neglected, and that, if attended to, would facilitate the project of cognitive neuroscience. 
Greater understanding of how people collaborate would help reveal how neural processing makes use of group dynamics and affiliation, and it would support a more realistic model of mental activity that acknowledges individual limitations. Greater understanding of how people outsource would help reveal the actual nature and limits of neural representation, and shed light on how people organize information by revealing how they believe it is distributed in the community and the world. And greater appreciation of the emergent nature of knowledge in society would help us recognize the limits of cognitive neuroscience, that the study of the brain alone cannot reveal the representations responsible for activities that involve the community. Thus, we join the call for a new era in cognitive neuroscience, one that seeks to establish explanatory theories of the human mind that recognize the communal nature of knowledge and the need to assess cognitive and neural representations at the level of the community, broadening the scope of research and theory in cognitive neuroscience by recognizing how much of what we think depends on other people.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Conceptualizing University Autonomy and Academic Freedom: Reflections on state of Autonomy and Academic Freedom in Public Universities in Kenya

There have been mixed views on the understanding and implications of the terms university autonomy and academic freedom. Different people have used the terms to imply different things, with academia looking at them as absolute freedom of universities to run their affairs. On the other hand, political leaders have taken these terms as delegated and thus to be exercised to enhance society's social economic value. There is a consensus that university autonomy and academic freedom are a fundamental anchor of universities the world over. Differences in conceptualization of these terms have resulted in friction between political leaders and scholars. This paper therefore aimed at crystallizing the common bases of use and application of the terms with a view of creating a common understanding across the divide in order to reduce the tension. The paper also gives reflections on how university autonomy and academic freedom have been perceived and applied in Kenyan public universities. The debate on university autonomy and academic freedom seems to have evolved over time and appears to have settled on a common conceptualization the world over. The paper observes that the terms university autonomy and academic freedom imply allowing universities adequate latitude to run their affairs in a way that optimizes stakeholders' value.

Introduction

The concept of university autonomy and academic freedom has become synonymous with progressive universities and has equally been regarded as an indispensable character of universities globally (Altbach, 2001). The common character of a university seems to settle on a common anchor, that is, freedom of internal governance, which means a free hand to make decisions on who will teach, what will be taught, who will graduate and what will be researched (Guruz, 2011). According to Haastrup, Ekundayo and Adedokum (2009), it includes: freedom to determine selection of students, appointment and removal of academic staff, determination of content of university education, control of degree content, determination of rates and size of growth, establishment of balance between teaching and research, freedom of publication and allocation of recurrent expenditure. It entails giving universities freedom to appoint key officers, determine the conditions of service of their staff, control their finances and generally regulate themselves as separate legal entities. The selective use of the term "giving universities freedom" suggests that the concept of university autonomy is donated by the giver, insinuating that it should be understood and exercised within certain boundaries. Universities the world over are established through legal instruments which provide legal boundaries beyond which the exercise of institutional autonomy would be deemed a violation. According to Dlamini (1996), autonomy of universities is not absolute because, as a right, no right is absolute. The argument given in this contention is that university autonomy architecture should be that which unites the scholars, society and government.
University autonomy therefore does not insulate universities from external influence but subsists within that environment in a shared exchange of value. Sifuna (2012) reinforces this view by noting that too much autonomy might lead to university education being unresponsive to society, while too much accountability might destroy the necessary academic ethos. University autonomy thus is the institutional authority of universities to govern and manage their processes while maintaining fiduciary and governance accountability to an external authority, usually the government, as the price for protection, financial support and legitimacy.

Academic freedom

The concept of academic freedom has been conceptualized differently, with political leaders holding divergent views from the scholars. Kwame Nkrumah, former President of Ghana, while addressing a university dinner in 1963, was of the view that:

"There is, however, a tendency to use the word academic freedom in another cause, and assert the claim that a university is more or less an institution of learning having no respect or allegiance to the community in which it exists…. This assertion is unsound in principle and objectionable in practice."

Mwalimu Julius Nyerere, the former president of Tanzania, was also of a similar view and held that:

"I fully accept that the task of a university is to seek for truth and that its members should speak the truth as they see it regardless of consequences to themselves. But you will notice the word to 'themselves'. I do not believe that they do this regardless of society…. The students eat the bread and butter of peasants because they have promised a service in the future….." (Dlamini, 2002).

These statements, being associated with renowned African statesmen, represent a common narrative on how the concept of academic freedom was conceptualized by leaders. Academic freedom was thus expected to be exercised with responsibility to society. The scholars, on the other hand, looked at it as somewhat absolute, a responsibility exercised in pursuit of the scholarly profession without restrictions. Phrases such as "without restrictions", "without extraneous control", "without government control" and "unlimited freedom" dominated scholarly dialogue in discussion of the concept of academic freedom in universities. This was taken to imply that universities should be allowed a free hand to pursue knowledge and to determine its application. This divergence of thought brought some tension between political leaders and academia because of differences in conceptualization and application.

Review of the literature shows a common conceptualization of the term by scholars. According to Haastrup, Ekundayo and Adedokum (2009), academic freedom connotes freedom of expression and action, freedom to disseminate information and freedom to conduct research, distribute knowledge and truth without restriction.
It includes complete and unlimited freedom to pursue inquiry and publish its results, professors' independence of thought and discourse within established professional standards (Adres, 2021). Adres (2021) insists that academic freedom is not an individual right to freedom from any constraint but freedom to pursue the scholars' profession according to established standards. Academic freedom does not necessarily mean freedom of speech but freedom of mind, inquiry and expression necessary for proper performance of scholarly professional conduct, and thus forms an essential part of the right to education. Academic freedom therefore provides the liberty required for the advancement of knowledge and the practice of the scholarly profession. It is a right to education that has individual and collective dimensions and is discharged through complex relationships between students, faculty, institutions, governments and society. According to Sifuna (2012), academic freedom is directed more at the individual level than at the institution. The institution provides the structures necessary to guarantee academic freedom. Therefore, academic freedom is considered one aspect of autonomy. Universities thus provide the basic structure necessary for the advancement of academic freedom by the faculty through setting up the structures required to facilitate the pursuit of scholarship. According to Dlamini (1996), the concept of academic freedom is subject to some re-assessment in the light of changing social circumstances because the knowledge generated and transmitted should be to the benefit of society. Conventional academic freedom therefore is a state-regulated autonomy in which the freedom of academics in teaching and research is necessary for the discharge of their normal functions, but these functions are exercised within boundaries controlled by the government and management (Taylor & Francis, 2009). This means that the conception and practice of academic freedom is not absolute but limited by academia's responsibility to society. Academic freedom is therefore granted in the understanding that it enhances the pursuit and application of worthwhile knowledge. This, in turn, becomes the basis of support by society through the funding of academia and universities. In a nutshell, academic freedom is both individual, as a right to education and self-expression, and institutional, as a right of an institution to determine for itself what is going to be taught, who will teach and who is going to be taught. Thus, the role of government shifts from a regulator to an evaluator. Universities, as citadels of academic freedom, are believed to be self-regulating through collegial processes coordinated through their senates, and are conscious of their responsibility to society. Academic freedom therefore must be viewed through the lens of unrestricted pursuit, dissemination and application of knowledge in consciousness of societal expectations.

1.4 Reflections on the state of academic freedom and institutional autonomy in Public Universities in Kenya

Development of universities in Kenya is associated with two main epochs: one where universities were established through their own Acts of parliament, and two, where universities were established under one Act of parliament, the universities Act, 2012 (2012 to date). An overview of university education in Kenya between 1970 and 2012 depicts a sector that demonstrated sustained expansion. This growth was an indication of the importance placed on university education in Kenya's social economic transformation agenda from independence. An overview of the state of university autonomy and academic freedom in public universities in Kenya, however, depicts a picture of universities that experience a significant deficit in institutional autonomy and academic freedom despite the outright provision of institutional autonomy and academic freedom in their enabling legal instruments.

A review of some leaders' positions on university autonomy and academic freedom shows clarity in their appreciation of the importance of these two aspects of university education. During the inauguration of the University of Nairobi in 1970, Mzee Jomo Kenyatta, the first president of Kenya, was clear on the need for university autonomy and academic freedom and noted that:

"any healthy university must be governed more by freedoms than restraints……, while never ignoring or betraying the most precious function of an academic body, this university must gear itself at once and with constructive zeal to all the needs and realities of the nation building" (Sifuna, 2012).

This statement emphasized the consciousness of the government of the importance of university autonomy and academic freedom while at the same time appreciating the responsibility of the university to the nation-building imperative. The individual Acts of parliament establishing the universities of this period equally conferred those universities with institutional autonomy and academic freedom as a common anchor of their mandates. The basic governance structure across universities comprised the Chancellor, University Council, University Management Boards, Senates and Students Organizations. The Chancellor, who was also the head of state, was without exception the head of all universities in Kenya. The president therefore had controlling power over the running of all public universities, including the appointment of the Vice Chancellors and University Councils. It is apparent at this point that, in spite of the clarity and consciousness on the importance of university autonomy and academic freedom, these ideals were not given adequate space to thrive in Kenyan public universities. In a situation where the president was the head of public universities, it is without doubt that university autonomy could only be exercised at the dictates of the president as the Chancellor, meaning that universities in Kenya under this legal regime exercised constrained institutional autonomy and academic freedom. Evidently, the government of the day was extremely sensitive to academic discourse in universities. A number of scholars faced limitations on what to teach, the extent of content, what to research on and what to publish, with a number being arrested for reasons associated with their academic activities (Sifuna, 2012). Ngugi wa Thiong'o was for instance arrested and imprisoned without charge in 1977 after the performance of his Gikuyu language socially critical play, 'I will marry when I want' (Habib, Morrow & Bentely, 2008). Later, Ngugi wa Thiong'o and other scholars like Ali Mazrui and Michere Mugo left the country for fear of their lives. This period was thus marked by a mass exodus of university dons seeking refuge in other countries due to political intolerance associated with their academic philosophical inclinations. University autonomy and academic freedom was therefore, and without doubt, limited despite being guaranteed by the laws establishing post-independence universities in Kenya up to 2012.

The enactment of the universities Act, 2012 marked a paradigm shift in the governance and management of universities in Kenya. All universities were (re)established under one Act of parliament, the universities Act, 2012, marking a major shift in the governance of public universities in Kenya. Unlike in the preceding period, public universities were awarded charters to govern their operations. The award of charters also signified the ceding of control by government over the governance of public universities. The enactment of the universities Act arose from the need, among other reasons, to align the university sector to the Kenya Constitution, 2010. The promulgation of the Kenya Constitution, 2010 marked a major shift in the governance of Kenya as a nation by deepening democratic principles in the governance and management of state affairs. As a consequence of the new constitutional order, universities enjoyed a renewed atmosphere of autonomy and academic freedom expressly provided under Article 33(1) of the Kenya Constitution. Anchored on this provision, the universities Act entrenched university autonomy and academic freedom by allowing universities the right to control their internal affairs and determine their academic pursuits. Under the Act, the president was no longer the Chancellor; university councils took charge of the recruitment and management of staff, including the Vice Chancellors. Government significantly reduced control of academic activities in universities other than the oversight through the Commission for University Education. Although the appointment of Vice Chancellors and their Deputies was done by the Cabinet Secretary for university education, University Councils were largely in control of who was to be appointed. This period was therefore marked by significant university autonomy and academic freedom across public universities.

In spite of this period being associated with significant university autonomy and academic freedom, universities were accused of perpetuating ethnicity through biased appointment of top management, especially the Vice Chancellors and their Deputies (Sifuna, 2012). This raised eyebrows in government quarters, leading to subsequent amendments of the universities Act in 2016 and 2019. The amendment of the universities Act in 2019, in particular, affected the manner in which the top management of public universities was to be appointed. Furthermore, a review of the financial position of most public universities depicts a sector in serious financial challenges, with a number reporting huge pending bills. Public universities' over-reliance on government funding pushes them to a tipping point of erosion of their institutional autonomy and academic freedom. Guruz (2021) avers that, for universities to exercise autonomy and academic freedom, it is the responsibility of the government to assure the financial sustainability of universities while keeping a reasonable distance from their internal governance. Thus, over-reliance of public universities on government funding reflects negatively on their autonomy and academic freedom. The regulatory mandate of the Commission for University Education is also likely to impact negatively on the academic freedom of universities. The regulation by the Commission puts undue control on what is to be taught, who is to teach, who is to be taught and the manner of pursuit of knowledge and enquiry. Available literature shows that this responsibility is universally vested in the University Senate. Guruz (2011) is of the view that the responsibility of government and its agencies should change from regulation to evaluation in order to guarantee academic freedom in universities. Available studies have shown that over-regulation of universities slows down their organizational performance (Davis, 2015; Berube & Ruth, 2015).

Conclusion

Institutional autonomy and academic freedom are a key anchor of universities the world over. There is, however, no absolute university autonomy and academic freedom because universities are agencies of the society. University autonomy and academic freedom therefore are to be exercised in a manner that is accountable and responsible to societal interests. University autonomy and academic freedom do not necessarily mean the absence of government influence but rather a balanced interaction between the government and universities to maximize value to the society. Although there is no empirical evidence on the effect of reduced university autonomy and academic freedom on universities' organizational performance, it is apparently necessary to give universities sufficient latitude to carry out their mandate of pursuit and dissemination of knowledge if their value to society is to be optimized.
Conceptualizing University Autonomy and Academic Freedom: Reflections on state of Autonomy and Academic Freedom in Public Universities in Kenya International The amendment conferred the responsibility to recruit Vice Chancellors and their Deputies to Public Service Commission.The consequence of this shift was that, the control of Council to recruit the top management of universities was severely eroded leading to reduced university autonomy.Although this may not have had direct impact on academic freedom, the control on appointment of Vice Chancellors and their Deputies Journal for InnovationEducation and Research Vol. 11 No. 12 (2023), pg.76 impacted on academic freedom because Vice Chancellors by virtue of their position are academic heads in universities.Thus being direct beneficiaries of a heavily government controlled process; they are likely to be more responsive to the government than to the University Council.Apparently, the university Council members in public universities (as provided in the universities Act) are directly or indirectly government appointees.Circumstantially, this situation is unlikely to allow independent decision making process.It is thus fair to conclude that, the decision making processes in public universities in Kenya are heavy on government control, a state which without prejudice, have undermined university autonomy and academic freedom in public universities.
v3-fos-license
2020-12-17T09:07:03.677Z
2020-12-16T00:00:00.000
229302334
{ "extfieldsofstudy": [ "Materials Science", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0243965&type=printable", "pdf_hash": "a3ce8c3fb42552abe856d8989ace4716b203c8e4", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42280", "s2fieldsofstudy": [ "Medicine" ], "sha1": "8a0e25c2a8fa6c8d007e8241e4cfff4b6c9d357b", "year": 2020 }
pes2o/s2orc
Decontamination of N95 masks for re-use employing 7 widely available sterilization methods The response to the COVID-19 epidemic is generating severe shortages of personal protective equipment around the world. In particular, the supply of N95 respirator masks has become severely depleted, with supplies having to be rationed and health care workers having to use masks for prolonged periods in many countries. We sought to test the ability of 7 different decontamination methods: autoclave treatment, ethylene oxide gassing (ETO), low temperature hydrogen peroxide gas plasma (LT-HPGP) treatment, vaporous hydrogen peroxide (VHP) exposure, peracetic acid dry fogging (PAF), ultraviolet C irradiation (UVCI) and moist heat (MH) treatment to decontaminate a variety of different N95 masks following experimental contamination with SARS-CoV-2 or vesicular stomatitis virus as a surrogate. In addition, we sought to determine whether masks would tolerate repeated cycles of decontamination while maintaining structural and functional integrity. All methods except for UVCI were effective in total elimination of viable virus from treated masks. We found that all respirator masks tolerated at least one cycle of all treatment modalities without structural or functional deterioration as assessed by fit testing; filtration efficiency testing results were mostly similar except that a single cycle of LT-HPGP was associated with failures in 3 of 6 masks assessed. VHP, PAF, UVCI, and MH were associated with preserved mask integrity to a minimum of 10 cycles by both fit and filtration testing. A similar result was shown with ethylene oxide gassing to the maximum 3 cycles tested. Pleated, layered non-woven fabric N95 masks retained integrity in fit testing for at least 10 cycles of autoclaving but the molded N95 masks failed after 1 cycle; filtration testing however was intact to 5 cycles for all masks. The successful application of autoclaving for layered, pleated masks may be of particular use to institutions globally due to the virtually universal accessibility of autoclaves in health care settings. Given the ability to modify widely available heating cabinets on hospital wards in well-resourced settings, the application of moist heat may allow local processing of N95 masks. The response to the COVID-19 epidemic is generating severe shortages of personal protective equipment around the world. In particular, the supply of N95 respirator masks has become severely depleted, with supplies having to be rationed and health care workers having to use masks for prolonged periods in many countries. We sought to test the ability of 7 different decontamination methods: autoclave treatment, ethylene oxide gassing (ETO), low temperature hydrogen peroxide gas plasma (LT-HPGP) treatment, vaporous hydrogen peroxide (VHP) exposure, peracetic acid dry fogging (PAF), ultraviolet C irradiation (UVCI) and moist heat (MH) treatment to decontaminate a variety of different N95 masks following experimental contamination with SARS-CoV-2 or vesicular stomatitis virus as a surrogate. In addition, we sought to determine whether masks would tolerate repeated cycles of decontamination while maintaining structural and functional integrity. All methods except for UVCI were effective in total elimination of viable virus from treated masks. 
We found that all respirator masks tolerated at least one cycle of all treatment modalities without structural or functional deterioration as assessed by fit testing; filtration efficiency testing results were mostly similar except that a single cycle of LT-HPGP was associated with failures in 3 of 6 masks assessed. VHP, PAF, UVCI, and MH were associated with preserved mask integrity to a minimum of 10 cycles by both fit and filtration testing. A similar result was shown with ethylene oxide gassing to the maximum 3 cycles tested. Pleated, layered non-woven fabric N95 masks retained integrity in fit testing for at least 10 cycles of autoclaving but the molded N95 masks failed after 1 cycle; filtration testing however was intact to 5 cycles for all masks. The successful application of autoclaving for layered, pleated masks may be of particular use to institutions globally due to the virtually universal accessibility of autoclaves in health care settings. Given the ability to modify widely available heating cabinets on hospital wards in Introduction The COVID-19 pandemic is proving to be an exceptional stress on hospital and health systems resources around the world. Many countries are experiencing or imminently expecting shortages for a variety of equipment and disposable supplies. A tightening supply of N95 masks that allow for protection from airborne pathogens and aerosolized viruses including SARS-CoV-2 is of particular and immediate concern. Without an adequate supply of N95 masks, health care providers are at substantial risk of contracting COVID-19 during the course of their duties. The occurrence of patient to health care worker (HCW) spread of SARS-CoV-2 at sufficiently high rates would lead to demoralization of the workforce, depletion of HCWs for quarantine and would turn hospitals into hotspots for infection transmission. N95 masks are normally single use products. However, according to news reports, extended use and re-use of N95 masks has occurred or is ongoing in multiple institutions in the United States, Canada, Italy and many other countries [1]. Persistent shortages may increase the reuse of N95 masks globally as the pandemic progresses. We sought to determine whether a range of different N95 masks would retain structural and functional integrity after treatment with widely available decontamination techniques. Concurrently, we also determined the ability of each decontamination technique to effectively inactivate virus on experimentally inoculated masks. Four mask models, including VFlex 1804, Aura 1870, 1860 (3M Company, St. Paul, Minnesota) and AO Safety 1054S (Pleats Plus) Respirator (Aearo Company, Indianapolis) were subjected to all decontamination technologies for the purpose of performance testing as well as quantifying viral inactivation. Two additional respirator models, 3M 8210 and 9210 respirator models (3M Company, St. Paul, Minnesota) were included only for performance testing following decontamination. No valved-type masks were assessed. EtO gas treatment was done using the model 5XLP Steri-Vac Sterilizer/Aerator (3M Company, St. Paul, Minnesota) with 1 hr exposure and 12 hr aeration time. LT-HPGP treatment was performed using a STERRAD 1 100NX sterilizer (Advanced Sterilization Products, Irvine, California). This device generates hydrogen peroxide vapor from 59% liquid H 2 O 2 , which is then electromagnetically excited to a low-temperature plasma state. 
Highly reactive species are generated from the hydrogen peroxide vapor in this state to facilitate faster decontamination of medical equipment. A standard 47 minute cycle with 30 minutes of exposure time to the reactive species was used for the mask treatment. No aeration is required as part of the standard cycle. VHP treatment was performed with the VHP 1 ARD System (Steris, Mentor, OH), it uses 35% liquid H 2 O 2 to generate hydrogen peroxide vapor. Two program cycles were used: A one hour cycle, consisting of 10 min dehumidification, 3 min conditioning (5 g/min), 30 min decontamination (2.2 g/min) and 20 min aeration; or a 5 hour cycle, consisting of 10 min dehumidification, 3 min conditioning (5 g/min), 2 hr decontamination (2.2 g/min), 2 hr dwell and 45 min aeration. Both Program cycles had peak VHP concentrations of 750 ppm. For PAF, a dry fogging system using fogger head and nozzles purchased from Ikeuchi USA (Blue Ash, OH) was used as described elsewhere [2]. One tenth diluted Minncare Cold Sterilant, a liquid peracetic acid (Mar Cor Purification, Skippack, PA) was used. The fogger was run until the relative humidity rose to 80-90%, which required 30 ml of the diluted chemical. The fogger was then turned off and the masks exposed for 1 hr. VHP and PAF treatments were conducted in a 40 ft 3 glovebox (Plas Labs Inc. Lansing, MI). Ultraviolet-C irradiation (UVCI) at a wavelength of 254 nm was delivered using an Asept.2X UV-C disinfection unit (Sanuvox Inc., St. Laurent, QC) at a distance of 86 inches/ 218 cm according to protocol described by Lowe et al [3] modified for local conditions. Assay of UV-C dose, measuring 400 mJ/cm 2 , was performed using a PM100A dosimeter (Thorlabs Co., Newton, NJ) and a S120VC light sensor (Thorlabs Co., Newton, NJ). Assessment of UV exposure was measured on a representative 3M 1870+ Aura above the outer layer, below the first layer, and beneath the thick middle layer of fabric using Photochromatic Ultraviolet-C (UV-C) Dosimeter Disks (Intellego Technologies, Stockholm Sweden). Standard autoclaving was performed using an Amsco Lab 250 model (Steris Life Sciences, Mentor, OH) with a peak temperature of 121˚C for 15 min; total cycle time was 40 min (10 min conditioning/air removal, 15 min exposure, 15 min drying/exhaust). Moist heat treatment (MHT) was applied through the use of an OR-7854 warming cabinet (Imperial Surgical/SurgMed Group, Dorval, QC) set at 70˚C and 75˚C. Humidity was passively increased to 22% by placement of an open 2 gallon stainless steel container filled with hot tap water and a wet cotton towel draped to the base of the container to increase evaporative surface area. Temperature and humidity in the cabinet were confirmed with a model OM-EL-USB-2-Plus logger (Omega Environmental, St-Eustache QC). Effectiveness of decontamination The ability of each decontamination technology to inactivate infectious virus was assessed using experimentally inoculated masks. Small swatches cut from one of each of the 4 respirator models was surface contaminated on the exterior with vesicular stomatitis virus, Indiana serotype (VSV) or SARS-CoV-2 (contaminated group). SARS-CoV-2 was only utilized if the decontamination method was available within the CL3 suite at Canada's National Microbiology Laboratory. VSV was used if the decontamination method was only available outside the CL3 suite. The inoculum was prepared by mixing the virus in a tripartite soil load (bovine serum albumin, tryptone, and mucin) as per ASTM standard to mimic body fluids [4]. 
Ten μl of the resulting viral suspension containing an estimated 6.75 log TCID 50 of VSV or 5.0 log TCID 50 of SARS-CoV-2 was spotted onto the outer surface of each respirator at 3 different positions. Following 1-2 hr of drying, swatches from masks underwent each of the decontamination procedures. Corresponding positive control masks were concurrently spotted with the same viral inoculum, dried under the biosafety cabinet, and processed for virus titer determination to account for the effect of drying on virus recovery. Following decontamination, virus was eluted from the mask material by excising the spotted areas on each mask swatch and transferring each into 1 ml of virus culture medium (DMEM with 2% fetal bovine serum and 1% penicillin-streptomycin). After 10 minutes of soaking and repeated washing of the excised material, the elution media was serially diluted in virus culture medium for evaluation in a fifty-percent tissue culture infective dose (TCID 50 ) assay. 100 μl of each dilution was transferred into triplicate wells of Vero E6 cells (ATCC CRL-1586) seeded 96 well plate. At 48 hours (VSV) or 96 hours (SARS-CoV-2) post-infection, cells were examined for determination of viral titres via observation of cytopathic effect. Titres were expressed as TCID 50 /ml as per the method of Reed and Muench [5]. Results for each treatment indicate mean ± standard deviations of three biological replicates. Impact of decontamination on structural and functional integrity A group of the N95 masks without viral contamination (clean group) underwent multiple decontamination treatments by all the decontamination methods. Afterwards, these respirator masks were visually and tactilely assessed for structural integrity and underwent quantitative fit testing using a TSI PortaCount 8038+ (Shoreview MN, USA) to assess functional integrity. Fit testing was carried out on volunteer staff members who previously successfully fit tested for a given mask model. Masks were considered to be functionally intact if quantitative fit testing resulted in a fit factor of more than 100 for normal and deep breathing exercises [6,7]. For autoclaving, VHP, PAF and UVCI, we assessed integrity after 1, 3, 5 and 10 cycles; for LT-HPGP treatment after 1, 2, 5 and 10 cycles; for EtO gas treatment after 1 and 3 cycles; and for MHT after 3 and 10 cycles. The filtration efficiency evaluation was conducted by SGS Lab (Grass Lake, Michigan, USA), following the ASTM testing conditions for particulate filtration (ASTM F2299 and ASTM F2100). Briefly, masks were individually packaged in labeled paper bags and overnight couriered to the testing facility. At the facility, aqueous suspensions of monodisperse Latex polystyrene beads at 0.1 μm were prepared for the challenge particles. Filtered and dried air was passed through a nebulizer to produce an aerosol containing the suspended Latex beads. The fit test sampling probes (TSI Incorporated, Shoreview, MN, USA) leftover from fit testing were sealed with hot glue. N95 filtering facepiece respirators (FFRs) were attached to a filter holder and placed between inflow and outflow tubes. The aerosol was passed through a charge neutralizer and mixed and diluted with additional preconditioned air to produce the challenge aerosol to be used in the test. 
The aerosol was fed (1.0 scfm) through the FFRs, and filtration efficiency was obtained using two-particle counters (Lasair 1 III 110 Airborne Particle Counter, Particle Measuring Systems 1 , a Spectris company Boulder, CO, USA) connected to the feed stream and filtrate. Pressure differential (DHII-007, Dwyer Instruments International, Michigan City, IN, USA), airflow (M-50SLPM-D/5M, Alicat Scientific, Tucson, AZ, USA), temperature, humidity (HMT330 Humidity and Temperature Meter, Vaisala, Helsinki, Finland) and barometric pressure (PTU200 Transmitter, Vaisala, Helsinki, Finland) were also characterized in the experimental apparatus. Filtration efficiency based on the ASTM methodology was calculated as the persistent fraction of aerosolized 0.1 μm latex microbeads in air before and after passage through the N95 mask [8]. An N95 mask should filter a minimum of 95% of aerosolized particles of that size. Effectiveness of decontamination Apart from UVCI, all the decontamination treatments assessed successfully inactivated the challenge VSV from all of the four mask materials in comparison to the untreated drying controls (Table 1). A demonstrable reduction of greater than six logs of infectious VSV was recorded for those respirator masks. Mask materials inoculated with SARS-CoV-2 had no recoverable virus following autoclaving and peracetic acid dry fogging treatments (Fig 1). While VHP decontamination led to complete inactivation of SARS-CoV-2, an extended cycle time was required compared to that of VSV (Table 1). Complete moist heat inactivation of SARS-CoV-2 was achieved with 3 hrs exposure at 75˚C and 22% relative humidity (RH) ( Table 1); any exposure of SARS-CoV-2 of less than 3 hrs at 75˚C or with an exposure of 3 hrs at 70˚C (both at 22% RH) resulted in a reduction of viral titre with residual recoverable virus (Fig 1). The titer of the starting SARS--CoV-2 virus was slightly lower than that of VSV, therefore the maximum demonstrated reduction was 4.5 logs. We could not validate the effectiveness of EtO and LT-HPGP against SARS-CoV-2 as they were not available at the National Microbiology Laboratory. Although several UVCI doses were assessed, only the highest dose is reported. For UVCI, a substantial and consistent decrease in virus titer (between 4 and 5 log) was shown; however, persistent viable VSV was isolated from each mask. The maximum delivered dose on each side of the masks was 560 mJ/cm2. A total dose of 1120 mJ/cm 2 was delivered to each mask taking into account lamps placed on each side. Lower delivered doses similarly consistently showed persistent viable VSV of UVCI. A supplemental examination using disposable adhesive photochromatic UV-C dosimeter disks ("dots") that exhibit a defined color changes with specific UV-C dose exposure demonstrated a failure of UV-C to penetrate through the middle of 3 layers of the 3M 1870 and AO Pleats Plus masks (see Fig 2). In summary, all decontamination methods (except UVCI) resulted in no growth of virus in decontaminated specimens. Impact of decontamination on structural and functional integrity All decontamination methods resulted in no significant change on visual or tactile inspection. In addition, all masks exhibited preserved structural and functional integrity of masks as assessed by fit testing for at least one cycle of treatment ( Table 2). The 3M 1870 Aura model exhibited some stiffness of straps with more than 16 cumulative hours of MHT. 
The 3M Vflex 1804 and 9210 as well as the AO Safety 1054 models exhibited some mild bleeding of the ink label upon autoclaving (Fig 3). Moist Heat †: 70˚C with 22% relative humidity X 1 hr (VSV) or 75˚C with 22% relative humidity X 3 hrs (SARS-CoV-2). UV-C: Ultraviolet light-C radiation (254 nm wavelength). Autoclaving resulted in functional failure of the 3M 1860 and 8210 (molded) models after the first cycle but the other masks (all pleated, layered fabric models), retained integrity through 10 cycles, the highest number tested. All masks treated with EtO and UVCI retained integrity though 3 and 5 cycles respectively (maximum number of cycles tested). LT-HPGTtreated masks failed fit testing beyond the first cycle (5 of 6 respirators at after 2 cycles; 6 of 6 failures with 5 and 10 cycles) while VHP exposure, PAF and MHT maintained mask integrity through the maximum 10 cycles tested. With a few exceptions, filtration testing demonstrated congruent deficiencies in filtration efficiency (Table 3). A filtration efficiency of � 95% is considered consistent with the N95 designation. One exception was that the fit test failing masks in the autoclave group (molded mask models 3M 1860 and 8210) passed filtration efficiency testing. In addition, while all masks passed fit testing after a single cycle of LT-HPGP, half failed filtration testing at the same point. Discussion The unprecedented nature of the COVID-19 pandemic has revealed previously unrecognized deficiencies in global pandemic preparedness. In particular, the depletion of single-use disposable personal protective equipment has led to prolonged use of gear far beyond standard recommendations and considerable HCW anxiety. The international shortage of N95 masks that protect against exposure to aerosolized virus, which may occur during intubation and other invasive tracheobronchial procedures, is of particular concern given the respiratory nature of the SARS-CoV-2 infections. The shortage of these masks and their use for periods beyond Fig 2. Penetration of UV-C through N95 masks. The degree of UV-C penetration through a layered N95 mask was demonstrated using photochromic UV-C Dosimeter Disks. A 3M 1870+ Aura respirator mask was cut in half, and dosimeter disks were placed directly on top (left disk), half-way under the first fabric layer (top center disk), or halfway under the thick middle layer (lower central disk) of material. A color change from yellow (unexposed) to deep pink was achieved on exposed portions of all disks. The lighter orange color, consistent with reduced UV-C exposure, was revealed on the disk partially covered by the top layer of mask material during UV-C treatment, while the disk placed half beneath the thick middle fabric layer showed no color change from yellow, indicating a lack of exposure to significant UV-C radiation. https://doi.org/10.1371/journal.pone.0243965.g002 recommended may be part of the reason for the reported high incidence of infection seen in health care workers. We sought to determine which decontamination techniques potentially available for use in hospitals might be suitable for the task of sterilizing a variety of N95 masks without compromising their structural or functional integrity. The perfect method would be available globally, scalable and inexpensive. In addition, the method would ideally allow for repeated decontamination cycles. 
Our tests of decontamination effectiveness demonstrate that the majority of decontamination methods assessed were highly effective in sterilizing all the N95 models. No viable virus [3,[9][10][11], this study presents SARS-CoV-2 specific data, which is crucial for evidence driven decision making. Vesicular stomatitis virus, a bullet shaped enveloped, negative-sense RNA virus of the Rhabdoviridae family that commonly infects animals [12], was used as a surrogate for SARS--CoV-2 for decontamination procedures (LT-HPGP, EtO and UVCI) only available at our hospital. We could not validate SARS-CoV-2 against these three technologies because it is a Risk Group 3 virus, which cannot be manipulated outside a CL3 laboratory. Most importantly, our results clearly show that the use of individual N95 masks can potentially be extended several-fold without degradation of functional integrity. VHP [13], PAF [14] and MHT appear to be most effective across all masks with respect to viral inactivation and retention of mask functional integrity. Recent publications have supported the possibility of using VHP and a similar hydrogen peroxide technology, Hydrogen Peroxide Vapor (HPV), for large-scale N95 decontamination strategies [9,15]. However, these studies lack inactivation data against SARS-CoV-2 or a surrogate virus. Here, we demonstrated that these methods allow at least 5 cycles of decontamination for all assessed masks without impairment of structural or functional integrity. The potential use of VHP for N95 decontamination has been widely speculated in the context of COVID-19. Recent preprints assessing VHP to decontaminate experimentally inoculated N-95 masks have shown conflicting results in efficacy against SARS-CoV-2. In one study, complete inactivation of 4.5 logs of viable SARS-CoV-2 was demonstrated following VHP treatment 21 . In the second report, where inoculum was prepared in artificial saliva, the presence of both viral RNA and infectious virus was observed in VHP-treated mask materials [16]. The decreased efficacy of VHP decontamination in the presence of an organic soil load has been noted in a number of studies [17]. Interestingly, while a full kill was achieved using VHP in our study, which also used a soil load, a five hour cycle time was required for complete SARS-CoV-2 inactivation compared to only a single hour for VSV. This extended treatment time should be taken into consideration if turnaround times are critical in a given institution. PAF is an attractive, mobile and affordable decontamination technology [14]. Compared to VHP generating systems, with initial costs in the $75,000 CAD range and requiring annual calibration by company technicians, dry fogging systems on the other hand have significantly lower start-up costs ($5,000-10,000 CAD) and no associated annual maintenance costs. As a result, PAF may be more readily available in poorly resourced settings. This method was able decontaminate all tested masks successfully without affecting their functional integrity up to 10 cycles (maximum cycles tested). Handling and storage of extremely corrosive liquid peracetic acid and the routine cleaning requirement of the nozzles immediately after fogging to prevent clogging are the two disadvantages of this system. Low temperature hydrogen peroxide gas plasma is commonly used in most hospitals for decontamination of high value reusable equipment such as endoscopes [18]. This study demonstrates that N95 masks do not consistently tolerate even one standard (47 min) cycle of treatment. 
All masks did pass fit testing after one cycle of LT-HPGP; however, half of these failed filtration testing. With 2 cycles, quantitative fit and filtration testing was impaired five of six and all 6 masks respectively; after 5 cycles, all were impaired by both testing methods. We postulate that the high concentration of liquid hydrogen peroxide (approximately 60%) and its strongly charged ionized vapor state of this device may have neutralized the filter media's electrostatic charge, which is critical in trapping airborne particulates. Ethylene oxide gas treatment is an older method of decontaminating materials [19]. The process is somewhat more complex than others and significant safety concerns exist in that the gas is flammable, explosive and potentially carcinogenic [20]. A prolonged period of aeration following item exposure to the gas is required to eliminate chemical residue. A very long cycle time of more than 20 hours compared to an hour or less for other decontamination methods is the result. Despite these drawbacks, some institutions in poorly-resourced settings may not have LT-HPGP or VHP. For that reason, our finding that all four mask models assessed tolerate at least 3 cycles of EtO decontamination without significant structural or functional deterioration as measured by fit and filtration testing may be useful. However, we would recommend against the use of this approach unless and until there is advanced testing to ensure that all traces of ethylene oxide and its related byproducts are entirely eliminated with sufficient aeration [21]. UV-C has been recommended as a method for decontamination/sterilization of N95 masks for potential reuse [3,22]. Virus inactivation is mediated by direct UV-C mediated damage to the viral genome. For hard surfaces, UV-C doses of <10 mJ/cm2 have been shown to be effective in generating >99% (2-3 log) reduction in viability of single strand RNA viruses [23]. The question of the required dose for sterilization of porous materials is more problematic. Suggestions of the dose required for viral inactivation efficacy have ranged from 60 mJ/cm 2 to at least 1800 mJ/cm 2 [3,22,[24][25][26]. However, our results suggest that even at doses congruent with those recommended for enveloped RNA viruses, complete sterilization did not occur. Preliminary studies by others have yielded similar results with SARS-CoV-2 [27]. Based on our ancillary data using photochromatic UV-C dosimeter disks (Fig 1), we suggest the inability to totally clear viable virus stems from the fact that virus spotted in 10 μL volumes (consistent with droplets) soak into the respirator mask material deep enough protect viable virus from UV light. Further, the protein-rich nature of the soil load used in our experimental inoculum provided additional protection from UV-penetration. While there is substantial viable virus reduction with UVCI and mask integrity is well maintained, the inability to fully clear masks of viral contamination may be problematic with respect to HCW acceptance of the technique. The technique is otherwise available in most well-resourced hospitals and is scalable. Our data show that MHT, like VHP and PAF, is highly effective for viral decontamination for all respirator models assessed and is well tolerated for repeated cycles (tested to a maximum of 10) with retention of N95 structural and functional integrity as assessed by both fit and filtration efficiency testing. 
The method is generally available in the community (industrial manufacturing convection ovens, bulk sterilization facilities, and industrial meat processing and livestock transport cleaning facilities) and can be relatively easily adopted in hospitals using widely available equipment (e.g. blanket warming cabinets). Another advantage is that this method is scalable and available directly within many hospital wards, allowing for local N95 mask reprocessing and easy re-use by specific individuals. A limitation is that availability is restricted to relatively well-resourced institutions. Several preliminary study publications have recently confirmed the ability of MHT of varying temperature, humidity parameters and durations to clear SARS-CoV-2 and/or preserve respirator integrity [27,28]. Similar work has been done in the past in relation to influenza virus [24,25]. The application of moist heat (pasteurization) has been used to decrease microbial pathogen counts in food products for decades. Studies clearly demonstrate that applications of >55˚C heat can rapidly inactivate most viruses including human coronavirus pathogens such as SARS-CoV (SARS virus) [29] and MERS CoV (MERS virus) [30] as well as a variety of pathogenic domestic animal coronaviruses [31,32]. Available data also suggests that addition of increasing humidity enhances viral inactivation. The mechanism of virus inactivation is not entirely clear but may involve capsid and envelope disruption [33,34]. As expected, standard autoclaving using a peak temperature of 121˚C to denature viral proteins results in complete elimination of viable virus. Surprisingly, however, 4 of the 6 assessed respirator mask models tolerated up to 10 cycles while maintaining structural and functional integrity according to fit testing. Although all masks maintained integrity after one autoclave cycle, the more rigid, molded 3M 1860 and 8210 models demonstrated loss of function with more than a single autoclave cycle. Interestingly, filtration remained intact in these respirators while fit testing failed suggesting the failure was due to issues of structural damage to the ability of the respirator to fit the subject. Similar findings were recently reported by Bopp et al, who demonstrated that the molded 1860s model failed fit testing following a single autoclave cycle of 121˚C for 30 minutes while pleated masks could withstand multiple cycles [35]. Three of the 4 other layered fabric, pleated models retained integrity with up to 10 autoclave cycles (maximum number of cycles tested) with the exception of the 3M 9210 model which showed a modest decrease in filtration efficiency. These findings could be highly relevant to institutions in poorly-resourced areas of the world in that autoclaves would be expected to be available in any established hospital or major medical clinic around the world. Unfortunately, we were unable to examine the differences in mask materials and construction that might contribute to the failure of the 3M 1860 and 8210s model compared to the others due to the proprietary nature of the technology. Single use of N95 masks for each patient encounter is ideal and recommended; unfortunately, the resource stress due to the current COVID-19 crisis has breached this ideal. According to public reporting, extended use and re-use of N95 masks has become common in hospitals in areas where SARS-CoV-2 is high. 
This risks functional failure of N95 masks, spread of infection to wearers and increased risk of virus transmission from health care workers to others. Our data suggests that most decontamination methods other than UVCI are effective in complete virus inactivation for at least one cycle without loss of structural integrity. However, neither LT-HPGP nor EtO gas are recommended at this time due to limited tolerance of N95 masks tested to repeat cycles, prolonged cycle times and/or potential toxicity. Our data show that PAF, VHP, MHT and autoclaving can be used to decontaminate N95 masks through at least 5 cycles without loss of function. Autoclaves can be used on a subset of N95 mask types and may be easily accessed by any healthcare institution globally when N95 mask shortages occur. MHT is also easily accessible in well-resourced settings and is scalable especially if hospital heating cabinets can be used. This simple method should also allow decontamination to remain at the ward level easing the way to re-use of masks by a single individual. Based on our data in combination with a study that showed new N95 respirator masks begin to demonstrate increasing failures after 5 cycles of fit testing (without regular use or decontamination between cycles) [7], a limit of 5 decontamination cycles using PAF, VHP, MHT or autoclave (the last for non-molded masks) decontamination seems to be an appropriate suggestion if reuse is necessary. Although we tested the functionality of decontaminated masks via quantitative fit testing, our testing cannot take into account the respirator's ability to withstand the rough handling that extended wear by health care workers, which stress and perspiration can inflict. Another limitation of this study is that our findings may or may not apply to other types of N95 masks. We also could not distinguish whether failure of fit or failure of filtration efficiency led to the failings of those masks upon treatment by LT-HPGP or autoclave treatments. Nonetheless, it is reassuring that the practice of appropriate decontamination and subsequent re-use of N95 mask should not pose a health risk to the already taxed health care workers. Conclusions Amid the current surge of COVID19 cases, validated decontamination strategies to extend the utility of N95 masks may prove critical in the event of further global shortages. Given successful inactivation of SARS-CoV-2 combined with maintained functional integrity following 5 cycles of decontamination, peracetic acid dry fogging, VHP, autoclaving (for a subset of masks), and moist heat treatment are viable options for decontamination of most models of N95 masks.
v3-fos-license
2020-05-21T00:15:33.154Z
2020-05-08T00:00:00.000
218931952
{ "extfieldsofstudy": [ "Materials Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://downloads.hindawi.com/journals/amse/2020/8575189.pdf", "pdf_hash": "8e53e2cb8b296c5de19d0049e3b72367fe8b8b42", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42281", "s2fieldsofstudy": [ "Materials Science" ], "sha1": "905c1de686e9e25c80017a2fc512a4e017490f59", "year": 2020 }
pes2o/s2orc
Thermal Aging of Menzolit BMC 3100 VSB-Technical University of Ostrava, 17. Listopadu 15/2172, Ostrava 708 33, Czech Republic Center for Basic and Applied Research, Faculty of Informatics andManagement, University of Hradec Kralove, Rokitanskeho 62, Hradec Kralove 500 03, Czech Republic Department of Geology, Faculty of Space and Environmental Science, University of Trier, Trier, Germany Adekunle Ajasin University, Akungba-Akoko, Nigeria Introduction Menzolit BMC 3100 is a composite material that is sparsely described in the scientific literature. Detailed reference is only made to the material when producers give product specifications in offering the material as an advertised item. Such specifications are also limited, providing only specific physical values that characterise the material and rarely publishing details on material composition. Polymer composites play very important roles in modern industry, especially the automotive sector [1][2][3][4][5][6][7][8], in which light weight, high specific modulus, and strength are critical factors put into consideration in production and where these materials find a wide range of application. is paper describes the composition of polymer composites, e.g., Menzolit BMC 3100, and the unique advantages they offer, relative to traditional materials. Details are given on prevailing market situation of polymer composites in Europe, and their special application in the European automotive industry. Specific emphasis is also laid on the manufacturing of spare parts from thermoplastic, thermosetting plastics, and short and continuous fibre-reinforced composites. In [9][10][11][12], the authors offer insights on some recent technological and environmental applications of reinforced polymers. In [13], the author noted that Menzolit BMC 3100 is a special material developed for the automotive industry to produce headlights. is polymer composite material is gradually replacing traditional metal reflectors, mainly due to its simplicity and low cost in serial production. Nevertheless, future use of Menzolit BMC 3100 in modern automobile lighting systems pose an unanswered question, given the current use of "cold" light emitting diodes (LEDs) and laser for the same purpose and thermoplastic materials (e.g., polycarbonate) sufficing as heat resistant. Table 1 shows the properties of Menzolit BMC 3100 (given as average values of the test results) derived from nonpigmented, compression moulded panels at room temperature [13]. Theoretical Background According to [14], composite producers provide a specific dataset useful for reference purpose, especially when designing these materials. Nevertheless, for best datasets, a number of experimental procedures may be necessary. One way is to determine a composite composition from the characteristics of its constituents. While there seems to be extensive use of polymer matrix composites (particular the form reinforced with fibre) in aircraft structures [15] and automobile, accurate prediction of strength of the materials becomes very important [14,16,17]. e authors of studies such as [14] have researched how the reinforced forms of polymer matrix are impacted upon failure. Failure analysis is carried out on part of a composite material made up of polymeric matrix as well as fibre with the aim of understanding the stress-strain situation of the whole composite material (fibre and matrix included) [15]. is relationship is used to structurally predict matrix or fibre failure of the material. 
Depending on the kind of technique adopted, there is the possibility of individually predicting matrix failure and then fibre failure in a separate setup via computational methods [16,17]. Nevertheless, advanced composite characteristics do not only depend on the kind of matrix, but the type of reinforcement also plays a vital role as well as a feature that is not associated to its composition, configuration of reinforcement. In some composite materials, only strengthening fibre concentration can be controlled, and composite dimensions cannot be controlled [18]. A typical example is mouldable short fibre-reinforced thermoplastics. is paper thus offers a relatively comprehensive study of the physical parameters of BMC exposed to heat over a relatively large temperature range from ambient temperature to 300°C. Features of BMC 3100. Menzolit BMC 3100 is typically designed using two unique kinds of moulding: compression moulding and injection moulding [13]. e former involves moulding Menzolit compounds using heat steel moulds with shear edges, as high density moulds generally offer the best results. Furthermore, 20-30 seconds-per-millimetre of wall thickness is preferred for curing when using compression mould. is is important as introduction of Menzolit compound to hot mould must be followed by a quick closing of the press to disallow precuring [13]. Injection moulding often takes place between 140°C and 165°C (standard compounds) and 30°C-40°C (injection unit). Back pressure is just needed to assure constant dosing, injection pressure varies between 50 and 250 bar, and injection time should be as short as possible but long enough to ensure ventilation. A small holding pressure should be applied until the gate is cured. A guide for curing is 10 seconds per millimetre of wall thickness [13]. ermoanalytical Techniques. According to [19], thermal analysis refers to a number of heat measurement techniques in which physical properties of a substance are measured with respect to time or temperature while the temperature of the material, in a specified atmosphere, is either constantly heated or cooled (temperature programme) at a specific desired temperature. In [20], the author reported that thermoanalytical measurement generally depends on how temperature of a material interacts with volume, heat of reaction, and mass. ese methods find a wide range of application in scientific discourse, ranging from pharmaceuticals [21], automobile, and aviation construction materials [22]. A number of thermoanalytical techniques exist in the literature. However, for the purpose of this study, only infrared spectrometry, differential scanning calorimetry, thermomechanical analysis, thermogravimetric analysis, and heat deflection temperature techniques are discussed. Differential scanning calorimetry, commonly abbreviated as DSC, deals with effects from physiochemical processes otherwise known as phase-transition reactions for which specific heat is a major constituent [23], so that heat flow rate into a substance is measured as a function of temperature while the temperature of the substance itself is programmed [24]. Via thermocouple, temperature difference (between sample and reference) is thus measured. ermogravimetric analysis involves changes in mass as a result of material/substance interacting with the atmosphere, evaporation, and decomposition [23,24]. It involves mass measurement of a material as a function of temperature, while subjecting the material to some form of controlled-temperature programmes. 
In [25], the author explained thermomechanical analysis as a measure of stiffness and damping properties of materials in terms of Method ermogravimetric analysis was carried out using a quantitative analysis of BMC composition with the aid of Hi-Res TGA TA Instruments 2950. Furthermore, isothermal analysis was used to determine the heat load time for up to 300 minutes. For degradation of the surface layer, the 1147 cm −1 band corresponding to the C-O-C bond was closely monitored. Brinell hardness follows experimentally with the use of Brinell hardness tester ZWICK/ROELL. Results and Discussion A quantitative analysis of BMC composition ( Figure 1) can be performed with thermogravimetric analysis (Hi-Res TGA TA Instruments 2950) [26,27]. Depending on the weight-totemperature ratio, data on the amount (mass) of the individual components in the sample can be obtained. For proper evaluation of the TGA curve, it is necessary to know the chemistry of ongoing decomposition reactions: To determine the heat load time, an isothermal thermal analysis of a small amount of material was performed (up to 300 minutes). Figure 2 graphs the mass loss (in %) dependence on temperature. Mass loss increased above 240°C. Due to thermal degradation, the organic matrix decomposed in the surface layer. Decomposition is reflected by the loss of specific chemical bonds and the change in intensity of the respective bands in the IR spectrum (FTIR Nicolet iS10 spectrophotometer). e FTIR graph in Figure 3 shows the gradual decrease of bands corresponding to the bonds of the organic matrix of the polymer. To assess degradation of the surface layer, the 1147 cm −1 band corresponding to the C-O-C bond was observed. With increasing temperature and progressive degradation of the surface layer, the polymeric matrix decomposed, resulting in a reduction in the intensity of the C-O-C bonding band. e 1728 cm −1 band corresponds to the vibrational movement of the C�O bond, referred to as "stretching vibration" in acrylate groups of polyester resins, and the 1147 cm −1 band corresponds to the stretching vibration of C-O-C bonds. e results of surface decomposition are plotted in Figure 4. From the weight dependence of the load to the load temperature, the material was stable up to about 250°C with a heat load time of 30 minutes. is corresponds to the evaluation of FTIR surface analysis (see graph in Figure 4). e next part of the experiment describes measurement of Brinell hardness (Brinell hardness tester ZWICK/ ROELL). As the temperature increased, hardness was found to decreased ( Figure 5). From 200°C, hardness decreased almost linearly. Conversely, the measurement time increased. Without heat stress, the bullet was crushed almost instantaneously. By contrast, after a 30-minute load at 300°C, Advances in Materials Science and Engineering the specimen took a bullet for a while before the surface layer was pierced. As the temperature increased, it is likely that a "sintered" layer of material formed on the surface and that this material reduced the hardness of the material. Furthermore, the materials were tested for Charpy impact strength (CEAST Resil 5.5) versus loading temperature applied over 30 min. In this test, we also saw decreasing values with increasing temperature (Figure 6). e increase in impact strength at 300°C/30 minutes could be due to the reinforcement/sintering of the sample surface. e results of the thermomechanical analysis (TMA TA Instruments 2940) did not show any visible dependence on degradation (Figure 7). 
e only difference is visible between the unexposed sample, which had an additional cure reaction, and the most intensively loaded specimen at 300°C/30 minutes, where the CLTE (linear thermal expansion coefficient) was almost linear over the entire temperature range. Determining the glass transition temperature of BMC material with the DSC method (DSC Du Pont Instruments 2910) according to ISO 11357-2 was somewhat problematic. e material data sheet for BMC indicated a glass transition temperature (Tg) of 185°C (according to ISO 11357-2). e determination method was set to meet the specified conditions. e signal response for BMC was very poor. No clearly identifiable transition (probably T g ) was observed around 185°C. Certain transitions can be evaluated at around 90°C and 130°C. However, these were almost unidentifiable during the second heating, so it is possible that they may have been irreversible (temperature history, evaporation of volatile components, and absorbed moisture) ( Figure 8). As the thermal load increased, the response diminished. e results were compared with the measured TA-Instruments thermoset curves [28]. e clear glass transition was not observable in any material, as the temperature range with the occurrence of glass transition always overlapped the cure reaction. e weak signal response may have been due to the presence of a large percentage of inorganic fillers (limestone and glass fibre account for about 80% of the composite at the expense of the polymer matrix, which is only 20%). For this reason, heat is likely to be greatly diffused by the sample, and the reaction of the polymer matrix to increasing cell temperature was weak and the change in heat flow was not detectable. During the first heating (1st step), the DSC curve showed undefined transitions around 90°C and 130°C. e transition stages were very small, so their interpretation is uncertain. e second heating (2nd step) did not show any changes in heat flow (Figure 9). A noticeable change could be seen in the thermal history of the sample exposed to a heat load of 250°C on the DSC first heating curve (1st step). During the second heating (2nd step), these effects were no longer visible, and the DSC curve shows no significant heat flow changes ( Figure 10). e TMA dimension change curve reflects the DSC record of heat flow change [29]. Physical transitions caused nonlinearity of the stretch curve ( Figure 11). e first transition in the temperature range up to 100°C, corresponding to evaporating moisture and volatile components. At temperatures above 150°C, evidence of additional curing of the BMC on both curves is seen. HDT (VICAT-HDT CEAST) measurement was based on ISO 75 for Plastics-Determination of deflection temperature under load. By default, a body of 10 mm thickness was used. e load was determined according to the standard at 0.45 or 1.82 MPa (method A or B). e resulting HDT temperature, according to the standard, indicated the temperature at which the test body had a deflection of 0.32 mm. As the BMC can withstand temperatures above 200°C, which is the maximum operating temperature of the HDT, the BMD had to be >200°C for all samples. e test was therefore designed to read deflection of the test body for the selected temperatures, and the dependence of shape deformation on rising temperature was plotted. Only the test conditions from the standard were used, which were a heating rate of 120°C/hr and a load of 1.82 MPa. 
From the results of the HDT test, it is clear that the sample deflection significantly increased with the applied heat load (Figure 12). Optical surface analysis (KEYENCE digital 3D microscope) showed the following results. As noted above, Menzolit BMC 3100 belongs to the group of "bulk moulding compounds" based on unsaturated polyester resin. The material is glass fibre reinforced. Figure 13 shows differently oriented glass fibres reaching about 500 μm in length and various CaCO3 particles in the range of tens to hundreds of μm. Figure 14 shows CaCO3 particles in the range of tens to hundreds of μm. All are connected by a cross-linked polyester matrix. The filler structure is not preferentially oriented; the fibres are randomly arranged or form bonded bundles associated with the mineral filler and the resin. At a thermal load of 280°C/30 minutes, the surface layer of the material was visibly degraded to a depth of 60-70 μm. After thermal stress of 300°C/30 minutes, the surface layer was already damaged to a depth of about 90 μm. At lower thermal loads, degradation on the cross section was not so visible. This corresponds to the results of the TGA and FTIR analyses, where more degradation occurred at temperatures above 250°C; below this temperature, only a colour change of the surface layer occurs.
Limitations
Although the stability of Menzolit BMC 3100 under the specified thermal conditions was critically examined, the glass transition of the polymeric matrix could not be detected, probably due to the large percentage of inorganic filler; therefore, it was not possible to study the effect of thermal degradation on its value. The thermoanalytical techniques used in this study also have specific limitations, so each method on its own may not be completely effective.
Conclusions
It is important to take note of the shape and quality of the thermoanalytical curves discussed so far [30]. For the selected temperatures or mass losses, interpreted under the demonstrated experimental conditions, consideration of the shapes and quality of the thermoanalytical curves helps to obtain further important information on Menzolit BMC 3100. The fundamentals of the techniques discussed and analysed in this study rest on changes in the temperature profile as a heat flux passes through a material [31], in this case Menzolit BMC 3100. The thermal techniques employed help develop an understanding of the aging of the material, and the experimental results can therefore be compared with numerical simulations as functions of mass, weight, and temperature trends. The influence of thermal aging on the physical properties of the composite material Menzolit BMC 3100 was determined experimentally. Menzolit BMC 3100 is a polycomponent composite material comprising an organic polymer matrix formed by polyester resin and two main inorganic components, a mineral filler of CaCO3 and irregularly arranged glass fibres. A suitable ratio of these components achieves the desired temperature resistance while maintaining sufficient mechanical properties for its intended use in the automotive industry. Menzolit BMC 3100 can be considered a temperature-resistant composite material suitable for use in applications with continuous temperatures up to 200°C. Above this temperature, the material begins to degrade at the surface, especially its organic component (polyester resin). This type of degradation has a negative impact on a variety of its physical properties.
Exposure to temperatures above 200°C reduces the material's hardness, toughness, and shape stability. Degradation increases with higher thermal loads almost linearly for all the observed properties.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
v3-fos-license
2020-09-26T13:05:53.314Z
2020-09-01T00:00:00.000
221914033
{ "extfieldsofstudy": [ "Business", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2076-2615/10/9/1717/pdf", "pdf_hash": "3467abad37b144743a7b6962d97a79ef1e45a56e", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42282", "s2fieldsofstudy": [ "Agricultural And Food Sciences", "Biology" ], "sha1": "b370171c5a5e6e4c2ed81f79be974f6330be171f", "year": 2020 }
pes2o/s2orc
Selection for Favorable Health Traits: A Potential Approach to Cope with Diseases in Farm Animals Simple Summary The losses caused by the outbreak of diseases are disastrous for the animal farming industries. There is an urgent need for an efficient, economical, and permanent disease control method to cope with the adverse effects of diseases in farm animals. In this review, we have proposed that genetic/genomic selection for animals with favorable health traits provide potential methods to eliminate the adverse influences of diseases in farm animals. It is undeniable that the traditional methods for disease control (e.g., vaccination, treatment, and eradication strategy) and several other rising disease control and detection methods (e.g., genome editing, biosensor, and probiotics) are contributing to the prevention of diseases from farm animals, curing infected animals, and detecting sick individuals; however, the limitations and deficiencies of these methods cannot be ignored. Although genetic/genomic selection solutions are facing some challenges, the developments of selection-associated techniques (e.g., high throughput phenotyping and sequencing, and generation of big data) and the advantages of selection over the other disease control methods can provide animal farming industries the ability to cope with the issues caused by diseases through breeding for health traits. Abstract Disease is a global problem for animal farming industries causing tremendous economic losses (>USD 220 billion over the last decade) and serious animal welfare issues. The limitations and deficiencies of current non-selection disease control methods (e.g., vaccination, treatment, eradication strategy, genome editing, and probiotics) make it difficult to effectively, economically, and permanently eliminate the adverse influences of disease in the farm animals. These limitations and deficiencies drive animal breeders to be more concerned and committed to dealing with health problems in farm animals by selecting animals with favorable health traits. Both genetic selection and genomic selection contribute to improving the health of farm animals by selecting certain health traits (e.g., disease tolerance, disease resistance, and immune response), although both of them face some challenges. The objective of this review was to comprehensively review the potential of selecting health traits in coping with issues caused by diseases in farm animals. Within this review, we highlighted that selecting health traits can be applied as a method of disease control to help animal agriculture industries to cope with the adverse influences caused by diseases in farm animals. Certainly, the genetic/genomic selection solution cannot solve all the disease problems in farm animals. Therefore, management, vaccination, culling, medical treatment, and other measures must accompany selection solution to reduce the adverse impact of farm animal diseases on profitability and animal welfare. Introduction Disease control is a global challenge for livestock industries and farmers, as diseases bring tremendous economic losses to farm animal production systems. The animal farming systems in both developed and developing countries are suffering economically from different infectious diseases. Direct economic losses from the outbreaks of disease can account for up to 20% of the revenue in developed countries and up to 50% of the revenue within the livestock sector of the developing world [1]. 
Basically, all farm animal production systems are vulnerable to disease. Many diseases, such as bovine viral diarrhea (BVD), Johne's disease, and bovine respiratory disease complex (BRDC) in cattle farming; bluetongue and sheep pox in sheep farming; porcine reproductive and respiratory syndrome (PRRS), and African swine fever (ASF) in the swine industry; Newcastle disease and Marek's disease in the poultry industry; and Aleutian disease in the mink industry, contribute to economic losses and cause serious animal welfare issues via persistent infection, increased mortality, reduced productivity and reproduction performance, and decreased product quality. Therefore, finding the effective solutions to combat diseases has become a top priority for all livestock industries. To control diseases, many methods have been used with some level of success. Vaccination, medical treatment, and eradication strategy are common methods to control health issues caused by diseases. These methods, however, are facing some bottlenecks, such as the side effects of vaccination [2,3], public concerns about residual drugs and drug resistance after employing medical treatment [4][5][6][7][8][9], and financial cost and high recurrence rate of using eradication strategies [10,11]. Several other methods including genome editing, biosensor, and probiotics provide animal farming industries more options to enhance animal health. Unfortunately, the lack of effective legal oversight (e.g., genome editing) and technological immaturity (e.g., genome editing, probiotics, and biosensor) make these technologies not widely available for controlling diseases of farm animals. This makes seeking alternative solutions one of the main concerns for animal producers. Breeding for favorable health traits is one solution that is highly anticipated. Health traits mainly include health body traits, disease susceptibility traits, and immune system traits. Selecting favorable health traits, which are complex traits influenced by many genes and environmental factors is a powerful tool against disease [12]. Host genetics is significant in controlling the health status of each individual in the same environment. Compared with the other methods of disease control in farm animals, the selection of animals with favorable health traits such as disease resistance, disease tolerance [13], and immunity responses [14] has many advantages. Classical genetic selection and genomic selection are playing important roles in genetically improving health and controlling diseases. Although many challenges exist in both selection methods, the great potential to genetically eradicate diseases from farming systems is still attracting the attention of many animal farming industries. Given the importance of disease in farm animals and the dramatic development of technologies for disease characterization, it is crucial to have a comprehensive and holistic view about challenges and solutions for combating disease in farm animals. Therefore, this review paper was written: (1) to present an overview of common diseases in farm animals and the methods used to control them; (2) to highlight the advantages of coping with diseases by selecting for health traits through genetic or genomic selection, as well as the current stages of selection on major diseases in livestock industries; and (3) to discuss the major challenges of employing health trait selection and the potential solutions that can help improve selection. 
The Influence, Prevalence, and Controlling Issues of Common Diseases in Farm Animals
Disease in farm animals is a significant challenge to farm animal industries worldwide. Cattle, sheep, swine, poultry, and fur-bearing animals such as mink are the most important farm animals for human society and provide the main resource of milk, meat, egg, wool, and fur. Unfortunately, all these important farming systems are vulnerable to disease (Figure 1). In cattle, BVD, Johne's Disease, and BRDC are the most costly and persistent diseases (Table 1). The BVD commonly causes respiratory and reproductive complications in the herd. The prevalence of BVD in Northern Ireland can reach as high as 98.5% in non-vaccinated dairy herds and 98.3% in beef herds [15]. The BVD causes the dairy industry to lose 40 to 100 thousand US dollars per herd in Canada and 10 to 40 million US dollars per million calvings in Europe [16,17]. Culling infected animals and vaccinations are employed as short-term strategies to control this disease; however, they do not effectively eradicate BVD from the dairy farms [18,19]. Johne's disease affects the small intestine of ruminant animals and results in weight loss, diarrhea, decreased fertility, and death. The current strategy of controlling Johne's disease is based on timely detection through Mycobacterium avium ssp. Paratuberculosis enzyme-linked immunosorbent assay testing and then culling infected animals as there is no effective vaccine or treatment. For this reason, Johne's disease is still rampant worldwide [10]. Approximately 68% of dairy operations in the USA were affected by this disease [20]. This disease causes economic losses of 15 million Canadian dollars per year to the dairy industries in Canada, and 200 to 250 million US dollars per year in the USA [21]. The BRDC, which is usually associated with infections of the lungs, causes pneumonia in calves and has been regarded as one of the primary causes of morbidity and mortality in beef farming [22,23]. In the USA, BRDC is the leading natural cause of death in beef cattle and causes financial losses of more than one billion US dollars annually [24]. The main method of controlling BRDC is using antibiotics; however, bacterial pathogen resistance to antibiotics for BRDC has caused the producers, practitioners, and the animal health industry to doubt the sustainability of using antibiotics to control BRDC [25].
In sheep, bluetongue and sheep pox are two common diseases in the sheep industry, causing significant economic losses (Table 1). Bluetongue causes huge economic losses to the sheep industry due to high mortality and morbidity, as well as the trading of animals associated with its outbreak. The prevalence of bluetongue was 19% in Italy [26], but in Sudan, the prevalence has been as high as 94% [27]. In 2007, the cost of the bluetongue disease for sheep breeding farms in the Netherlands was estimated at 12.6 million euros [28]. Vaccination has been regarded as the most viable method for the prevention and eradication of bluetongue disease; however, the expensive cost and potential side effects seriously influence the practicality and effectiveness of bluetongue disease vaccine [29]. Sheep pox is a serious and often fatal infectious disease in sheep and causes a high mortality rate in sheep populations.
In swine, outbreaks of contagious diseases, such as PRRS and ASF, have not only resulted in significant economic losses for swine industries but have also caused animal welfare and environmental concerns (Table 1). The PRRS can cause anorexia, lethargy, hyperemia of the skin, dyspnea, hyperthermia, increased mortality rates, and reduction in average daily gain [34]. Up to 48% of swine farms in Ontario, Canada, were infected by PRRS from 2010 to 2013 [35]. In 2013, the total annual losses due to PRRS in the US were estimated at 664 million US dollars [36]. In Canada, the cost of PRRS was estimated at 130 million Canadian dollars per year [37]. Vaccination is considered the most feasible method for PRRS control; however, the high mutation rate and antigenic variability of the PRRS virus influences the effectiveness of controlling PRRS through vaccination. Meanwhile, the limited protection period of the vaccine against PRRS makes vaccination effective for only short time periods instead of eradicating the virus permanently [38,39]. The ASF is a viral disease that leads to high morbidity and mortality in swine and has drastic influences on global domestic swine production. The absence of an effective vaccine and available methods of disease control causes tremendous economic losses to the infected areas [40]. The ASF was reported in most provinces of China from August 2018 to July 2019 and resulted in an insufficient supply of pork products in China. The overall mean rate of incidence was 12.5%, and the highest incidence rate of 30% occurred in April-May 2019 [41]. In Russia, ASF has resulted in the loss of 800,000 pigs and 0.83-1.25 billion US dollars since its outbreak in 2007 [42].
In poultry, diseases such as Newcastle disease and Marek's disease have caused devastating economic losses worldwide (Table 1). Newcastle disease was regarded as one of the biggest threats to the poultry industry as this disease significantly affected poultry production throughout the world and has accounted for huge economic losses due to high mortality, high morbidity, and trade restrictions [43]. The average prevalence in adult birds was 85% in the breeding and wintering grounds of Michigan, Mississippi, and Wisconsin states of the US, and Ontario province of Canada from 2009 to 2011 [44]. The outbreak of Newcastle disease in California state of the US from 2002 to 2003, caused 3.3 million birds to be culled and cost 200 million US dollars to eradicate the virus [45]. With no effective treatment for Newcastle disease, vaccination is primarily used by the poultry industry to control the spread of disease. The multiple worldwide outbreaks of Newcastle disease in the past few years, however, have shown that the vaccination strategies are not fully effective in controlling this disease in different environmental conditions [46,47]. Marek is another disease that affects the poultry industry and is one of the most ubiquitous highly contagious viral avian infections affecting chicken flocks worldwide. Although the clinical Marek disease is not always apparent in infected flocks, the subclinical decrease in growth rate and egg production can significantly affect the economic benefits of chicken farms [48]. In Iraq, the overall prevalence of Marek disease was 49.5% with a range of 37% to 65% in different areas [49]. Even though mass vaccination is relatively efficient in controlling Marek's disease, the appearance of highly virulent strains that can decrease vaccine immunity results in Marek's disease virus continuing to cause a serious threat to the poultry industry [50,51]. The annual economic losses due to Marek's disease were estimated as high as 1-2 billion US dollars worldwide [52]. As the primary source of fur among all fur industries, mink farming also suffers from the serious economic losses caused by Aleutian disease (Table 1). Aleutian disease, a chronic and persistent viral infection can cause a decrease in litter size (2.5 kits per whelping), high adult and embryonic mortalities (30-100%), and poor fur quality [53][54][55][56]. From 1998 to 2005, 24% to 71% of farmed mink were infected in Nova Scotia province of Canada [57]. The test-and-remove strategy, which is the process used to remove mink tested positive for Aleutian Disease, is employed as the main method to control Aleutian disease because of the ineffective immunoprophylaxis and treatment [58]. The unsatisfactory outcome of the test-and-remove strategy, however, makes Aleutian disease still a major problem and results in tremendous economic losses for the mink industry in North America and Europe [57,59]. The annual economic losses to the mink industry were estimated at approximately ten million US dollars in Denmark during 1984 [60]. Current Methods to Control Diseases in Farm Animals Many disease-controlling methods are contributing to help farm animals cope with diseases. Vaccination, treatment, and test-based culling strategies are common approaches for the livestock industry to treat diseases and reduce the economic losses caused by subsequent health issues. 
Meanwhile, the development of genome editing, biosensor, and probiotics have provided more options for solving the economic and animal welfare issues caused by disease in animal farming systems. These methods have made great contributions to the control of diseases, but their deficiencies exposed in the application process cannot be ignored (Table 2). Vaccination has long been a key tool to reduce disease in livestock and maintain the health and welfare of livestock. Vaccines are contributing to preventing and mitigating many livestock diseases (e.g., Johne's Disease and BRDC in cattle, bluetongue and sheeppox in sheep, PRRS in swine, and Newcastle and Marek's diseases in poultry), which have complex, limited or no treatment options available, as well as reducing the use and misuse of antibiotics [79][80][81][82]. Vaccines play a significant role in preventing livestock diseases, but they also have some unsatisfactory side effects. First, vaccines are only administered to healthy subjects because they aim to prevent, not to treat. This means the vaccine can only protect the animal from disease, instead of eradication of disease [83]. Second, vaccination may cause adverse reactions in vaccinated animals. This means a vaccine may cause some adverse side effects (e.g., anaphylaxis, decrease in production traits) to a recipient [2,3]. Third, mass vaccination campaigns can be very expensive and may be unprofitable for some livestock farmers [84]. Medical Treatments Medical treatment is one of the main typical treatments for coping with diseases in farm animals. Veterinary drugs not only play a crucial role in controlling the diseases-related risks but also make contributions to higher agricultural productivity and a steady livestock supply [85,86]. The overall economic benefit can be increased by using the medical treatments because their applications can increase feed efficiency and performance (growth rate, egg production) for 1% to 15% more than animals that do not receive antibiotics or medical treatments [87]. Although veterinary drugs have played an important role in the field of animal husbandry and agro-industry, the increasing occurrence of residues and resistance have become issues worldwide [4][5][6][7][8][9]. Culling Culling infected animals and carrying strict hygiene practices are also commonly applied to control many highly contagious and inextirpable diseases in farm animals by reducing the transmission of disease. High culling rate and cost of culling make it expensive to control some diseases by culling strategy. The overall annual culling rate of 590 randomly selected dairy herds from New Zealand for BVD was 23.1% in 2002, and the cull cost for each cow was 324 US dollars [61]. About 200,000 pigs were culled from August to October of 2018 due to the outbreak of ASF in China. The direct damage from culling was estimated at about 37.8 million US dollars [88]. For controlling PRRS in Vietnam, the government needs to provide a subsidy to encourage pig farmers to voluntarily cull infected pigs [89]. This strategy, however, still cannot eradicate some of the viruses in some cases, such as Aleutian disease in mink and Johne's disease in dairy cattle [10,11,57]. Many potential reasons such as the variability of the virus genome, ineffectiveness of biosecurity failure, viral transmission from wild animals, and persistent virus on the farms lead to the failure of culling strategies [57,90,91]. 
Genome Editing Genome editing is a powerful technology that can precisely modify the genome of an organism. The main genome editing tools are zinc-finger nucleases, transcription activator-like effector nucleases, and CRISPR/Cas9, which have been successfully employed to many farm animal species including swine, cattle, sheep, and poultry to cope with diseases at affordable costs by creating farm animals with disease-resistant genes [92][93][94][95][96][97][98][99]. There are clear opportunities especially in cases where conventional control options have shown limited success. For PRRS, the in vitro research has shown that the macrophage surface protein CD163 and specifically the scavenger receptor cysteine-rich domain 5 (SRCR5) of the CD163 protein mediate entry of PRRS virus into the host cell [100]. Based on this information, a genome-edited pig with increased resistance to PRRS virus infection could be generated with a disruption to the CD163 gene. The genome-edited pigs created by completely knocking out the CD163 gene [98,101] or by removing only the SRCR5-encoding genome section [102,103] showed resistance to PRRS virus infection. However, such studies did not deliver the complete resistance in the pigs in which the endogenous CD163 gene was edited. The effectiveness of genome editing in disease control will be influenced by many factors, such as the proportion of gene-edited animals in the population and how these gene-edited animals are distributed within and across farms [96]. The disease-specific epidemiological models, however, are missing in helping with defining the exact proportion of gene-edited animals needed for each species/disease. Meanwhile, the limited shelf-life of genome editing needs to be considered. Genome editing shares the potential risk of vaccines, as the efficacy might be time-limited due to the emergence of escape mutants [96]. Especially for some RNA viruses with extremely high mutation rates, like the PRRS virus [104], this concern is justified. So far, no legal regulations have been established to supervise genome-editing animals, and all previous examples are at a preliminary stage. This means that applying this technology to farm animal production still needs a large amount of research and comprehensive monitoring systems to ensure biosafety [96]. On the other hand, public concerns about genome-edited farm animal products are also a factor that cannot be ignored, and directly determines whether genome-edited farm animal products have market value [95]. Biosensor A biosensor is used to quantify physiological, immunological, and behavioral responses of farm animal species through detecting specific interaction results to a change in one or more physico-chemical properties (pH change, electron transfer, mass change, heat transfer, uptake or release of gases or specific ions) [105]. This technology is applied in disease detection and isolation, and health monitoring in cattle, swine, and poultry [106][107][108][109][110][111][112]. Although the biosensor can detect abundant precise data, the data is currently not being effectively transferred into practical information that could be used for the decision-making process in farm animal health management. At the same time, the lack of investment by individual farmers has also limited the widespread application and promotion of this technology [108]. 
Probiotics The use of probiotics is also believed to have great potential to reduce the risk of the diseases of farm animals especially intestinal diseases and to replace the use of some antibiotics [113,114]. Creating a bacterial competition using probiotics, which are live microorganisms that provide a health benefit to the host when administered in adequate amounts, is a strategy to maintain health and prevent and treat infections in animals [114]. Many probiotic products are available for farm animals to improve their health and prevent them from disease [115][116][117]. Lack of statistical analysis, unclear experimental protocols, lack of precise identification of microorganisms, and missing data related to the viability of the organisms make it difficult to assess the studies associated with probiotics based on earlier research [118]. Meanwhile, the lack of an appropriate government regulatory framework and safety studies slow the industrial exploitation of novel probiotic genera and delay the large-scale application of this technology in animal farming [119]. Health Traits in Farm Animals: Definition, Classification, and Components Historical emphasis on farm animal selective breeding programs were only focussed on profitability, and the most easily measured traits such as milk yield in dairy cows or bodyweight in swine. Recently, selection between and within breeds for health traits is attracting more attention from farm animal producers. The farmers realize that only by having a more comprehensive assessment of animal performance, the level of productivity can be maintained or improved [120]. Health traits could simply be the traits related to the health status of animals, and therefore, they could be disease traits or host immune status. According to the Animal Trait Ontology [121,122], health traits are a part of animal welfare traits. The traits could be further divided into three main groups including health body traits, disease susceptibility traits, and immune system traits. For each group, several subgroups are also included such as immune system traits which could include acquired immune system traits and innate immune systems traits. Health traits are defined by the interaction between host genetics and environment which includes the management factors as well as the pathogens. Host genetics play important roles in animals, which decide the health status of each individual in the same environment. Selection for host genetics often involves selection for disease resistance or tolerance as well as their immune systems. To maximize the host genetic potentials, it is important to study the gene by environment interaction. Genomic selection for gene by environment interaction might become more feasible using the big data [123]. Health traits could be reported at different levels as within (individual variations) or between populations. The heritabilities of health traits depend on many factors such as the nature of the traits or the method of records; however, they are known to be low-to-moderate. For instance, the estimated heritabilities for the susceptibility of cattle to Johne's disease infection were ranged from 0.06 to 0.18 [124][125][126]. Therefore, selection for health traits can be achieved but might require quite longer time compared to the other production traits with higher heritabilities. 
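As a rough illustration of why low heritability slows genetic progress, the sketch below applies the classical breeder's equation (response = heritability x selection differential) to a hypothetical low-heritability health trait and a higher-heritability production trait under the same selection intensity. The numerical values are assumptions chosen only to illustrate the point, not estimates reported in this review.

```python
# Minimal sketch of expected response to selection (breeder's equation R = h^2 * S).
# The heritabilities and selection differential below are illustrative assumptions,
# not estimates reported in this review.

def response_per_generation(heritability: float, selection_differential: float) -> float:
    """Expected genetic gain per generation, in the same units as the selection differential."""
    return heritability * selection_differential

selection_differential = 1.0  # phenotypic standard deviations, assumed identical for both traits

for label, h2 in [("health trait (e.g., disease susceptibility)", 0.10),
                  ("production trait (e.g., growth rate)", 0.35)]:
    r = response_per_generation(h2, selection_differential)
    print(f"{label}: h2 = {h2:.2f} -> expected response = {r:.2f} SD per generation")

# With the same selection intensity, the lower-heritability health trait progresses
# roughly 3-4 times more slowly, which is why selection for health traits takes longer.
```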
The Benefits of Selecting Farm Animals with Favorable Health Traits
Genetic improvement of animal health brings many benefits to the farmers, such as increase in production, reduction in the cost of treatment, enhancement of product quality and fertility (Figure 2). Overall, it improves animal welfare as less animals suffer from disease, as well as improving environmental health and human health by reducing the potential disease transmission to humans. Breeding animals with health traits for controlling disease offers several advantages over the other methods of disease control. Selecting health traits, such as disease tolerance, disease resistance, and immune response, can be an inexpensive and relatively simple way to improve animal health, welfare, and productivity. Breeding for health traits appears more and more attractive as the infectious organisms evolve resistance to the drugs and vaccines used to control them, as the costs of treatment and veterinary care increases faster than the value of the animals, and as a result of the huge economic loss caused by the culling of animals with positive disease tests results. Protecting farm animals by vaccination or drug treatment has been the major method used to protect at-risk farm animals; however, the public concern about vaccination or drug treatment is increasing due to the drug residues and the resistance of pathogens and parasites to drugs and vaccines [127]. The intense selection pressure, which evolved into the resistance of parasites to drugs,
The intense selection pressure, which evolved into the resistance of parasites to drugs, can be imposed on the parasite population by treating farm animals with drugs such as antibiotics or anthelmintics [128]. Genetic improvement of the health of farm animals through selecting disease resistance may reduce the need for treatment with antibiotics and reduce the risk of residues in farm animal products. The worldwide control strategies to cope with helminths are entirely based on the frequent use of dewormers, which are anthelmintic drugs [129]. These control strategies have been increasingly regarded as unsustainable given the emergence of multiple drug-resistant parasites [130]. Each time an anthelmintic is employed, the resistant parasites will be selected for and will pass their resistant genes onto the next generation of worms [129]. As a result, breeding for genetic resistance is a significant component in integrated parasite management programs [131]. The genome-wide selection strategies are playing an important role in selecting animals for nematodes resistance traits [129]. The most frequent reason for using antibiotics in lactating dairy cattle is mastitis [132]. In the earlier research of bovine mastitis in Finland, the proportion of coagulase-negative Staphylococci resistant to at least one antibiotic drug increased from 27% in 1988 to 50% in 1995 and from 37% to 64% for S. aureus strains [133]. Significant increases in the antibiotics resistance were also observed in France as tetracycline resistance in Streptococcus uberis isolates increased from 15.7% to 20.4% and third-generation cephalosporin resistance in Escherichia coli isolates increased from 0.4% to 2.4% in the period from 2006 to 2016 [134]. The issues of antibiotic resistance make a permanent improvement in mastitis resistance for cow through selected breeding [135]. Vaccination can be regarded as an alternative strategy for genetic improvement of mastitis; however, a single vaccination can only provide a short-term protection instead of a permanent protection from generation to generation. Although it may be more cost-effective in the short run by using effective low-cost vaccination, genetic improvement in disease resistance has more advantages in the long run [135]. Selection for health traits can reduce the production costs associated with disease control in farm animals [136]. Culling, or test-and-remove strategy, is one of the common approaches to control highly contagious diseases such as PRRS in swine and Aleutian disease in mink. It can cause huge economic loss to farmers due to the expensive cost in replacing a diseased animal and the loss of farmed animals. Bovine tuberculosis, caused by the bacterium Mycobacterium bovis, is an endemic disease with zoonotic potential in many parts of the world, notably in the UK and Ireland [1]. The primary method used to control this disease is compulsory testing of cattle followed by the slaughter of test-positive animals at a total cost exceeding GBP 227 million in the UK and Ireland in 2010-2011 [137]. Highly tolerant animals still have good performance in an environment with significant virus exposure, and thus genetic selection for disease tolerance has the potential to reduce the production costs associated with culling diseased animals and eliminating the disease virus. 
In some developing countries, the majority of poor farmers cannot afford or do not have access to therapeutic and vaccine control, and thus the selection for healthy animals is critical for effective disease control [136]. Selection for animals with health traits (e.g., disease tolerance and disease resistance) has the potentials to bring positive economic impacts to animal farming industries. The disease-resistant animal has the ability to prevent the entry of a pathogen or inhibit the replication of the pathogen [138]. Therefore, selecting the disease-resistant animal has the potential to save the cost of medicine treatment and eliminate the economic losses caused by disease (such as reduced production, high mortality, and low fertility). The disease-tolerant animal has the ability to limit the influence of infection on its health or production performance [138]. Hence, selecting the disease-tolerant animal has the potential to minimize the adverse influence caused by disease during the production period. Methods of Selection for Health Traits Artificial selection is the process used for determining the parents for the breeding program, the number of offspring the selected parents produce, and the duration that the selected parents remain in the breeding population [139]. Artificial selection is commonly used in farm animal selection to maximize the benefits by selecting favorable characteristics and excluding the features that are not sought after by the market. The principle of selection is choosing the individuals with the best sets of alleles as genetic parents to reproduce so that the next generation has more desirable alleles than the current generation. The consequence of successful selection is genetically improving future generations of a population by increasing the proportion of desirable genes in the population over time [139]. The progress of selection for farm animal species can be viewed according to the development of molecular techniques as traditional genetic selection, marker-assisted selection and genomic selection. Traditional Genetic Selection Improvement of farm animals has focused on the selective breeding of individuals with superior phenotypes. With the development of increasingly advanced statistical methods that maximize selection for genetic gain, this simple approach has been spectacularly successful in increasing the quantity of agricultural output. Selections for certain health traits have been done for a long time when the ancient people tried to select animals with better health or resistance to certain diseases during domestication [140]. These selections were purely based on their observation of performance characteristics without any information about molecular genetics. Existing selection techniques, however, still rely on laborious and time-consuming progeny-testing programs and often depend on subjective assessment of the phenotype. The traditional genetic selection breeding program evaluates the genetic potential of animals, which is based on breeding value, for some important traits using phenotype and pedigree information observed on the animal [141]. Genetic selection has significantly increased the production levels of farm animal species. 
The high accuracy of breeding value estimation, the moderate-to-high heritability of most production traits, and the use of large databases containing production records of many farm animal species and their genetic relationships have been found to boost breeding programs based on genetic selection and have become quite successful [142]. The application of genetic selection in commercial farm animals based on aspects of output such as higher growth rate in poultry, less fat percentage rate in swine, and greater milk yield in cows has had significant effects on outputs in the farm animal industries [143]. Genetic selection for health traits has been applied in countries with routine health data records collected for a long time. For instance, health traits have been included in breeding programs in Scandinavian countries since the mid-1970s [144]. Mastitis, ketosis and displaced abomasum diseases records were included in the breeding programs of dairy cattle in Canada [145,146]. The impacts of genetic selection for health traits depend on the nature of the traits (heritability), sample size, methods of recording, the priority of selection (e.g., economic weight in the selection index), environments and species; however, the progress for genetic selection for health traits is often lower than production traits. Marker-Assisted Selection The molecular techniques such as Polymerase Chain Reaction (PCR), Fluorescence In Situ Hybridization (FISH), and Sanger sequencing were developed in the 1980s [147]. These techniques performed the amplification and sequencing of DNA and identification of markers linked to genes for economically important traits such as disease resistance. When available, these markers will provide animal breeders with an objective test system to identify the animals carrying desirable alleles at birth or even earlier such as an embryo or sperm [148]. The method allows the identification of genes or DNA markers for genetically engineering disease resistance and selection of enhanced production traits [148]. Quantitative trait loci (QTL) mapping is the first step to detect chromosomal regions affecting complex traits, which will be used in the fine mapping for identification of DNA markers for traits of interest. The QTL detection experiments in farm animals started in the 1990s when Andersson et al. [149] detected a QTL for fatness on chromosome four in pigs. Many QTLs were detected initially using initial linkage maps in either crossbreds for highly divergent traits of interest, or commercial populations where half-sib families were available. In the early 1990s, QTL experiments were based on resource populations with a few hundred animals; over time resource population size has increased to thousands of animals coupled with an increasingly large number of markers. Consequently, the number of detected QTLs has also increased rapidly in different farm animal species (Table 3). While genetic markers that are linked to the QTL could be used to choose animals for selective breeding programs, the most effective markers are the functional mutations within the trait genes. For instance, the QTL identified for milk yields and components in chromosome 14 of Holstein dairy cattle is linked to the Acyl-CoA: Diacylglycerol Acyltransferase 1 (DGAT1) K232A Polymorphism in Sweden [150], Germany [151], Canada [152], and China [153]. 
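The logic behind single-marker detection, and why individual markers usually explain only a small proportion of phenotypic variance, can be sketched with a small simulation. Everything below (the genotype coding, the causal SNP, and its effect size) is an assumption made for illustration; it is not an analysis of any dataset cited in this review.

```python
# Hypothetical sketch of a single-marker scan, the basic idea behind QTL/marker detection.
# Genotypes and phenotypes are simulated; marker indices and effect sizes are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_animals, n_snps = 500, 20

# SNP genotypes coded 0/1/2 (number of copies of one allele)
genotypes = rng.integers(0, 3, size=(n_animals, n_snps)).astype(float)

# Simulate a phenotype: SNP 7 has a true effect, the rest is polygenic/environmental noise
true_effect = 0.4
phenotype = true_effect * genotypes[:, 7] + rng.normal(0.0, 1.0, n_animals)

# Proportion of phenotypic variance explained (r^2) by each marker, tested one at a time
r2 = np.empty(n_snps)
for j in range(n_snps):
    r = np.corrcoef(genotypes[:, j], phenotype)[0, 1]
    r2[j] = r ** 2

top = int(np.argmax(r2))
print(f"top marker: SNP {top}, r^2 = {r2[top]:.3f}")  # expected to be SNP 7
print(f"median r^2 of the remaining markers: {np.median(np.delete(r2, top)):.4f}")
```

Even the truly causal marker in this toy example explains only around ten percent of the phenotypic variance, which mirrors the limitation of marker-assisted selection for quantitative traits noted above.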
Strategies to identify markers for traits and the application of these markers are described with reference to examples of loci that control a range of different traits [154]. Detection of QTLs, and genes involving the traits of interest helps to develop the marker-assisted selection programs [155]. For example, Ruane and Colleau [156] found that the application of marker-assisted selection could increase 6% to 15% of the selection response for milk production in cattle that used multiple ovulation and embryo transfer in the first six generations of selection. However, most of the detected genes and markers only explain a small proportion of phenotypic variances, and therefore, they are not effective for the selection of quantitative traits. For instance, all genetic markers of 42k genotyping panel could only explain about 11% of phenotypic variation in mortality due to Marek's disease virus infection in layers [157]. Genomic Selection High-throughput genomic technologies especially high-throughput single nucleotide polymorphism (SNP) genotyping, genotype-by-sequencing, as well as the whole genome sequencing methods, have been commercially available for more than ten years. Genomic prediction/selection was the biggest change in the artificial selection of livestock species by adapting high-throughput genotyping technologies in the farm animal sector [158]. Genomic selection refers to making breeding decisions based on genomic estimated breeding values (GEBVs) obtained from SNP effects using various prediction methods [158]. The main approach for genomic selection is to determine the SNP effects from a reference population consisting of a subset of animals with both SNP genotypes and phenotypes for traits of interest, then to use the SNP effects to compute the breeding values (genetic merit) for other genotyped animals that are not yet phenotyped. The basic statistical method used for genomic prediction is similar to the traditional best linear unbiased prediction (BLUP) method that has traditionally been used in animal breeding for a long time, except that the relationship matrix is computed based on SNP genotypes or genomic information. The major advantages of genomic selection are the higher prediction accuracy (compared to traditional EBVs obtained using pedigree information) and the shorter generation interval [159]. The accuracy of GEBVs depends on the size of the reference population used to derive prediction equations, the heritability of the trait, the extent of relationships between selection candidates and the reference population, the relationship between test and reference populations, number of SNPs, number of loci affecting the traits as well as how close assumptions in genomic prediction methods are to the truth [160,161]. Genomic selection has been successfully applied in the farm animal sections and has accelerated the genetic gain not only for the production traits but also for many health traits [162]. Selection for Different Types of Health Traits 3.4.1. Selection for Disease Response Traits (Resistance, Tolerance, and Resilience) Disease tolerance and resistance are the most common targeted disease response traits in farm animal breeding programs, as they are natural and distinct mechanisms of a host's response to infectious pathogens and could be targeted for genetic improvement [13]. Resistance is the ability of a host to prevent the entry of a pathogen or inhibit the replication of the pathogen. 
Tolerance is an ability of a host to limit the influence of an infection on the host's health or production performance without interfering with the life cycle of the pathogen [138]. To date, most efforts to control infectious disease focus on selecting disease resistance farm animals to improve the ability of the host to fight disease. The heritable differences of disease resistance between animals lead to opportunities to breed animals for enhanced resistance to the disease [163]. In cattle, the major focus on health traits selection is for mastitis resistance. Many different approaches have been proposed in order to increase the possibility of selection for mastitis [164]. Up to date, 2382 QTLs have been identified for mastitis resistance in dairy cows (Animal QTL Database, https: //www.animalgenome.org/cgi-bin/QTLdb/BT/nscape?isID=1439). Not only increasing the number of QTLs, the genetic and genomic selection for mastitis has also achieved a certain level of success (reviewed by Weigel and Shook, [165]) because of the increasing accuracy of prediction for mastitis or the inclusion of different new methods of identification of mastitis incidence in the selection index. For instance, the accuracy of genomic prediction could reach as high as 0.50 to 0.55 for mastitis infection depending on the models [166]. Unlike mastitis, less progress is reported for selection for Johne's disease and BRDC resistance, which might be due to the lack of accurate measurements and their less serious impact on production. The heritabilities for Johne's disease (range from 0.07 to 0.16) and BRDC (range from 0.07 to 0.19) resistance and differences among breeds have been documented in the previous studies [20,124,167,168]. These heritability estimates and significant estimates of additive genetic variances indicate that computing traditional phenotype-based genetic evaluations for resistance to Johne's disease and BRDC is feasible in cattle populations. In swine, 43 QTLs for PRRS resistance have been mapped to 12 chromosomes (Animal QTL Database, https: //www.animalgenome.org/cgi-bin/QTLdb/SS/traitmap?trait_ID=779). The major QTL region was located on chromosome four (SSC4) that explained 16% of the genetic variance of PRRS virus load with a frequency for the favorable allele of 0.16 and a heritability of 0.30 [169]. In poultry, a number of QTLs associated with Marek's disease resistance have been reported in various lines and breeds of chicken using SNP or microsatellite markers since 1998 [170][171][172][173][174]. The research focus associated with selecting health traits has expanded to increase the host's tolerance to reduce the harmful effects of infection on health and performance [13,175]. Genetic selections of disease tolerance are rare, as the genetics of disease tolerance and its measurement are more difficult to elucidate than disease resistance in farm animals [1,176]. Growing evidence, however, indicates the potential for genomic selection of disease tolerance. Genomic studies have been able to map the QTL for tolerance traits as Zanella et al. [177] identified a number of QTLs for Johne's disease and Hanotte et al. [178] detected 16 QTLs for trypanosomosis, in the cross of N'Dama and Boran cattle. Meanwhile, the results of genomic prediction (accuracy of 0.38) for facial eczema suggested that genomic selection for the facial eczema disease tolerance has the potential to help the New Zealand sheep industry to cope with the issues caused by facial eczema [179]. 
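For readers unfamiliar with how such genomic predictions are obtained, a minimal SNP-BLUP (ridge-regression) sketch of the genomic selection idea described earlier is given below: SNP effects are estimated in a simulated reference population with both genotypes and phenotypes, and GEBVs are then computed for genotyped candidates without phenotypes. All data are simulated and the shrinkage parameter follows a textbook-style assumption; this is not the pipeline used in the studies cited above.

```python
# Minimal SNP-BLUP / ridge-regression sketch of genomic prediction (GEBV estimation).
# All data are simulated; population sizes, marker count, and heritability are assumptions.
import numpy as np

rng = np.random.default_rng(42)
n_ref, n_cand, n_snps = 400, 100, 1000
h2 = 0.3  # assumed heritability of the health trait

# Centred genotype matrices for the reference (phenotyped) and candidate animals
Z_all = rng.integers(0, 3, size=(n_ref + n_cand, n_snps)).astype(float)
Z_all -= Z_all.mean(axis=0)
Z_ref, Z_cand = Z_all[:n_ref], Z_all[n_ref:]

# Simulate true SNP effects and phenotypes for the reference population
true_u = rng.normal(0.0, np.sqrt(h2 / n_snps), n_snps)
g_all = Z_all @ true_u
y_ref = g_all[:n_ref] + rng.normal(0.0, np.sqrt(1.0 - h2), n_ref)

# Ridge solution: (Z'Z + lambda * I) u_hat = Z'y, with lambda = n_snps * (1 - h2) / h2
lam = n_snps * (1.0 - h2) / h2
u_hat = np.linalg.solve(Z_ref.T @ Z_ref + lam * np.eye(n_snps), Z_ref.T @ y_ref)

# GEBVs for candidates that have genotypes but no phenotypes
gebv_cand = Z_cand @ u_hat
accuracy = np.corrcoef(gebv_cand, g_all[n_ref:])[0, 1]
print(f"prediction accuracy (corr. of GEBV with simulated true genetic merit): {accuracy:.2f}")
```

GBLUP based on a genomic relationship matrix yields equivalent predictions to this SNP-effect formulation, and the achievable accuracy depends on the factors listed earlier, such as reference population size, heritability, relatedness, and marker density.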
Although both resistance and tolerance traits may be under genetic control and could thus be targeted for genetic improvement, selecting tolerance for disease may have some advantages over selecting disease resistance [176]. Firstly, the resistance ability of a host can limit the replication of a pathogen within the host, and therefore, selecting host resistance has a potential to increase the selection advantages on pathogen strains that can withstand host resistance mechanisms and eventually result in a loss of selection advantage of the host [180,181]. It is the potential pitfall for a long-term breeding strategy which focuses on disease resistance if the disease virus has a high mutation rate such as the PRRS virus in swine [182]. It has been theoretically proposed that selecting tolerance might not motivate such selection pressure on the pathogen [181]. Secondly, compared with the resistance mechanisms which directly influence the life-cycle of the pathogen, improving host tolerance has the potential to provide cross-protection against other strains of the virus, or other prevalent infectious agents due to the mechanisms of tolerance which primarily target host-intrinsic damage prevention or repair mechanisms [175,183,184]. Resilience is another health trait that is attracting the attention of animal breeders. Generally, resilience is an ability of an animal either to minimize the influences caused by disturbances or to return to the body condition prior to exposure of a disturbance [185]. The capability of taking care of a larger number of animals is one of the requirements for the intensification of farm animal production. Selecting resilient animals can improve this capability of the farm animal industries because resilient animals are healthy and easy-to-care-for animals that need less attention time [186]. On the other hand, compared to the direct selection based on disease tolerance and resistance, the selection based on resilience is a more pragmatic way of keeping healthy animals, because it does not need the records on pathogen burden, which is the amount of pathogen in the animal's body [187][188][189]. Resilience, however, is not yet included in breeding goals due to the difficulty of phenotyping [13]. Fortunately, the current developments on the big data collection and new disease resilience indicators defined based on these data provide great opportunities to breed for improved resilience in livestock [190]. Selections for Immune Response Traits Immunity response traits are also important health traits for animal breeders to select for improving the farm animals' ability to withstand disease. The immune system is important to control infections and diseases. The immune response traits have been recommended to be selected for decreasing the incidence and impact of the disease in farm animals [14,191]. In Holstein cattle, the lower occurrence of mastitis improved response to the commercial vaccine, and increased milk and colostrum quality are all observed in cows with superior or high immunity response [118]. Consequently, improving the inherent ability to cope with the diseases in dairy cattle through genetic selection for superior or high immunity response is feasible [192]. In cattle, the High Immune Response (HIR™) and the Immunity+, which are used to identify and select animals with naturally optimized immune responses, have been applied in the genetic selection of cattle for improved immunity and health [14]. 
In swine, the total and differential numbers of leukocytes, expression levels of swine leukocyte antigens I and II, and serum concentrations of IgG and haptoglobin are immunity traits that have been demonstrated to have additive genetic variation. These immunity traits, therefore, have the potential to be used as criteria to improve the selection of pigs for coping with clinical and subclinical diseases [193]. In poultry, the presence of genetic variability in immune response traits and the discovery of SNPs associated with immune response traits indicate that genetically enhancing antibody response and resistance to parasitism is feasible through genomic selection [194]. Challenges in the Selection of Health Traits Health traits, such as disease resistance, disease tolerance, and immunity response level are usually quantitative traits which are influenced by many genetic and environmental factors. Although genetic selection has significantly increased the production traits in farm animal species such as higher growth rate, less fatness, and greater milk yield [143], selection for health traits is much more complicated and faces some challenging obstacles. The potential problems in selection for health traits can be classified under desirability, feasibility and sustainability [195]. Desirability The desirability describes the importance of the disease relative to the other diseases or production traits. The correlations between health traits and economic traits are often negative, which means the health traits are potentially genetically antagonistic to production traits [196,197]. Milk yield in dairy cattle has unfavorable correlations with many disease response traits [198,199]. The genetic correlations between mastitis and milk production or high somatic cell score and milk production are moderate and positive [200]. In poultry, genetic selection for greater body weight can lead to decreased immunity to fowl cholera and Newcastle disease [201]. The opposite results, however, also occur in some research. For example, van der Most et al. [202] stated that selection for growth in poultry can compromise the immune function, while the selection for immune function does not consistently affect growth. Therefore, identifying the genetic correlations between health traits and production traits in farm animals is an important aspect of health traits selection. Applying the economic selection index is one of the solutions to deal with the antagonistic genetic correlation between traits. In 1943, Hazel [203] first presented the aggregate genotype, which was also called net merit of animals as a linear combination of breeding values for each trait weighted by the economic value of the traits. After that, the economic selection index for multi-trait selection has been used in animal breeding research fields and employed in animal agriculture industries. The breeding objective can be defined as the aggregate breeding value expressed by profit or economic efficiency, and it is the overall goal of breeding programs to increase the profits or economic efficiency for breeders and/or producers. In this way, multi-trait selection with the economic selection index can minimize the adverse influences caused by the antagonistic genetic correlations between target traits to achieve the overall goal of breeding programs [204]. Feasibility Feasibility accounts for the tools available with which to perform the selection. 
Feasibility
Feasibility concerns the tools available with which to perform the selection. The success of selection for health traits depends strongly on correctly identifying the phenotype for traits associated with the host's ability to withstand infectious disease, and accurately identifying such phenotypes is expensive and difficult. Extensive data recording is required to enable an accurate genetic evaluation, and long-term recording of large amounts of phenotypic and progeny data carries high labor costs [12]. In a mixed population of infected and healthy individuals, it is not correct to assume that an animal with good performance has favorable health traits, nor that sick animals are genetically susceptible [205]. Some susceptible animals still perform well simply because they have not been sufficiently exposed to the pathogens, while an animal that performs well without clinical symptoms may carry a sub-clinical infection and act as a pathogen carrier. The clinical expression of a disease can also be confounded by infection with one or more similar diseases; pneumonia, for example, can be confused with pulmonary adenomatosis, bronchitis, and pleuritis. Meanwhile, diagnosing a disease accurately and specifically is costly and time-consuming [196].
Sustainability
Sustainability means that the enhanced resistance to infectious disease in farms or flocks remains stable over a long period, especially given that pathogens often evolve faster than their hosts [195]. The long-term success of selection involves not only choosing the best disease-resistant animals but also management systems able to cope with constant changes in the farming environment. For instance, hotter environments caused by global warming can impair production and reproductive performance, metabolic and health status, and immune response [206]. Climate change also alters existing pathogens or gives rise to novel ones, requiring producers to constantly adopt new methods and treatments for their animals. Genomic selection for robustness and fitness traits could be a solution to this challenge [190,207].
High-Throughput Phenotyping and Sequencing, and Generation of Big Data
Big data is a mix of different sources of data (structured and unstructured) that comprises a large volume of information [208]. Its major characteristics include volume, velocity, variety, variability, veracity, validity, and volatility [209]. Big data has been adopted in the farm animal sector through precision farming [210], biosensors [211], electronic feeding stations, and automatic milking systems [123], and it is also important for infectious disease surveillance and modeling [190,212]. It is clear that big data generated from high-throughput phenotyping will give unprecedented opportunities for combating diseases and selecting healthy animals [213,214]. For example, mastitis and claw health can be recorded via high-throughput phenotyping devices such as real-time biosensors [215,216]; a minimal sketch of turning such longitudinal records into a resilience-type indicator is given below. The use of big data for animal health care, however, requires careful handling of the data [217] and selection of appropriate statistical methods [218,219]. High-throughput sequencing data, such as genomics, transcriptomics, proteomics, and epigenomics data, have also been adopted to improve animal health [220,221], as they can help in understanding the biology of disease, computing EBVs, and pinpointing biomarkers.
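One way such sensor streams can be turned into selectable phenotypes is to summarize, per animal, how strongly daily records deviate from that animal's expected trajectory, with smaller and shorter-lived deviations read as greater resilience. The sketch below illustrates this idea on simulated daily milk-yield records; the expected-yield model and the two summaries are simplifying assumptions made for the example rather than a specific published indicator.

```python
import numpy as np

rng = np.random.default_rng(1)
days = np.arange(100)

# Simulated daily milk yield: a smooth individual baseline plus noise,
# with a dip around day 50 mimicking a disease disturbance.
baseline = 30.0 - 0.02 * days
yield_kg = baseline + rng.normal(0.0, 0.8, size=days.size)
yield_kg[48:58] -= np.linspace(6.0, 0.5, 10)

# Expected trajectory estimated from the animal's own data (here: a linear fit).
trend = np.polyval(np.polyfit(days, yield_kg, deg=1), days)
deviations = yield_kg - trend

# Two simple resilience-type summaries: variability of the deviations and the
# cumulative loss on days the animal under-performs its own trend.
ln_var = np.log(np.var(deviations))
cumulative_loss = -deviations[deviations < 0].sum()

print(f"log-variance of deviations: {ln_var:.2f}")
print(f"cumulative production loss (kg): {cumulative_loss:.1f}")
```

Lower values of such summaries would be interpreted as higher resilience, and once the indicator is defined consistently across animals it can enter genetic evaluation like any other recorded trait.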
Data Sharing and International Corporations
Data sharing and international corporations can play crucial roles in the selection of health traits, even when selection takes place locally. The main reason is that many farm animal diseases are transboundary: an outbreak can affect farms in other countries, as the outbreaks of Avian Influenza Virus that have caused significant losses in many nations worldwide illustrate. Information sharing plays a crucial role in disease control for nations on the same continent, especially for developing countries [222]. It is also important to have a standard protocol for recording the incidence, progression, and consequences of diseases so that the data can be used effectively. In cattle, for instance, the International Committee for Animal Recording provides a recording guideline covering 1000 diagnoses that can be used toward the genetic improvement of health traits (ICAR GUIDELINES, https://www.icar.org/index.php/icar-recording-guidelines/). International corporations could work together in a joint effort to phenotype or genotype animals and diseases, enlarging the available resources and enhancing the capacity to deal with disease. For example, the use of automatic milking systems across different nations could improve the modeling of mastitis infections [165], and the sharing of omics data could support the development of statistical methods and a better understanding of disease biology [223]. The current 1000 Bull Genomes Project is a success story in sharing genomic data to improve the prediction accuracy of genomic EBVs [224]. It is worth noting that increasing cloud storage and computing capacity could further support such data sharing and joint efforts.
Conclusions
Selecting favorable health traits to cope with diseases in farm animals has increasingly become an attractive focus of the animal farming industries. Given the limitations and deficiencies of current non-selection disease control methods and the advantages of genetic selection over them, breeding for health traits is a promising solution for the sustainable development of livestock farming. Although remaining challenges regarding the accuracy of phenotyping and the low heritability of disease traits hinder the progress of breeding for health traits, advances in sequencing techniques and the affordable cost of genotyping make selective breeding an increasingly beneficial method of disease control, although they also demand more storage and computing power. With the development of cloud computing, big data analyses increase the feasibility of selection for animal health traits. Growing threats such as climate change have altered the environment in ways that require international collaboration to deal with disease on a global scale. Eventually, smart farming with healthy animals and clean environments will be achieved through sustainable selection for favorable health traits. Genetic and genomic selection, however, cannot address all the problems caused by disease in farm animals. It is therefore necessary to accompany selection approaches with other disease control and monitoring methods (e.g., vaccination, culling strategies, biosensors, and genome editing) to help the animal agriculture industries reduce the economic losses and animal welfare issues caused by farm animal diseases.
Effectiveness of intensive stand-alone smoking cessation interventions for individuals with diabetes: A systematic review and intervention component analysis
INTRODUCTION Tobacco smoking poses a significant threat to the health of individuals living with diabetes. Intensive stand-alone smoking cessation interventions, such as multiple or long (>20 minutes) behavioral support sessions focused solely on smoking cessation, with or without the use of pharmacotherapy, increase abstinence when compared to brief advice or usual care in the general population. However, there is limited evidence so far for recommending the use of such interventions amongst individuals with diabetes. This study aimed to assess the effectiveness of intensive stand-alone smoking cessation interventions for individuals living with diabetes and to identify their critical features. METHODS A systematic review design with the addition of a pragmatic intervention component analysis using narrative methods was adopted. The key terms 'diabetes mellitus' and 'smoking cessation' and their synonyms were searched in 15 databases in May 2022. Randomized controlled trials which assessed the effectiveness of intensive stand-alone smoking cessation interventions by comparing them to controls, specifically amongst individuals with diabetes, were included. RESULTS A total of 15 articles met the inclusion criteria. Generally, the identified studies reported on the delivery of a multi-component behavioral support smoking cessation intervention for individuals with type 1 and type 2 diabetes, providing biochemically verified smoking abstinence rates at six-month follow-up. The overall risk of bias of most studies was judged to be of some concern. Despite inconsistent findings across the identified studies, interventions consisting of three to four sessions, lasting more than 20 minutes each, were found to be more likely to be associated with smoking cessation success. The additional use of visual aids depicting diabetes-related complications may also be useful. CONCLUSIONS This review provides evidence-based smoking cessation recommendations for use by individuals with diabetes. Nonetheless, given that the findings of some studies were found to be possibly at risk of bias, further research to establish the validity of the provided recommendations is suggested.
[Supplementary material accompanying the article: the completed PRISMA 2020 checklist with item-by-item reporting locations (template at http://www.prisma-statement.org/) and the full database search strategy, including the smoking cessation keyword string and per-step record counts.]
Exploring Airway Diseases by NMR-Based Metabonomics: A Review of Application to Exhaled Breath Condensate There is increasing evidence that biomarkers of exhaled gases or exhaled breath condensate (EBC) may help in detecting abnormalities in respiratory diseases mirroring increased, oxidative stress, airways inflammation and endothelial dysfunction. Beside the traditional techniques to investigate biomarker profiles, “omics” sciences have raised interest in the clinical field as potentially improving disease phenotyping. In particular, metabonomics appears to be an important tool to gain qualitative and quantitative information on low-molecular weight metabolites present in cells, tissues, and fluids. Here, we review the potential use of EBC as a suitable matrix for metabonomic studies using nuclear magnetic resonance (NMR) spectroscopy. By using this approach in airway diseases, it is now possible to separate specific EBC profiles, with implication in disease phenotyping and personalized therapy. Introduction Metabonomics is "the quantitative measurement of the dynamic multiparametric metabolic response of living systems to pathophysiological stimuli or genetic modification" [1] due to any exposure (including drug administration), lifestyle and environmental stress. It, therefore, appears to be a powerful tool to monitor possible changes in metabolic pathways, and measure the levels of biochemical molecules generated in a living system. Metabolites are small molecules with molecular mass ≤1 kD [2] and are the end products of cellular activity. Observation of changes in metabolite concentrations may reveal the range of biochemical effects induced by a disease condition or its therapeutic intervention. The metabonomic analysis has two major potential applications, with implications in early diagnosis and disease phenotyping. It may also allow the recognition of unexpected or even unknown metabolites to formulate new pathophysiological hypotheses [3]. Moreover, the identification of individual metabolic characteristics could predict personal drug effectiveness and/or toxicity [4,5]. The application of metabonomic analysis in chronic airway diseases has not been fully explored, but it holds a valid background. Several airway diseases, such as asthma or chronic obstructive pulmonary disease (COPD), which are largely spread in the population, cannot be qualified by a single biomarker and need a system biology analysis. Furthermore, other airway diseases such as cystic fibrosis (CF), although characterized by genetic abnormality, might be fruitfully investigated. Finally, the respiratory tract offers a natural matrix, the exhaled breath, which appears to be noteworthy for metabonomic analysis. Exhaled breath contains many different molecular species such as small inorganic molecules like nitric oxide (NO) or carbon monoxide (CO), volatile organic compounds (VOCs), and so forth, [6], which can be assayed in both the liquid and gaseous phases. NMR-Metabonomics The principal techniques used in metabonomics of breath ("breathomics") are mass spectrometry (MS) and nuclear magnetic resonance (NMR) spectroscopy, since they can handle complex biological samples with a high sensitivity, selectivity, and high throughput [7]. MS, usually combined with chromatographic separation methods, separates the molecules of a sample on the basis of their retention time and mass-to-charge ratio m/z, and their representation in a spectrum [8,9]. 
Real-time measurements of breath are also possible using direct breathing ports and techniques such as proton transfer reaction-mass spectrometry (PTR-MS), selected ion flow tube-mass spectrometry (SIFT-MS), and ion mobility spectrometry (IMS), as well as other analytical techniques including chemical sensors and various forms of laser spectrometers [6]. MS metabonomics has recently been applied to CF, where airway inflammation brings about an increased production of reactive oxygen species, responsible for degradation of cell membranes and causing the formation of VOCs. Robroeks et al. [10] have evaluated if VOCs metabonomics, analyzed by gas chromatography-time of flight-mass spectrometry to assess VOC profiles, could discriminate CF and controls, and CF patients with and without Pseudomonas colonization. By using 22 VOCs, a 100% correct separation of CF patients and controls was possible, while with 10 VOCs, 92% of the subjects were correctly classified. The reproducibility of VOC measurements with a one-hour interval was very good. The authors concluded that metabonomics of VOCs in exhaled breath was possible in a reproducible way, and it was not only able to discriminate between CF patients and controls, but also between CF patients with or without Pseudomonas colonization. NMR spectroscopy studies molecules by recording the interaction of a radiofrequency electromagnetic radiation with the nuclei (e.g., 1 H, 13 C, 15 N, etc.) placed in a strong magnetic field. A single nucleus in a molecule can be "observed" by monitoring the corresponding line (a "resonance") in an NMR spectrum, and the various parameters of that line (frequency, splitting, linewidth and amplitude) can be used to determine the molecular structure, conformation and dynamics of the molecule. In principle, assignment (i.e., identification) of NMR resonances for common metabolites could be possible by comparing the observed chemical shifts (i.e., the position of the line in a spectrum) with published reference data. When dealing with metabolites of unknown structure, chemical procedures for the separation of each molecule and use of two-dimensional NMR experiments (that spread signals in two dimensions) are required. Since NMR spectra show hundred of resonances, the presence of a discriminating element (e.g., a signal characteristic of a specific metabolite) in a series of spectra is often undetectable by visual inspection due to the inherent spectral complexity generated by line overlapping, and it is better highlighted by multivariate analysis (principal component analysis, PCA), which carefully identifies hidden phenomena and trends in ensembles of spectra [11]. The application of PCA to a group of spectra can immediately show whether all spectra behave similarly grouping in a single class, or fall apart into different groups. The main advantage of using NMR spectroscopy is its ability to provide a rapid and accurate metabolic picture of the sample with minimal sample pretreatment [12]. Furthermore, since the technique is nondestructive, the samples can be investigated several times as long as some preventative measures are taken to avoid metabolite degradation. Use of NMR Metabonomics for the Study of Airways Metabonomics has been employed to investigate several body fluids such as urine, plasma, serum, and tissue extracts as well as in-vivo cells and their extracts [13], but only few applications to airway diseases characterization have been reported. 
Airway hyperreactivity (AHR), an important characteristic of airway pathophysiology in human asthma, has recently been evaluated in an animal model of asthma exacerbation by urine NMR-based metabonomics [14]. The authors assumed that airway dysfunction and inflammation would produce unique patterns of urine metabolites observed by high-resolution proton ( 1 H) NMR spectroscopy, and the data analyzed by multivariate statistical analysis. In this model, challenged (ovalbumin, administered intraperitoneally, plus ovalbumin aerosol) guinea pigs developed AHR and increased inflammation compared with sensitized or control animals. Partial least-squares discriminant analysis using concentration differences in metabolites could discriminate challenged animals with 90% accuracy. Noteworthy, urine metabonomic profiles were able to separate not only sensitized from challenged and from naïve animals, but also from animals treated with dexamethasone which improves AHR. Recently, Slupsky et al. demonstrated specific changes in NMR metabonomic urinary profiles during episodes of pneumonia caused by Streptococcus pneumoniae or Staphylococcus aureus [14]. NMR metabonomics was also used to study the mechanism behind the formation of airway biofilm caused by Pseudomonas aeruginosa, an infection particularly prevalent in patients with CF [15]. In this kind of patients, the sessile lifestyle, referred to as a biofilm, allows the antibiotic resistance and makes easier the process of colonization through the synthesis of sticky, polymeric compounds. In contrast, the planktonic, free-floating cells are more easily eradicated with antibiotics. In this study, chemical differences between planktonic and biofilm cells, based on 1 H-NMR, have been reported. In this study, NMR techniques have highlighted the metabolic differences between the two modes of growth in P. aeruginosa, and PCA, and spectral comparisons revealed that the overall metabolism of planktonic and biofilm cells displayed marked differences, which require more extensive NMR investigations. More recently [16], metabolite profiles of bronchoalveolar lavage fluid (BALF) from pediatric patients with CF were correlated to the degree of airway inflammation using NMRbased metabonomics. BALF was collected from 11 children with CF during clinically indicated bronchoscopy. The BALF spectra with high levels of neutrophilic airway inflammation displayed signals from numerous metabolites whereas the spectra from subjects with low levels of inflammation were very sparse. The metabolites identified in samples taken from subjects with high inflammation include known markers of inflammation such as amino acids and lactate, as well as many novel signals. Statistical analysis highlighted the most important metabolites that distinguished the high-from the low-inflammation groups. This first demonstration of metabonomics of human BALF shows that clear distinctions in the metabolic profiles can be observed between subjects experiencing high versus low inflammation. However, the bronchoalveolar lavage has the important limitation of being invasive, requiring the introduction of exogenous fluid into alveolar space. EBC EBC is a noninvasive method of sampling the airways; it can be easily repeated and is acceptable to patients. Currently, EBC is used to measure biomarkers of airway inflammation and oxidative stress, and guidelines for its use have been recently published [17]. EBC can also be considered a matrix for analysis of environmental toxicants. 
EBC collection requires the cooling of the exhaled breath (Figure 1(a)), resulting in a fluid sample that contains evaporated and condensed particles (water, ammonia, etc.) plus some droplets from the airway lining fluid [17,18]. These droplets are released by turbulent airflow and can be added to the water vapor from anywhere between the alveoli and the mouth. Therefore, not only volatiles, but also several other mediators with nonvolatile characteristics can be found in EBC samples, including adenosine, different interleukins (-4, -5, -8), interferon-γ [17]. EBC is mainly (>99%) formed by water vapor, but also contains aerosol particles in which several other biomolecules including leukotrienes, 8isoprostane, prostaglandins, hydrogen peroxide, nitric oxidederived products, and hydrogen ions, can be detected [17]. EBC markers of oxidative stress such as hydrogen peroxide, isoprostanes, nitrogen oxides, pH, ammonia, prostanoids and leukotrienes are increased in bronchial asthma [19]. EBC pH is lower in asthmatics and correlates well with sputum eosinophilia, total nitrate/nitrite, and oxidative stress [20], but did not reflect the clinical status of the patients. EBC markers that correlate with disease severity, response to treatment, or both are hydrogen peroxide, leukotrienes, 8-isoprostane, nitrate, and nitrite [10]. It is assumed that airway surface liquid becomes aerosolized during turbulent airflow so that the content of the condensate reflects the composition of airway surface liquid, although large molecules may not aerosolize as well as small soluble molecules. The major advantage of EBC is represented by the possibility to analyze both volatile and nonvolatile compounds [21]. There are some recent approaches to compare traditional blood test (glucose and urea) with the EBC in metabolic diseases. Accordingly, glucose in EBC from healthy volunteers was reproducible, unaffected by changes in salivary glucose, and increased during experimental hyperglycaemia [22]. Notably, EBC parameters are influenced by smoking, alcohol consumption, equipment, exercise, mode and rate of breathing, nasal contamination, environmental temperature and humidity, and assays used [23,24], leading to undesirable variability. Exogenous contamination may also originate from the oral cavity. Ammonia and sulfur-containing compounds like H 2 S, methyl sulfide or mercaptans are released from the oral cavity, being produced by bacteria from different oral niches. However, oral sterilization before EBC collection or continuous saliva deglutition have been suggested to limit the effects of such contaminations [14]. The influence of age, sex, circadian rhythm, and infection remains unknown. Thus the analysis of EBC currently has important limitations. Reference analytical techniques are required to provide definitive evidence for the presence of some inflammatory mediators in EBC and for their accurate quantitative assessment in this biological fluid. Finally, the physiological meaning and biochemical origin of most of volatile compounds are still not known, and biochemical pathways of their generation, origin, and distribution are only partly understood. Unfortunately, the concentrations of various mediators studied are very low, requiring highly sensitive assays. Metabonomics of EBC in Respiratory Diseases NMR-based metabonomics can be used to analyze EBC samples from adults, allowing a clear-cut separation between healthy subjects and patients with airway disease [11]. 
Although less sensitive than ELISA and MS, NMR spectroscopy requires minimal sample preparation and a rapid acquisition time per spectrum (10-15 min). Furthermore, it reaches a useful degree of sensitivity (of the order of μmol/L) and is nondestructive, allowing detection of the full set of metabolites present in the sample (the "sample metabolic fingerprint") at a reasonable cost. NMR is also able to detect potential contamination of EBC by saliva and to examine the interfering effect of residual external contaminants, which is crucial for a correct EBC analysis of the variability of some biomarkers [11,25,26]. To date there are several recommendations on the methodological approach to EBC collection, but its standardization is not completely defined, as EBC can be contaminated by metabolites originating from saliva as well as from microbes present in the mouth [27,28]. We have recently proposed a possible protocol for EBC collection for NMR purposes (Figure 1) [11].
Figure 1: Metabonomics of EBC using NMR. The exhaled breath is cooled in (a), then transferred into the NMR tube (0.5-0.7 mL) (b) and put in the spectrometer (c) to collect the spectra (d).
The protocol requires that subjects breathe through a mouthpiece and a two-way nonrebreathing valve, which also serves as a saliva trap, at normal frequency and tidal volume, while sitting comfortably and wearing a noseclip, for a period of 15 minutes (Figure 1(a)) [29]. Subjects maintain a dry mouth during collection by periodically swallowing excess saliva. Condensate samples (1-2 mL) are immediately transferred into glass vials, closed with 20-mm butyl rubber septa lined with PTFE, and crimped with perforated aluminum seals. Volatile substances possibly deriving from extrapulmonary sources are removed by applying a gentle stream of nitrogen for 3 minutes before sealing [30,31]. Nitrogen is used because the concentration of volatile solutes in EBC depends on their distribution among saliva, exhaled air and droplets, and the condensate, which can be altered by multiple factors including minute ventilation, salivary pH, solubility, temperature, and sample preparation [29]; spectral differences may therefore depend upon uncontrollable variables that prevent reliable quantification. The nitrogen stream also removes oxygen from the solution, which, together with freezing of the sealed samples in liquid nitrogen, immediately "quenches" metabolism at the collection time and prevents any metabolic decay [32,33]. Samples are then stored at −80 °C until NMR analysis. Drying of the samples should be avoided to circumvent irreversible solute precipitation and/or the formation of insoluble aggregates, which we observed upon dissolving dried condensate for NMR measurements. Before NMR acquisition, EBC samples should be rapidly defrosted and transferred into the NMR tube (Figure 1(b)). To provide a field-frequency lock for NMR acquisition, 70 μL of a D2O solution [containing 1 mM sodium 3-trimethylsilyl [2,2,3,3-2H4] propionate (TSP) as a chemical shift and concentration reference for 1H spectra, and sodium azide at 3 mM] are added to 630 μL of condensate, reaching a total volume of 700 μL. Following acquisition (Figures 1(c) and 1(d)), 1H-NMR spectra are automatically data-reduced to 200-250 integral segments ("buckets") using dedicated software packages (e.g., AMIX, Bruker Biospin, Germany). The resulting integrated regions are then used for statistical analysis and pattern recognition, as sketched below.
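The bucketing, normalization, and pattern-recognition step described above can be summarized in a few lines of analysis code. The sketch below is a generic illustration using simulated spectra and scikit-learn's PCA; the bucket width, the normalization choice, and the group labels are assumptions made for the example and do not reproduce the AMIX workflow itself.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)

# Simulated 1H-NMR spectra: 20 subjects x 2000 spectral points,
# with one spectral region systematically elevated in the "disease" group.
spectra = rng.normal(1.0, 0.05, size=(20, 2000))
spectra[10:, 400:420] += 0.5                      # group-specific signal
labels = np.array(["control"] * 10 + ["disease"] * 10)

# Data reduction: integrate each spectrum into 200 equal-width buckets.
buckets = spectra.reshape(20, 200, 10).sum(axis=2)

# Normalize each spectrum to its total integral to remove volume/dilution effects.
buckets /= buckets.sum(axis=1, keepdims=True)

# Unsupervised pattern recognition: project the bucket table onto two principal components.
scores = PCA(n_components=2).fit_transform(buckets)
for group in ("control", "disease"):
    print(group, scores[labels == group, 0].mean().round(4))
```

In a real analysis the buckets are defined on the chemical-shift axis, the water region is excluded, and normalization is switched to the TSP peak when contaminants distort the total spectral area, as discussed below; the overall flow of data reduction followed by PCA is, however, the same.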
To avoid possible errors in signal intensity due to differences in the volume of the collected EBC samples, before pattern recognition analysis each integral region is normalized to the sum of all integral regions of its spectrum. In the presence of contaminant peaks (e.g., those originating from the condenser disinfectant), which randomly alter the total area of the spectrum, each bucket has instead to be normalized to the TSP peak of known concentration, referring to a standard region comprised, for example, between 0.014 and −0.014 ppm. Figure 2 shows spectra of saliva (left traces) and EBC samples (right traces) from a healthy subject (HS, lower spectra), a laryngectomized subject (middle spectra), and a COPD subject (top spectra). A visual examination establishes a striking correspondence between the EBC spectra of the HS and the laryngectomized subject, suggesting that potential oral contamination (e.g., bacteria and/or saliva) is undetectable and, if present, beyond the sensitivity limit of NMR.
Figure 2 caption: Spectra were recorded on a Bruker Avance spectrometer operating at a frequency of 600.13 MHz (1H) and equipped with a TCI CryoProbe. The water resonance was suppressed by using a specific pulse sequence designed to avoid intensity alteration of signals. The total acquisition time was ca. 10 minutes per sample. Spectra were referred to TSP, assumed to resonate at δ = 0.00 ppm. In saliva spectra, the group of signals centered at 3.8 ppm originates from carbohydrates and is not visible in the corresponding EBC spectra.
By resorting to literature data [34] and two-dimensional NMR experiments, we identified all resonances present in the EBC spectra (Figure 3). In saliva, signals between 3.3 and 6.0 ppm originate from carbohydrates [35], and these are virtually absent in the EBC spectra (Figure 2). Compared to saliva, EBC spectra present fewer signals, but both saliva and EBC spectra differ among the HS, laryngectomized, and COPD classes considered (Figure 2); this is the basis for class separation in PCA based on NMR data. A recent study [11] evaluated the capability of NMR to separate EBC of subjects with airway disease (COPD) from that of subjects without respiratory disease. Based on qualitative and quantitative spectral differences, five NMR signals appear to differentiate "respiratory" (COPD) from "nonrespiratory" (healthy and laryngectomized) subjects. It was also clearly shown that saliva and condensate have different profiles, with saliva contamination showing little influence on the interpretation of EBC by NMR-based metabonomics [11]. Likewise, Carraro et al. [36] reported variation of the acetate signal as distinctive in asthmatic children with respect to controls, concluding that the acetate increase might be related to increased acetylation of proinflammatory proteins in the extracellular space of the airway environment. Whether the metabonomic profile of exhaled breath condensate changes during systemic or metabolic disease is currently unknown. NMR-based metabonomic analyses of EBC could clearly discriminate between asthmatic and healthy children, with a 95% success rate in their classification. Many authors believe that asthma should no longer be considered a single disease, and that efforts should be made to identify the different biochemical and inflammatory profiles behind asthma symptoms in order to treat them with specifically targeted therapies [37]. Montuschi et al.
[38] recently applied NMR-based metabonomics to discriminate between healthy individuals, patients with stable CF, and cases of unstable CF, showing that NMR is a powerful technique to monitor EBC in CF. In addition, we are currently applying NMR-based EBC metabonomics in other genetic airway diseases such as primary ciliary dyskinesia, in light of the diffusion of fast screening methods based on the exhaled NO on nasal or oral breath. Conclusions The power of NMR-based metabonomics has been shown for several biofluids, including blood, urine, and saliva. We believe that NMR metabonomics could also be applied to EBC, which has the advantage of being noninvasive and reproducible; furthermore it shows a distinctive profile in comparison to saliva, thus supporting its origin from lower airways. Moreover, EBC metabonomic analysis is applied to a living matrix in the absence of external induced perturbations that may represent important preanalytical variable with conventional assay measuring single compound. Journal of Biomedicine and Biotechnology There is only limited experience with metabonomics on EBC in humans, but reproducibility of method has been successfully assessed, and useful protocols to differentiate metabolic profile of patients with asthma, COPD, or cystic fibrosis have been reported. However, more studies are needed to show, if true, that the holistic approach of EBC metabonomics may be a progress over the traditional reductionistic approach in chronic airway disease.
Use of Mass Communication by Public Health Programs in Nonmetropolitan Regions
Mass communication is one component of effective public health program implementation (1). It includes news stories ("earned media"), paid media (advertising), and social and digital media (eg, social networking sites, text messaging, mobile applications, websites, blogs) (1). Earned media can increase the visibility of public health issues and support from community members and leaders (1). Sustained media campaigns are recommended population approaches to modifying diet, physical activity, and tobacco use behaviors (2). Mass communication using various channels has helped increase public awareness, knowledge, attitudes, and behaviors on a multitude of health topics (3,4). To understand what contributed to successful communication efforts in nonmetropolitan regions, we conducted individual, 60-minute telephone interviews with personnel overseeing programmatic activities ("program managers") and mass communication activities ("communication leads") in 6 REACH/PICH programs. These programs achieved or exceeded annual communication objectives and dedicated at least 10% of annual funding to mass communication. Each interviewed program worked in municipalities with populations of 250,000 or less across multiple US Census regions. Two were tribal programs. By using a semistructured interview guide, programs were asked open-ended questions about the challenges, opportunities, and promising strategies they encountered when implementing mass communication in small and mid-sized communities. Inductive qualitative analysis identified 4 emergent themes.
Theme 1: Building Capacity for Mass Communication
For most interviewed programs, the perceived value and role of communication activities grew over time. Hiring communication staff or consulting services built long-term communication capacity. Increasing program capacity for mass communication had the potential to inform the organization more broadly, especially when staff who had gained communication training and experience advanced within the organization or worked across programs or grants.
Theme 2: Partners
Community-based organizations, local agencies, hospitals and educational systems, nonprofit entities, and local businesses may partner -formally or informally -to complement one another's efforts. Programs worked with existing partners to increase the reach of communication efforts. Program partners were reliable and motivated to work with programs to promote messages and materials. Improving the communication capacity of partners was viewed by some as a method to achieve sustainability of program efforts. Determining a core vision and aligning communication strategies advanced efforts of partner coalitions. Programs noted that many key partners for communication efforts were unpaid but shared similar goals. These partners frequently shared messages on behalf of the programs, expanding reach to audiences that may otherwise be less accessible.
Quotes regarding partners
Unpaid program partners were reliable and motivated to work with programs to promote messages and materials. We didn't have too many paid partners.
It was more of "grassroots for the betterment of the community" partner, or it was an initiative that directly benefited their audiences or their membership or their group.
According to programs, operating in a less populous area has advantages for mass communication. Programs felt it was relatively easy to access existing networks for communication activities in nonmetropolitan regions, using those networks to involve partners and access channels for mass communication. In small, close-knit communities, these relationships were frequently useful and reliable avenues for message dissemination. Special considerations apply to the selection of communication channels in nonmetropolitan regions. Programs serving rural areas noted that channels designed to maximize reach in larger markets (eg, television or outdoor advertising) need to be carefully considered in less populous regions. For example, billboards lack exposure if placed in areas without heavy traffic flow, transit advertising is not relevant in areas with limited to no public transportation, and television and radio broadcasts may reach beyond the key population or geographic region, or alternately, may not reach isolated areas. Programs frequently used alternative, often economical, channels to communicate to these audiences, including social media posts and paid messaging on social media platforms, local-access or cable-access television or radio, internet radio programs, podcasts, newsletters, and mass mailings. Almost all refined and expanded their social media presence during the funding period.
REACH programs emphasized the importance of allowing cultural contexts to guide program planning, implementation, and mass communication when addressing health disparities. Celebrating and embracing the cultural roots of the community can empower and encourage community members in the process of communicating about public health programs. According to interviewees, consideration of cultural and spiritual aspects like language, ethnic background, church events, and cultural practices reinforced powerful community connections and improved message dissemination. Programs valued the involvement of community members in the cocreation and testing of messaging, particularly when program staff were not from the same social and cultural backgrounds as audiences. The tribal programs emphasized the importance of considering the unique attributes and infrastructure of tribal communities in all aspects of program planning and implementation, including approaches to mass communication.
Quotes regarding cultural competence
Celebrating the cultural roots of the community empowered and encouraged community members in the process of communicating about public health programs.
Discussion
Small- to mid-sized public health programs may be unsure of the feasibility of using mass communication in their program activities. REACH/PICH programs in nonmetropolitan regions demonstrated the successful use of mass communication strategies to promote and support programmatic efforts. The experiences suggest that nonmetropolitan communities can achieve communication objectives through various adaptive approaches, frequently leveraging local partners, networks, and resources. Undertaking mass communication for public health programs requires defined objectives, understanding the audiences to reach and the best channels and strategies to reach them, and evaluation methods to track changes over time -each grounded in health behavior change and communication theory (7).
To implement mass communication activities successfully, programs can hire dedicated communication staff and external media contractors with public health experience and use a mix of cost-effective media channels and categories to reach key audiences. Audience research using established methodologies -such as focus groups, in-depth interviews, or surveys -supported the creation of mass communication approaches that reflected the local contexts of the communities in which the programs operated. Culture has been identified as an influential factor in how message content, structure, sources, and channels are perceived and received by key audiences, and is associated with multiple facets of health behavior (8). Health programs can look beyond epidemiological and demographic characteristics (eg, age, race/ethnicity, geographic boundaries) to consider culture when understanding key audiences (8). Incorporating cultural aspects into mass communication efforts can increase the salience of health messages and programs offered in communities. These examples illustrate that it is feasible for public health programs in nonmetropolitan regions to increase internal capacity for mass communication to reach target audiences. CDC provides resources on health communication and social marketing for public health programs that are applicable in nonmetropolitan settings (9). These program experiences may inform future research and evaluation to identify the most effective communication strategies in these settings. With dedicated technical assistance and resources, programs serving small to mid-sized populations can develop communication activities that increase internal capacity and gain organizational support for communication, establishing the groundwork for sustaining communication efforts.
Cuspy and fractured black hole shadows in a toy model with axisymmetry
Cuspy shadow was first reported for hairy rotating black holes, whose metrics deviate significantly from the Kerr one. The non-smooth edge of the shadow is attributed to a transition between different branches of unstable but bounded orbits, known as the fundamental photon orbits, which end up at the light rings. In searching for a minimal theoretical setup to reproduce such a salient feature, in this work, we devise a toy model with axisymmetry, a slowly rotating Kerr black hole enveloped by a thin slowly rotating dark matter shell. Despite its simplicity, we show rich structures regarding fundamental photon orbits explicitly in such a system. We observe two disconnected branches of unstable spherical photon orbits, and the jump between them gives rise to a pair of cusps in the resultant black hole shadow. Besides the cuspy shadow, we explore other intriguing phenomena when the Maxwell construction cannot be established. We find that it is possible to have an incomplete arc of Einstein rings and a "fractured" shadow. The potential astrophysical significance of the corresponding findings is addressed.
Introduction
The bending of light rays owing to the spacetime curvature constitutes one of the most influential predictions of General Relativity. At its extreme form, the shadow cast by a black hole [1-6] is widely considered an essential observable in the electromagnetic channel. As the boundary of a black hole shadow is determined by the critical gravitational lensing of the radiation from nearby celestial bodies, it bears crucial information on the spacetime geometry around the black hole. With the prospect of directly probing the underlying theory of gravity in the strong-field region, the related topic has aroused much renewed curiosity in the past decade [7-26] (for a concise review of the topic, see [27]). In particular, the supermassive black hole at the center of the M87 galaxy is being targeted by the Event Horizon Telescope (EHT) [28-31]. Moreover, the developments regarding how to extract information on the black hole in question from its silhouette open up a new avenue with promising possibilities [32-38]. The black hole shadow is defined by the set of directions in the observer's local sky where the ingoing null geodesics originate from the event horizon. In other words, no radiation is received by the observer at a certain solid angle due to the presence of the black hole. Intuitively, the shape of the black hole shadow can be derived by analyzing the lower bound of the free-fall orbits circulating the black hole in a compact spatial region. Such a bound is closely associated with a specific type of null geodesics, dubbed fundamental photon orbits (FPOs) [5,17], first proposed by Cunha et al. One may, by and large, argue that the edge of the shadow is furnished by the collection of light rays that barely skim the unstable FPOs. This is because a null geodesic that slightly deviates from an unstable FPO might marginally escape to spatial infinity after orbiting the black hole a multitude of times. When traced back in time, it either originates from the black hole horizon or is emanated by some celestial light source.
While the former constitutes part of the shadow by definition, a light ray associated with the latter, on the other hand, contributes to the image of the relevant celestial body. In practical calculations, background radiation sources are placed on a sphere, referred to as the celestial sphere [2], (infinitely) further away from both the observer and the black hole. Owing to the significant gravita-tional lensing, an infinite number of (chaotic) images of the entire celestial sphere pile up in the vicinity of the shadow edge [39][40][41][42]. In the case of the Schwarzschild black hole, the relevant FPOs are the light rings (LRs), forming a photon sphere. While for the Kerr one, the role of the FPO is carried by the spherical orbits [43][44][45]. The LRs are circular planar null geodesics, which by definition, is a particular type of FPOs associated with the axisymmetry of the relevant spacetimes. In particular, it is understood that unstable LRs play a pivotal role in the strong gravitational lensing as well as shadow formation [5,17,40,46,47]. Stable LRs, being rather contrary to the nomenclature, might leads to the accumulation of different modes when the spacetime is perturbed [48]. Such a system is subsequently prone to nonlinear instabilities [49]. In the case of the Kerr black hole, the two LR solutions, restricted to the equatorial plane, are both unstable. From the observer's viewpoint, on the shadow edge, they mark the two endpoints in the longitudinal direction. The analyses of the black hole shadow in Kerr spacetime are simplified by the fact that the corresponding FPOs are of constant radius. This is because the geodesic is Liouville integrable and separates in the Boyer-Lindquist coordinates [50]. In a generically stationary and axisymmetric spacetime, however, the separation of variables is often not feasible for the geodesic motion by choosing a specific coordinate chart. As a result, the FPOs become more complicated and have to be evaluated numerically. Nonetheless, it was pointed out [17] the stability of LRs can be studied by employing the Poincaré maps. Recently, Kerr black hole metrics with Proca hair were investigated, and a quantitatively novel shadow with cuspy edge was spotted [17]. Instead of a smooth shadow, the black hole silhouette is characterized by a pair of cusps at the boundary. The authors attributed the above feature to the sophisticated FPO structure, and in particular, to an interplay between stable and unstable FPOs. To be specific, when compared with the case of Kerr metric, an additional stable branch of FPOs appears, which attaches both of its ends to that of two unstable branches of FPOs. Consequently, a point of the cusp corresponds to a sudden transition between two FPOs from those unstable branches. More lately, a similar characteristic was also reported [18] in rotating non-Kerr black holes [51]. In quantum-gravity inspired models of regular black holes, cuspy shadow, dubbed "dent-like", was also observed in asymptotically safe gravity [52]. For these cases, the metrics involved are rather complicated and mostly possess stable FPOs. Apart from the above intriguing results, it still seems not very clear what is a minimal theoretical setup to reproduce a cuspy shadow edge. If instead of vacuum, the black hole is surrounded by an accretion disk, a trespassing photon is likely subjected to inelastic scatterings. As a result, it will deviate from its geodesic or even be entirely absorbed by an opaque disk. 
However, if the disk is composed purely of dark matter, it is transparent to the photon. This is because no observational signature regarding the interaction between the photon and dark matter particles has yet turned up, in any experiment designated to direct dark matter detection. Nonetheless, the gravitational effect of the dark matter may still impact the null geodesics, and subsequently, the resultant black hole shadow. For a spherical galactic black hole surrounded by a thick dark matter cloud, it was argued that observable deviation from its Schwarzschild counterpart might be expected [23]. More lately, in studying the rotating dirty black holes [24], the authors found that although the existence of the dark matter modifies the size of the shadow, the D-shaped contour almost remains unchanged. Moreover, although the physical nature remains largely elusive, in literature, many interesting substructures in the dark matter halos and sub-halos have been speculated [53][54][55][56][57]. Among a large variety of alternatives, the venerable CDM model indicates that a discontinuity in the matter distribution might be triggered by the presence of the dark matter [55,56]. In this regard, the exploration of the rich substructure regarding the dark matter halo is closely related to our understanding of the underlying physics. Indeed, the resultant substructures predicted by the theoretical models may, in turn, serve to discriminate between different interpretations about the nature of dark matter. Nonetheless, it is not clear whether a discontinuity in the matter distribution may further appreciably distort the black hole shadow, particularly to the extent such modification becomes potentially observable. In fact, recently, it was shown analytically [58] that a discontinuity in the effective potential significantly affects the asymptotic properties of quasinormal modes. As the last phase of a merger process, such a dramatic change in the quasinormal ringing may potentially lead to observable effects. In fact, discontinuity is present at the surface of compact celestial objects, and the numerical calculations of the curvature modes have indeed confirmed such a nontrivial consequence [59]. Furthermore, it was recently speculated that discontinuity due to a thin disk of matter provides an alternative mechanism of the echo phenomenon [60]. Considering that the quasinormal frequencies at the eikonal limit are closely connected with the shadow and photon sphere in spherical metrics [21,61], it is natural to ask whether some discontinuity out of the horizon of the black hole might lead to meaningful implications in the context of the black hole shadow. This is the primary motivation of the present study. The present study continues to pursue further discussions on black hole shadows concerning the role of dark matter surrounding the black holes. By simplifying the dark matter envelope to a thin shell wrapped around the black hole, the mass distribution is concentrated in an infinitesimal layer so that we have a sharp discontinuity in the effective potential. We will show that such stationary axisymmetric configuration is physically plausible as the Israel-Lanczos-Sen junction condition is satisfied at the slow rotation limit. The resultant spacetime possesses two branches of unstable FPOs but not any stable FPO. Our analysis reveals a sudden jump between the different branches, which results in a cusp on the boundary of the black hole shadow. 
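Since both regions of the toy model just described are locally Kerr, the shadow edge contributed by each region can be traced from the standard closed-form relations for Kerr spherical photon orbits. The sketch below evaluates those textbook relations numerically for a single Kerr metric; the mass, spin, and observer inclination are illustrative choices rather than parameters from this work.

```python
import numpy as np

M, a = 1.0, 0.2                  # mass and (slow) spin, illustrative values only
theta_o = np.pi / 2              # observer inclination (edge-on), illustrative

def shadow_edge(r):
    """Celestial coordinates (x, y) of the Kerr shadow edge traced by the
    unstable spherical photon orbit of Boyer-Lindquist radius r."""
    xi = (3.0 * M * r**2 - r**3 - a**2 * (r + M)) / (a * (r - M))
    eta = r**3 * (4.0 * a**2 * M - r * (r - 3.0 * M)**2) / (a**2 * (r - M)**2)
    x = -xi / np.sin(theta_o)
    y2 = eta + a**2 * np.cos(theta_o)**2 - xi**2 / np.tan(theta_o)**2
    return x, np.sqrt(np.maximum(y2, 0.0)), y2

# Scan radii bracketing the photon region; orbits with y2 >= 0 are the ones
# visible to this observer and hence trace the shadow boundary.
radii = np.linspace(1.5 * M, 5.0 * M, 4000)
x, y, y2 = shadow_edge(radii)
mask = y2 >= 0.0
print(f"shadow extent in x: [{x[mask].min():.3f}, {x[mask].max():.3f}] (units of M)")
```

For the two-region model, one would evaluate these relations with (M_BH, a_BH) and (M_TOT, a_TOT) separately and then decide, ray by ray, which branch a given photon actually follows, which is where the jump responsible for the cusp arises.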
Moreover, it is argued that the transition point can be determined using the Maxwell construction, which is reminiscent of the Gibbs conditions for the phase transition. We also investigate other intriguing possibilities regarding different model parameterizations, which include cases involving an incomplete arc of Einstein rings and a fractured shadow edge. The astrophysical significance of the present findings is addressed. The remainder of the manuscript is organized as follows. In the next section, we present our model and the mathematical framework for evaluating the null geodesics as well as the associated celestial coordinates. We study the properties of the FPOs and discuss the relevant criterion to determine the black hole shadow. In Sect. 3, for a specific choice of the metric, we show that the Maxwell construction can be utilized to locate the transition point on the shadow edge, where the cusp is present. The discussion also extends to other meaningful metric parameterizations, where one elaborates on two additional intriguing scenarios. Further discussions and concluding remarks are given in the last section. We relegate the mathematical derivations regarding the Israel-Lanczos-Sen junction condition to the appendix.

A dark matter shell toy model

In this section, we first present the proposed toy model and then proceed to discuss the FPOs of the relevant metric as well as their connection with the black hole shadow. For the purpose of the present study, we consider a stationary axisymmetric metric of the following form in the Boyer-Lindquist coordinates (t, r, θ, ϕ):

$$ds^2 = -\left(1-\frac{2Mr}{\Sigma}\right)dt^2 - \frac{4Mar\sin^2\theta}{\Sigma}\,dt\,d\phi + \frac{\Sigma}{\Delta}\,dr^2 + \Sigma\,d\theta^2 + \left(r^2+a^2+\frac{2Ma^2r\sin^2\theta}{\Sigma}\right)\sin^2\theta\,d\phi^2 , \quad (1)$$

where $\Delta = r^2 - 2Mr + a^2$, $\Sigma = r^2 + a^2\cos^2\theta$, and the mass and rotation parameters are taken to be piecewise constant,

$$(M,\, a) = \begin{cases} (M_{\rm BH},\, a_{\rm BH}), & r \le r_{\rm sh}, \\ (M_{\rm TOT},\, a_{\rm TOT}), & r > r_{\rm sh}, \end{cases} \quad (2)$$

where M_BH, a_BH, M_TOT, and a_TOT are constants, the rotation parameters satisfy |a| ≪ 1, and r_sh is the location of the thin layer of dark matter. For both regions, r ≤ r_sh and r_sh < r < ∞, the metric coincides with that of a Kerr black hole, satisfying Einstein's equations in vacuum. Inside the dark matter envelope, the above spacetime metric describes a Kerr black hole with mass M = M_BH and angular momentum J = a_BH M_BH sitting at the center. For an observer sitting far away (r ≫ r_sh), one is essentially dealing with a rotating spacetime with mass M = M_TOT and angular momentum per unit mass a = a_TOT. The discontinuity at r = r_sh indicates a rotating (infinitesimally) thin shell of mass wrapping around the central black hole. It is important to note that for the above metric to be physically meaningful, it must be validated against the Israel-Lanczos-Sen junction conditions [62]. In particular, the induced metrics onto the shell from both the interior and exterior spacetimes must be isometric. While relegating the details to the appendix, we argue that in the slow rotation limit the first junction condition can be fulfilled. Therefore, we will only consider choices of parameters satisfying |a_BH|, |a_TOT| ≪ 1. The null geodesics of a photon in a pure Kerr spacetime satisfy the following system of equations [43]:

$$\begin{aligned}
\Sigma\,\frac{dt}{d\lambda} &= \frac{(r^2+a^2)\left[E(r^2+a^2)-aL\right]}{\Delta} + a\left(L-aE\sin^2\theta\right),\\
\Sigma\,\frac{d\phi}{d\lambda} &= \frac{a\left[E(r^2+a^2)-aL\right]}{\Delta} + \frac{L}{\sin^2\theta} - aE,\\
\left(\Sigma\,\frac{dr}{d\lambda}\right)^2 &= R(r) \equiv \left[E(r^2+a^2)-aL\right]^2 - \Delta\left[Q+(L-aE)^2\right],\\
\left(\Sigma\,\frac{d\theta}{d\lambda}\right)^2 &= \Theta(\theta) \equiv Q + \cos^2\theta\left(a^2E^2 - \frac{L^2}{\sin^2\theta}\right),
\end{aligned} \quad (3)$$

where E, L, and Q are the energy, angular momentum, and Carter constant of the photon. For our present case, the analysis of the null-geodesic motion can be achieved by implementing a simple modification. For light rays propagating inside a given region, namely, r < r_sh or r > r_sh, the motion is governed by Eq. (3) with the metric parameters given by Eq. (2). The difference occurs when the light ray crosses the dark matter layer at r = r_sh. For instance, let us consider a free photon that escapes from the inside of the thin shell.
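Before following such a photon across the shell, it may help to fix the bookkeeping with a short numerical sketch. The snippet below encodes the piecewise parameters of Eq. (2) and the radial potential appearing in the third line of Eq. (3), written in terms of the rescaled constants ξ = L/E and η = Q/E²; the specific parameter values are placeholders, since the entries of Table 1 are not reproduced in this excerpt.

```python
import numpy as np

# Hypothetical model parameters in units of M_BH (Table 1 is not reproduced here).
M_BH, a_BH = 1.0, 0.05
M_TOT, a_TOT = 1.001, 0.02
r_sh = 3.028

def kerr_parameters(r):
    """Piecewise mass and spin of Eq. (2): the inner Kerr metric below the
    dark-matter shell, the outer one above it."""
    return (M_BH, a_BH) if r <= r_sh else (M_TOT, a_TOT)

def radial_potential(r, xi, eta):
    """R(r)/E^2 for a null geodesic with impact parameters xi = L/E and eta = Q/E^2,
    evaluated with the local Kerr parameters (third line of Eq. (3))."""
    M, a = kerr_parameters(r)
    Delta = r**2 - 2.0*M*r + a**2
    return (r**2 + a**2 - a*xi)**2 - Delta*(eta + (xi - a)**2)

# The jump of (M, a) across r = r_sh is what deflects a photon piercing the shell:
for r in (r_sh - 1e-3, r_sh + 1e-3):
    print(round(r, 4), radial_potential(r, xi=2.0, eta=20.0))
```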
Due to the singularity in the derivatives of the metric tensor at r = r sh , the photon's trajectory will suffer a deflection as it traverses the shell. However, the values of E, L, and Q remain unchanged during the process and thus can be utilized to unambiguously match the geodesics on both sides of the shell. This is because these constants of motion are derived by the corresponding Killing objects implied by the axisymmetry of the spacetime in question. Subsequently, when given one point on the trajectory, a null-geodesic motion is entirely determined by a pair of values, This can be easily seen by rescaling the affine parameter λ → λ = λE in Eq. (3). Moreover, due to the axisymmetry, the separation of variables is still feasible for the present case, and therefore, all the FPOs are spherical orbits as for the Kerr metric. The spherical orbit solution can be obtained by analyzing the effective potential associated with the radial motion, which separates from those of angular degrees of freedom. To be specific, the third line of Eq. (3) can be rewritten as where the effective potential V eff reads Similar to the analysis of the planetary motion in Newtonian gravity, the spherical orbits are determined by the extremum of the effective potential, namely, V eff = ∂ r V eff = 0. One finds where r 0 is the radius of the spherical orbit. These orbits are unstable since the encountered extremum is a local maximum. Moreover, the fact that all FPO are spherical orbits is related to the uniqueness of the above local maximum. On the other hand, for an observer located at (asymptotically flat) infinity with zenithal angle θ 0 , the boundary of the black hole is governed by those null geodesics that marginally reach them. By assuming that the entire spacetime is flat, the "visual" size of the black hole can be measured by slightly "tilting their head" (or in other words, by an infinitesimal displacement of their location). To be specific, when projected onto the plane perpendicular to the line of sight, the size of the image in the equatorial plane and on the axis of symmetry can be obtained by the derivatives of the angular coordinates (ϕ, θ ) [2]. These derivatives can be calculated explicitly using the asymptotical form of the geodesic, namely, Eq. (3) evaluated at the limit r → ∞. We have where the pair (η, ξ ) are dictated by the geodesic of the photon in question. The coordinates in terms of α and β are often referred to as the celestial coordinates in the literature [2]. By collecting all coordinate pairs (α, β), one is capable of depicting the apparent silhouette of the black hole. As discussed above, the relevant null geodesics that potentially contribute to the shadow edge are the FPOs. In contrary to the evaluation of the celestial coordinates, which involves the asymptotic behavior of the metric, the FPOs are determined by spacetime properties in the vicinity of the horizon. Although all the FPOs for our metric are spherical orbits, the presence of a rotating thin shell leads to some interesting implications. In what follows, let us elaborate on the properties of the FPO and their connection with the black hole shadow. First, consider a FPO solution for the pure Kerr spacetime with M = M BH , a = a BH . It will also be qualified as an FPO for the metric defined in Eq. (2), if and only if the radius of the corresponding spherical orbit r 0 satisfies r 0 < r sh . 
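As a concrete illustration of the quantities entering this discussion, the following minimal Python sketch evaluates the critical impact parameters ξ = L/E and η = Q/E² of Kerr spherical photon orbits and maps them onto the observer's celestial coordinates (α, β). Since Eqs. (4)–(8) are not displayed in this excerpt, the closed-form expressions below are the standard Bardeen formulas for a pure Kerr metric and should be read as an assumption; the admissibility helper simply encodes the r_sh restriction stated above.

```python
import numpy as np

def xi_eta_spherical(r0, M, a):
    """Critical impact parameters (xi, eta) = (L/E, Q/E^2) of the spherical photon
    orbit of radius r0 in a Kerr metric (M, a); standard closed forms, assumed here."""
    xi = (3.0*M*r0**2 - r0**3 - a**2*(r0 + M)) / (a*(r0 - M))
    eta = r0**3 * (4.0*M*a**2 - r0*(r0 - 3.0*M)**2) / (a**2*(r0 - M)**2)
    return xi, eta

def celestial(xi, eta, a, theta0):
    """Map (xi, eta) onto the celestial coordinates (alpha, beta) of an observer
    at infinity with inclination theta0 (upper half of the silhouette, beta >= 0)."""
    alpha = -xi / np.sin(theta0)
    beta2 = eta + a**2*np.cos(theta0)**2 - xi**2/np.tan(theta0)**2
    return alpha, np.sqrt(max(beta2, 0.0))

def admissible(r0, branch, r_sh):
    """An FPO of the inner Kerr metric is physical only for r0 < r_sh,
    one of the outer metric only for r0 > r_sh (see the discussion above)."""
    return r0 < r_sh if branch == "inner" else r0 > r_sh

# Example: a few admissible inner-branch FPOs mapped onto the celestial plane.
M_BH, a_BH, r_sh, theta0 = 1.0, 0.1, 3.028, np.pi/2
for r0 in np.linspace(2.90, 3.02, 5):
    if admissible(r0, "inner", r_sh):
        xi, eta = xi_eta_spherical(r0, M_BH, a_BH)
        print(round(r0, 3), celestial(xi, eta, a_BH, theta0))
```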
Likewise, a FPO solution with r 0 < r sh for the pure Kerr spacetime with M = M TOT , a = a TOT does not exist physically for the metric under consideration. Secondly, we note that not every FPO contributes to the edge of the black hole shadow. Let us consider, for instance, a photon moves along a spherical orbit right outside the shell with r 0 = r sh + 0 + . When its trajectory is perturbed and let us assume that the photon spirals slightly inward. As the photon conserves its values of (η, ξ ), at the moment it intersects the infinitesimally thin shell, the trajectory is promptly deflected from the tangential direction perpendicular to the radius. This implies that it no longer stays in the vicinity of any spherical orbits, namely, the FPO for the region r 0 < r sh . Subsequently, the photon will spiral into the event horizon rather quickly instead of critically orbiting the black hole for an extensive number of times beforehand. This, in turn, indicates the photon is mapped onto a pixel disconnected from those associated with the FPOs of the region r 0 < r sh , which constitute the shadow edge. The above heuristic arguments can be reiterated in terms of the fact that the pair of values (η, ξ ) for an FPO of the outer region r 0 > r sh does not, in general, corresponds to that of an FPO of the inner region r 0 < r sh . Therefore, the photon which skims the thin layer of dark matter on the outside, by and large, does not contribute to the edge of the black hole shadow. Now, one may proceed to consider a peculiar case, where the pair of values (η, ξ ) for an FPO in the outer region matches that of an FPO in the inner region. Therefore when the trajectory of the former is perturbed and the photon eventually traverses the thin shell, it will still stay in the vicinity of the latter and eventually contributes to the edge of the shadow. Since the values of (η, ξ ) for both FPO are the same, according to Eq. (8), they also contribute to the same pixel in the celestial coordinates. This is precisely the Maxwell condition that we will explore further in the next section. It is worth noting that, even if an FPO does not directly contribute to the shadow edge, it is still subjected to strong gravitational lensing and therefore possibly leads to a nontrivial effect. Moreover, we note that the inverse of the above statement is still valid. In other words, the edge of the black hole shadow is entirely furnished by the FPOs in either region of the spacetime. If some FPOs in the outer region r 0 > r sh contributes to the shadow edge, the section of the shadow boundary is identical to those of a Kerr black hole with M = M TOT , a = a TOT . However, if some FPOs in the inner region r 0 < r sh contributes to the shadow edge, due to Eq. (8), the corresponding section of the black hole silhouette is different from that of the Kerr black hole that sits inside the thin shell. Before proceeding further, we summarize the key features regarding the FPOs in the present model and their connection with the black hole shadow edge as follows • The null-geodesic motion is determined by a pair of conserved quantity (η, ξ ). • The black hole shadow is a projection of asymptotic light rays onto a plane perpendicular to the observer's line of sight, and any point on its edge is governed by the twodimensional orthogonal (celestial) coordinates consisting of (α, β). • The boundary of the black hole shadow is largely determined by the unstable FPOs, 1 but some FPO may not contribute to the shadow edge. 
• Due to the presence of the thin shell, some formal FPO solutions for the pure Kerr spacetime are not physically relevant. • When the values of (η, ξ ) of a particular FPO on one branch match those of another FPO on a different branch, both FPOs contribute to the same point in the celestial coordinates, probably on the shadow edge. The Maxwell construction and black hole shadow In the last section, we discuss the close connection between the unstable FPOs and the black hole shadow edge. It is pointed out that the null-geodesic motion can be determined in terms of the pair of values (η, ξ ). As this is the same number of degrees of freedom to locate a specific point on the celestial coordinates, the dual (η, ξ ) of an FPO can be used to map onto the corresponding point on the shadow edge in the celestial coordinates. To be specific, the transition point on the shadow edge can be identified by matching (η, ξ ) for two FPOs from different branches, namely, where r cusp BH < r cusp TOT are two distinct FPO solutions, belong to the two distinct unstable branches of FPOs. Since the established condition is between two sets of quantities, it is reminiscent of the Maxwell construction (e.g. in terms of the chemical potentials) in a two-component system [63,64]. On a rather different ground, such a construction was derived from the Gibbs conditions for the phase transition in a thermodynamic system. For the present context, the pair (η, ξ ) determined by Eq. (9) is mapped to (α, β) in the celestial coordinates, which subsequently gives rise to a cusp on the shadow edge. Such a salient feature is similar to what has been discovered earlier [17,18] using more sophisticated black hole metrics. The present section is devoted to investigating different scenarios emerging from the proposed model. We show that, due to an interplay between different branches of unstable FPOs and the location of the discontinuity introduced by the thin shell, the resultant black hole shadow presents a rich structure. The following discussions will be primarily concentrated on three sets of model parameters, given in Table 1. The choice of the parameters aims at enumerating all relevant features in the present model, in terms of the feasibility of the Maxwell construction, as well as the different roles carried by the FPO. In particular, in the first case, the Maxwell construction can be established. Besides, the parameters are chosen so that unstable FPOs contribute both to black hole shadow edge and metastable states, after the unphysical ones are excluded. In the other two cases, on the other hand, one cannot find such a transition between different branches of FPOs via the Maxwell construction. However, two physically interesting scenarios are observed for these cases. The second set of parameters leads to an incomplete section of Einstein rings, while the third set gives rise to a fractured black hole shadow. For simplicity, for all three cases, we set M BH = 1.0, while satisfying |a BH | , |a TOT | 1. By using these parameters, the four rightmost columns list the calculated radii of LRs. The latter correspond to the radial bounds for the spherical orbits if there were no constraints associated with the thin shell. We note if one employs smaller values for |a BH | and |a TOT |, all the observed features remain. We first consider the first set of model parameters given in Table 1, and the calculated cuspy black hole shadow is shown in Fig. 1. 
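Before turning to the figures, it is perhaps useful to indicate how the matching in Eq. (9) can be carried out in practice. The short Python sketch below scans the two branches of spherical photon orbits, picks the closest pair of (ξ, η) values as a starting point, and then polishes the two radii with a root finder; it reuses the standard closed-form expressions quoted earlier, and the (M, a) values are hypothetical placeholders since the entries of Table 1 are not reproduced in this excerpt.

```python
import numpy as np
from scipy.optimize import fsolve

def xi_eta(r, M, a):
    """Critical impact parameters of the Kerr spherical photon orbit of radius r
    (same standard closed forms as in the previous sketch)."""
    xi = (3.0*M*r**2 - r**3 - a**2*(r + M)) / (a*(r - M))
    eta = r**3 * (4.0*M*a**2 - r*(r - 3.0*M)**2) / (a**2*(r - M)**2)
    return np.array([xi, eta])

# Hypothetical parameters (the entries of Table 1 are not reproduced here).
M_BH, a_BH, M_TOT, a_TOT = 1.0, 0.05, 1.001, 0.02

def maxwell_residual(x):
    """Eq. (9): match (eta, xi) of an inner-branch FPO at radius x[0]
    with those of an outer-branch FPO at radius x[1]."""
    return xi_eta(x[0], M_BH, a_BH) - xi_eta(x[1], M_TOT, a_TOT)

# Coarse scan of both branches to locate a sensible starting point ...
r1 = np.linspace(2.95, 3.05, 400)
r2 = np.linspace(2.96, 3.05, 400)
A = np.array([xi_eta(r, M_BH, a_BH) for r in r1])
B = np.array([xi_eta(r, M_TOT, a_TOT) for r in r2])
A[A[:, 1] < 0] = np.nan          # drop unphysical points (eta < 0)
B[B[:, 1] < 0] = np.nan
dist = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
i, j = np.unravel_index(np.nanargmin(dist), dist.shape)

# ... and polish it with a root finder.
r_cusp_BH, r_cusp_TOT = fsolve(maxwell_residual, x0=[r1[i], r2[j]])
print(r_cusp_BH, r_cusp_TOT)
# A cusp appears on the shadow edge only if the shell can be placed between the two
# radii; for the first parameter set the text quotes r_cusp_BH ~ 3.023 and
# r_cusp_TOT ~ 3.032, with r_sh = 3.028 in between.
```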
To give a more transparent presentation, for the figures, we adopt the following conventions. The FPOs associated with the edge of the black hole shadow are shown by solid curves. Meanwhile, the FPOs that are valid null geodesics of the metric but irrelevant to the shadow are depicted in dashed curves. The gray dotted curves are unphysical FPO solutions. As discussed in the last section, they must be excluded due to the physical constraint related to the thin shell. It is observed that the resultant spacetime is featured by two disconnected branches of unstable FPOs. The FPOs associated with the inner region (r < r sh ) are shown in solid and dashed blue curves. Those associated with the outer region (r > r sh ) are represented by solid and dashed orange curves. As shown in the left plot of Fig. 1, the Maxwell construction, Eq. (9), is indicated by the dashed red rectangle. It corresponds to the transition point (labeled "cusp") in the right plot. However, it is worth noting that, different from previous studies [17,18], the present metric does not possess any stable FPO. Therefore, the latter is not a necessary condition for the presence of the cusp. The Maxwell construction give r cusp BH 3.023 < r cusp TOT 3.032. Our choice of r sh = 3.028 ensures that there is still some room for an interesting feature. In Fig. 1, the cusp divides both branches of unstable FPOs into two parts, shown in solid and dashed curves, while labeled "shadow" and "metastable" in the right plot, respectively. The FPOs on one side of the cusp constitute the edge of the black hole shadow. The FPOs on the other side are, though not contributing to the shadow, still subjected to strong gravitational lensing. As a result, they demonstrate themselves as a particular lensing pattern connected to the cusp. This is nothing but the "eyelash" feature discussed in Ref. [17]. They are labeled "metastable" due to their apparent resemblance to the metastable states in thermodynamics, associated with superheated and subcooled states. In other words, such states are allowed physically but do not directly contribute to the shadow edge in question. Our particular choice of the metric parameters given in the first set of Table 1, namely, r − TOT < r sh < r cusp TOT and r cusp BH < r sh < r + BH , implies that the "eyelash" is present for both branches after the removal of unphysical FPOs. For instance, the dashed orange eyelash shown in the top left and bottom plots of Fig. 1 corresponds to the FPOs with their orbital radii r cusp BH < r 0 < r sh . Now, we move to consider the other two scenarios where the Maxwell construction cannot be encountered. In both cases, the resulting black hole shadow does not possess any cusp, but still, noticeable features are observed. In Fig. 2, we present the results obtained for the second set of metric parameters given in Table 1. From the left plot, the Maxwell construction can not be established, and the resultant boundary of the black hole shadow is subsequently determined by the metric of the Kerr black hole sitting at the center. However, since r sh < r + TOT , the LR solution at r = r + TOT , as well as the nearby FPO trajectories, must be physically excluded. As a result, the LR solution at r = r − TOT and the unstable FPOs attached to it will not form an enclosed contour when transformed into the celestial coordinates (η, ξ ). This is shown in the right plot of Fig. 
2, where the dashed orange curve indicates the visible section of the ring structure, while the dotted gray part is cut off since r_sh > r−_TOT. Moreover, even though the above incomplete arc is located in the region outside of the black hole shadow, we argue that it gives rise to a nontrivial effect. Similar to the role that the FPOs play in a horizonless compact object [65], it may lead to an infinite number of Einstein rings accumulated in the vicinity of the arc, on both the inside and the outside. The novelty in the present case is that the above structure does not form an enclosed curve, as it is truncated by the thin layer of dark matter at (η(r_sh), ξ(r_sh)). It is noted that the classical Einstein ring of the Kerr metric with M = M_TOT, a = a_TOT corresponds to the outermost contour of the above structure.

Fig. 1 The Maxwell construction and the corresponding black hole shadow with cusp. The blue curves denote the FPOs associated with the metric for the region r < r_sh, while the orange ones are those for the region r > r_sh. The solid blue and orange curves (labeled "shadow") are the collections of FPOs that contribute to the shadow edge. The dashed blue (barely visible) and orange curves (labeled "metastable") represent those FPOs that do not directly give rise to the shadow edge. The dotted gray curves (labeled "unphysical") correspond to the FPO solutions that are not physically permitted. Top left: The Maxwell construction, shown in the dashed red rectangle, establishes the transition point between the two branches of unstable FPOs in terms of η and ξ as functions of the orbit radius r_0. The curves with dark colors (dark blue and dark orange) are for η = η(r_0), while those with light colors (light blue and light orange) are for ξ = ξ(r_0). The unstable FPOs excluded from the shadow edge by the Maxwell construction are denoted as "metastable" due to their resemblance to the metastable states in a thermodynamical system. Top right: The corresponding shadow edge is shown in solid blue and orange curves, where the transition point is labeled "cusp". The eyelash-shaped extension of the shadow edge, shown in dashed curves, may still lead to a strong gravitational lensing effect. Bottom: The same as the top right plot, with the region in the vicinity of the "cusp" amplified.

Last but not least, let us discuss the scenario regarding the third set of metric parameters given in Table 1. The calculated black hole shadow is presented in Fig. 3. In the present case, we note that the resultant solutions for the LR radii are entirely identical to those of Fig. 2. However, due to the difference in the location of the thin shell, different sections of the FPO branches are truncated. As a result, as indicated in the right plot of Fig. 3, the original black hole shadow, defined by the black hole sitting at the center, cannot form a complete circle. It has to be compensated by part of the unstable branch of FPOs of the outer region, namely, the Kerr metric perceived from infinity. Since there is no explicit Maxwell construction, the two parts of the shadow arc seem to be disconnected. This, apparently, leads to a contradiction, since the shadow edge must be a continuous curve. We understand that, in practice, the thickness of the thin layer of dark matter, though insignificant, must be finite. As a result, any continuous matter distribution will dictate a specific form of the shadow edge which continuously connects the endpoint of the solid blue curve to that of the solid orange curve.
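Both of the scenarios just described follow from the same truncation rule applied to the outer branch of FPOs. As a rough illustration (again with hypothetical parameter values and the standard Kerr closed forms used above), the following sketch splits the outer branch into the part excluded by the shell and the admissible remainder and maps both onto the celestial plane; plotting the two sets reproduces the visible and cut-off arcs of Fig. 2, and the admissible set is also the outer section entering the fractured edge of Fig. 3.

```python
import numpy as np

def outer_branch_split(M, a, r_sh, theta0, r_lo, r_hi, n=800):
    """Split the outer-branch spherical photon orbits into the part excluded by the
    shell (r0 <= r_sh) and the admissible part (r0 > r_sh), both mapped onto the
    celestial coordinates (alpha, beta) of a distant observer."""
    r = np.linspace(r_lo, r_hi, n)
    xi = (3.0*M*r**2 - r**3 - a**2*(r + M)) / (a*(r - M))
    eta = r**3 * (4.0*M*a**2 - r*(r - 3.0*M)**2) / (a**2*(r - M)**2)
    beta2 = eta + a**2*np.cos(theta0)**2 - xi**2/np.tan(theta0)**2
    ok = beta2 >= 0.0                     # keep genuine spherical photon orbits only
    r, alpha, beta = r[ok], -xi[ok]/np.sin(theta0), np.sqrt(beta2[ok])
    visible = r > r_sh                    # dashed-orange arc in Fig. 2
    return (alpha[visible], beta[visible]), (alpha[~visible], beta[~visible])

# Hypothetical outer metric and shell location:
visible_arc, cutoff_arc = outer_branch_split(M=1.001, a=0.02, r_sh=3.0,
                                             theta0=np.pi/2, r_lo=2.97, r_hi=3.04)
print(len(visible_arc[0]), len(cutoff_arc[0]))   # sizes of the two arc segments
```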
Visually, the resulting black hole is featured by a sharp edge as we refer to it as "fractured". Further discussions and concluding remarks To summarize, in this work, we showed that rich features concerning the black hole shadow can be obtained using a simple but analytic toy model. The model we devised con-sists of a thin shell of slowly rotating dark mass wrapping around a slowly rotating Kerr black hole while preserving the axisymmetry of the system. It is found that the resulting metric possesses two disconnected branches of unstable FPOs. Moreover, their interplay with the location of the dark matter layer leads to various features such as the cuspy and fractured black hole shadow edge. In terms of the Maxwell construction, an analogy was made between the transition among different branches of FPOs and that occurs in a thermodynamic system. In particular, we have investigated three different spacetime configurations aiming at illustrating exhaustively all the features of the present model. The first set of parameters is designated to the case where the Maxwell construction can be established. The parameters are particularly chosen so that both unstable branches contribute to form an enclosed shape, which subsequently defines the contour of the black hole shadow. The point of transition corresponds to a pair of cusps on the shadow edge. Moreover, the remaining FPOs are mapped onto the eyelash shape extension of shadow edge on the celestial coordinates, reminiscent of the metastable states in a thermodynamical system. The other two sets of parameters are dedicated to the cases where the Maxwell construction cannot be established. The second set leads to a scenario where the shadow, solely defined by one branch of FPOs related to the Kerr black hole sitting at the center, is enclosed by an incomplete arc of Einstein rings. For the third set of parameters, again, both unstable branches of FPOs contribute to the shadow. However, since there is no Maxwell construction, the two sections of the shadow edge are apparently disconnected, giving rise to a fractured shadow edge. The above choices of metric parameters are representative of different physical outcomes implied by the proposed model. In terms of which, we show that interesting physics can be realized in a rather straightforward framework. A few additional comments are necessary to clarify the difference and novelty between the method proposed in the present study and those in the existing literature. One may understand that the procedure to calculate the black hole shadow consists essentially of two parts. On the one hand, one needs to identify the relevant FPOs, which largely furnish the boundary of the black hole shadow. On the other, these FPOs should be mapped onto the celestial coordinates (α and β) of the observer's local sky, as given by Eq. (8). The above formalism was introduced in [2,3], and later further developed by many authors [4][5][6]8,9]. The method proposed in the present study concerns some subtlety first part of the procedure. On the one hand, the shadow edge may not be entirely furnished by FPOs, as shown to be the case where there is no horizon [8,9]. Such a special section of the shadow edge can be furnished by either principal null geodesics or particular escaping geodesics. On the other hand, an unstable FPO may not constitute a pixel on the shadow edge. This was demonstrated in [17], where the cause was attributed to the emergence of a stable branch of FPOs. 
We show that such a scenario can be further explored by elaborating on a thermodynamic analogy, which naturally provides a more transparent interpretation of the physical content. For the case of the Kerr metric, an unstable FPO corresponds to an orbit that sits at the local maximum of the radial effective potential. While such an orbit is locally favorable as to furnish the edge of the black hole shadow, its role eventually also depends on the global properties of the effective potential. To be specific, it also has to be an escaping null geodesic in order to reach an asymptotic observer and its position on the celestial coordinates must be bounded from inside. In this sense, the above scenario is analogous to the condition of instability and phase transition in thermodynamics. The local stability is only a necessary condition for a thermodynamic state to be in equilibrium. At constant temperature and volume, a more general requirement is that the free energy must be globally minimized so that the state is not subjected to any phase transition. The condition for the states that marginally satisfy the last criterion is the well-known Gibbs conditions for the phase transition in a thermodynamic system. In the present context, they possess the form of Eq. (9) and are visually presented in Fig. 1 in terms of the Maxwell construction. To summarize, we proposed a method to determine which FPO should be counted in the shadow calculation, aiming primarily at the scenarios when some FPOs are irrelevant to constitute the shadow. In other words, our method further refines the traditional ones initiated by Bardeen and later developed by several authors, which is tailored to handle the specific cases discussed above. It is worth noting that the spacetime configuration under consideration, and in particular, the presence of the discontinuity in the effective potential, is indeed physically relevant. In what follows, we further elaborate on a few realistic scenarios where discontinuity plays a pertinent role. First, discontinuity makes its appearance in dark halos. By using the N-body numerical calculations, discontinuity, dubbed "cusp", was observed in the resultant halo profile in the context of CDM models [55,56]. Although such a feature in the dark halos was largely considered as a "problem", it has also been pointed out that the rotation curves of specific galaxies are largely compatible with the presence of discontinuous dark halos [66]. On the other hand, in the outer region of the dark matter distribution, a sudden drop in the density profile was also spotted numerically [67,68]. The latter is referred to as "splashback" in the literature, which also gives rise to a discontinuity in the outskirt of the profile. Intuitively, when a rotating black hole is surrounded by these types of dark halos, the corresponding shadow is subsequently subjected to the characteristics investigated in the present study. Second, discontinuity is also a pertinent feature in the context of a dynamically collapsing setup. In the study of the time evolution of a spherical collapsing matter, where the backreaction regarding evaporation is taken into consideration [69,70], the interior metric was found to possess discontinuity. Although the present study does not explicitly involve dynamic black hole metrics, it is plausible that the role of discontinuity essentially remains similar. 
As the third and last scenario, one might argue that discontinuity constitutes an important assembly component in the context of exotic compact objects (ECOs). Typical examples include the gravatar [71][72][73] and wormhole [74,75]. Indeed, the concept of a discontinuous thin shell is essential to construct the throat of traversable wormholes using the cut-and-paste procedure [76], which allows one to confine exotic matter in a limited part of the spacetime. Subsequently, the resultant metric naturally possesses a discontinuity. Also, as a horizonless ECO, gravastar is characterized by non-perturbative corrections to the nearhorizon external geometry of the corresponding black hole metric. In the original picture proposed by Mazur and Mottola, it is implemented by introducing different layers of matter compositions with distinct equations of state, and therefore, it naturally leads to discontinuity. Such a construction scheme has been subsequently adopted by most generalizations of the model, inclusively for rotating metrics. In this regard, ECOs equipped with unstable FPO have also been a topic of much interest [77][78][79]. Based on the above discussions, one concludes that discontinuity can be viewed as an astrophysically relevant feature in the black hole as well as ECO metrics. In the previous discussions, we have considered a simplified scenario where a discontinuity is planted by including a thin layer of dark matter surrounding the black hole at a given radial coordinate. To a certain extent, the proposed metric is somewhat exaggerated when compared to the cuspy dark matter halos [55,56]. However, the main goal of the present study is to illustrate that some interesting features of the black hole shadow can be understood in terms of a barebone approach. Moreover, one may argue that most of our results will remain valid when one generalizes the metric given in Eqs. (1)- (2) to that regarding a more realistic matter distribution. To be specific, one may consider a thin but continuous matter distribution is used to replace the dark matter shell with infinitesimal thickness while maintaining the axisymmetry. For the case where the Maxwell construction can be encountered, such as that studied in Fig. 1, the two endpoints of the metastable part of the FPO branches will be connected (probably by a branch of stable FPOs). On the left plot of Fig. 1, this corresponds to a curve that joins continuously between the endpoint of the dashed yellow curve and that of the dashed blue curve. It is noted that the Maxwell construction will remain unchanged as long as the section of the metric involving the rectangle stays the same. This is indeed the case if the matter distribution is confined inside the interval r cusp BH , r cusp TOT . Similar arguments can be given to the other two cases where the Maxwell construction cannot be established. In particular, as discussed in the last section, for the scenario investigated in Fig. 3, a finite thickness is required to properly evaluate the shadow edge between the two rings. The main advantage to introduce an infinitesimally thin layer of dark matter is that the dis-continuity brings mathematical simplicity, as well as a more transparent interpretation of the relevant physics content. As discussed in the appendix, the metric proposed in the present study is, in fact, an approximation up to the first order in a. 
The discrepancy between the induced metrics projected on the hypersurface from the exterior and interior spacetimes is of second order in the rotation parameter. Therefore, one may heuristically argue that such a small discrepancy between the two sides of the shell can be understood as a nonvanishing but insignificant thickness. By considering the above arguments regarding the validity of Maxwell's construction for a shell of small thickness, the approximation assumed for the metric does not undermine our conclusion. Moreover, we note that the second equality of Eq. (8), and subsequently, the entire equation, is valid for any asymptotically Kerr spacetime. As a result, the Maxwell construction utilized in the present study is valid for any axisymmetric metric which asymptotically approaches a Kerr solution. Based on the above discussions and the astrophysical significance of the Kerr-type metrics, we argue that our findings are meaningful and potentially valid on a rather general ground. Last but not least, we make a few comments about the relation with the empirical observations of the black hole shadow, and in particular, the image of M87* obtained recently by the EHT Collaboration [30,31]. The present work, similar to most studies in the literature, has been carried out in the context of a given spacetime configuration, for which the black hole shadow is evaluated. On the other hand, the inverse problem is physically pertinent from a practical viewpoint. From the measured black hole silhouette, one is expected to extract the essential information on the underlying spacetime metric. Such a topic has been explored by several authors [32][33][34]38]. The main idea, as proposed by Hioki and Maeda [32], is to first quantify the apparent shape and distortion of the shadow in terms of characteristic parameters, such as the radius and dent (R s , δ s ). Subsequently, by using an appropriate scheme, the information on the black hole, such as the spin and inclination angle (a, i), can be extracted from these quantities. In the framework of Einstein's general relativity, if one presumably considers a Kerr black hole in the vacuum, the conclusion was drawn that the spin and inclination angle can be determined with reasonable precision [32]. However, one encounters a few difficulties in more general as well as realistic scenarios. As pointed out by Bambi et al., from the apparent shape of the black hole shadow, it is rather difficult to tell apart an astrophysical Kerr black hole from a Bardeen one [33]. Furthermore, there is a strong cancellation between the effect of frame dragging and that of the spacetime quadrupole in a Kerr-like metric. As a result, the size and shape of the shadow outline depend weakly on the spin of the black hole or the orientation of the observer [80]. However, it was also pointed out that the above cancellation can be largely attributed to the no-hair theorem and, therefore, the violation of the latter might substantially modify the shadow [38]. Regarding the image of M87*, at the present stage, the resolution of the data is not yet desirable for quantitative analysis of the detailed features of the shadow edge. To be specific, the reconstructed image was shown to be rather sensitive to the specific characteristics of the crescent structure around the black hole [30,31], while the Einstein rings and black hole shadow cannot be inferred straightforwardly from the data. 
Since the cusp feature investigated in the present study resides on the specific detail of the shadow edge, it is not yet feasible at the moment. Nonetheless, it is worth pointing out, various studies have been performed out in an attempt to extract the black hole spin parameters using the reconstructed black hole image [34][35][36][37][38]. In conjunction with other observations, such as EMRI and electromagnetic spectra, it is expected to extract more precise information from the black hole candidates in the near future. In this regard, the ongoing observational astrophysics enlightens an optimistic perspective on a variety of promising frontiers. Therefore, it is worthwhile to explore the subject further, inclusively extend the study to more realistic scenarios. Data Availability Statement This manuscript has no associated data or the data will not be deposited. [Authors' comment: The calculations carried out in the manuscript have been explained in detail, which suffices to reproduce the results, and therefore no additional data was provided.] Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecomm ons.org/licenses/by/4.0/. Funded by SCOAP 3 . Appendix A: The junction condition of the thin shell In this appendix, we show that the metric proposed in Eqs. (1) and (2) are physically meaningful at the slow rotation limit, namely, |a| 1. To be specific, one validates the Israel-Lanczos-Sen's junction conditions [62] which deal with the case when a hypersurface partitions spacetime into two regions V + and V − . For such a separation to be physically meaningful, the tangential projections of metrics on , namely, the induced metrics, must be the same on both sides of the hypersurface. On the other hand, in the normal direction, the metric might be discontinuous. The amount of discontinuity, in terms of the extrinsic curvature, gives rise to the energy momentum tensor on the hypersurface. Furthermore, if one explicitly indicates the specific form of the equation of state, then the dynamical equation of motion of the hypersurface can be determined. It is noted that even though the metrics in V + = x μ : r > r sh (A1) and V − = x μ : r < r sh (A2) both satisfies the vacuum Einstein equation, for arbitrary a, the two induced metrics on the hypersurface defined by where (x μ ) = r − r sh (A4) are "incompatible". This is because the induced metrics from both sides cannot be put into isometric correspondence. This difficulty is well-known and closely related to that explored extensively in the literature, regarding the possible source for the Kerr metric. According to Krasiński [81], there are essentially four classes of approaches. 
The class relevant to the present scenario is the third one where one attempts to construct approximate physically acceptable configurations matched to the exterior Kerr metric. Rotating thin shell as an approximate source of the Kerr metric was initiated by Cohen and Brill [82] and extended by la Cruz and Israel [83]. Those studies indicated that metrics similar to that given in Eqs. (1) and (2) are feasible at the slow rotating limit. Following this line of thought, one may generalize the above result and argue that a thin rotating shell separates two slowly rotating spacetimes V + and V − . To be more specific, in what follows, we show that Israel's first junction condition is indeed satisfied up to first order in a. This can be accomplished by explicitly evaluating and comparing the induced metrics for both spacetimes V + and V − . Here, the interior and exterior spacetimes V ± are defined in Eqs. (A1) and (A2) and the shell is defined by Eq. (A4). At the slow rotation limit, one can expand the metrics in terms of a to first order [83,84]. The metric of the exterior spacetime V + gives ds 2 + = − f + dt 2 + + g −1 + dr 2 + r 2 d 2 − 4M + a + r sin 2 θ dt + dϕ, 4M + a + r sh sin 2 θ dt + dϕ. Now, we can show that the above induced metric essentially possesses spherical geometry by properly introducing the "rotating" It is readily shown, by using y a = (t + , θ, ψ + ) as the coordinates on the shell, the following (2 + 1) Minkowski metric on , h + ab dy a dy b = −dt 2 + + r 2 sh (dθ 2 + sin 2 θ dψ 2 + ). On the other hand, by practically identical arguments, one derives the induced metric for the interior spacetime V − h − ab dy a dy b = −dt 2 − + r 2 sh (dθ 2 + sin 2 θ dψ 2 − ), where Apparently, Eqs. (A7) and (A8) are isometric, which means that the tangencial projections of the spacetimes metrics on is continuous. On the other hand, in the normal direction, there is a discontinuity, measured by that of the extrinsic curvature, which gives rise to the energy-momentum tensor on the shell [62] S ab = − 8π where K ab = n α;β e α a e β b is the extrinsic curvature, e α a ≡ ∂ x α ∂ y a , and the normal vector n α ∂ α = −∂ r with = −1 for our present case. If the equation of state of the shell is further given, its dynamic evolution is subsequently governed by the Einstein equation. In the main text, our toy model has been constructed based on the above slow rotating case, which implies |a BH | , |a TOT | 1. In reality, there is some difference between the two induced metrics, whose magnitude is of the order a 2 . Intuitively, such a small discrepancy can be compensated by a nonvanishing but nonetheless thin shell.
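The cancellation leading to the induced metrics of Eqs. (A7) and (A8) is simple enough to be checked symbolically. The following sympy sketch (treating the coordinate differentials as formal symbols, which suffices for bookkeeping the expansion) verifies that, at first order in the rotation parameter, the choice ω = 2Ma/r³ for the angular velocity of the comoving angle ψ = ϕ − ωt removes the dt dϕ cross term of the induced metric on the shell, which is the statement used above for both the interior and exterior sides.

```python
import sympy as sp

# Formal symbols for the shell coordinates and their differentials.
theta, M, a, r = sp.symbols('theta M a r', positive=True)
dt, dtheta, dpsi = sp.symbols('dt dtheta dpsi')

omega = 2*M*a/r**3            # proposed angular velocity of the comoving frame
f = 1 - 2*M/r                 # the O(a^0) lapse factor of the slowly rotating metric

# Induced line element on the shell (radius r), keeping terms up to first order in a,
# rewritten in the comoving angle psi via dphi = dpsi + omega*dt:
dphi = dpsi + omega*dt
ds2 = (-f*dt**2 + r**2*dtheta**2 + r**2*sp.sin(theta)**2*dphi**2
       - (4*M*a/r)*sp.sin(theta)**2*dt*dphi)

ds2 = sp.expand(ds2).subs(a**2, 0)        # truncate at first order in a
cross_term = sp.simplify(ds2.coeff(dt*dpsi))
print(cross_term)                          # prints 0: the dt*dpsi term is absorbed
```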
v3-fos-license
2024-05-26T15:11:23.248Z
2024-05-24T00:00:00.000
270029885
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1996-1944/17/11/2538/pdf?version=1716550691", "pdf_hash": "54ed0a1b0ba475e2c5458c21429bf7b43c0e054c", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42288", "s2fieldsofstudy": [ "Materials Science", "Chemistry" ], "sha1": "b5cab1eddbf6a0ffc6be090c953a7d53bfb23110", "year": 2024 }
pes2o/s2orc
Hydrophobization of Reduced Graphene Oxide Aerogel Using Soy Wax to Improve Sorption Properties A special technique has been developed for producing a composite aerogel which consists of graphene oxide and soy wax (GO/wax). The reduction of graphene oxide was carried out by the stepwise heating of this aerogel to 250 °C. The aerogel obtained in the process of the stepwise thermal treatment of rGO/wax was studied by IR and Raman spectroscopy, scanning electron microscopy, and thermogravimetry. The heat treatment led to an increase in the wax fraction accompanied by an increase in the contact angle of the rGO/wax aerogel surface from 136.2 °C to 142.4 °C. The SEM analysis has shown that the spatial structure of the aerogel was formed by sheets of graphene oxide, while the wax formed rather large (200–1000 nm) clumps in the folds of graphene oxide sheets and small (several nm) deposits on the flat surface of the sheets. The sorption properties of the rGO/wax aerogel were studied with respect to eight solvent, oil, and petroleum products, and it was found that dichlorobenzene (85.8 g/g) and hexane (41.9 g/g) had the maximum and minimum sorption capacities, respectively. In the case of oil and petroleum products, the indicators were in the range of 52–63 g/g. The rGO/wax aerogel was found to be highly resistant to sorption–desorption cycles. The cyclic tests also revealed a swelling effect that occurred differently for different parts of the aerogel. Introduction The lotus leaf is commonly considered as an example of the relationship between hydrophobicity and surface morphology [1], sometimes without indicating that the surface of the lotus leaf (Nelumbo nucifera) is a composite of papillary epidermal cells and tubular epicuticular waxes [2,3].Plant waxes can be seen with the naked eye on the leaves and fruits of certain plants.Some plant waxes have found practical uses, for example, carnauba wax extracted from the leaves of the palm tree Copernicia cerifera.This wax has found a wide range of applications, particularly in high-gloss finishes for automobiles and controlledrelease pharmaceutical formulations [4]. This work is related to a research direction that can be described as the modification of rough structures, e.g., those of aerogels, with materials possessing low surface energies such as waxes.In particular, we are interested in the possibility of the additional hydrophobization of partially reduced graphene oxide aerogels using wax in order to improve its sorption properties with respect to organic solvents and oil.In principle, rGO/wax composites are known [5][6][7][8].Wax has often been used as a matrix to form thin samples that effectively Materials 2024, 17, 2538 2 of 11 absorb electromagnetic waves over a wide bandwidth, where rGO is a part of the filler.In this respect, [8] should be noted, where the dielectric losses of an rGO aerogel were optimized by adding wax.However, we have found no single publication which is devoted to regulating the hydrophilic-hydrophobic properties of aerogels by wax additives. 
Previously, we described a method for preparing composite superhydrophobic rGOpolytetrafluoroethylene (PTFE) aerogels and found that isopropanol, acetone, and hexane almost completely filled in the free volumes of the aerogel [9].The aerogel was also demonstrated to be highly resistant to cyclic loading with solvents.In this work, we describe the preparation of an rGO/wax composite aerogel using soy wax.The aerogel obtained was tested as a sorbent for such solvents as tetrahydrofuran, toluene, acetone, dichlorobenzene, n-hexane, n-heptane, methylpyrrolidone, and propanol-2, as well as crude oil and several petroleum products such as white spirit, kerosene, and machine oil.The sorption capacity of rGO/wax with respect to these solvents turned out to be higher than that of rGO/PTFE, which served as the basis for the certification of the resulting aerogel using various physicochemical methods. Experimental Section 2.1. Materials Soy wax (SW) for container candles was purchased from the Candles Only online store (Russia).Note that soy wax is hydrogenated soybean oil composed primarily of unsaturated fatty acids such as linoleic and linolenic acids.A method for obtaining oil from soybeans in described elsewhere [10].In addition to oil, soybeans also contain many other useful substances [11][12][13][14].Nonionic surfactant Polysorbate 80 (TWIN 80) was purchased from the Mendeleev Shop online store.We used a modified Hammers method [15] and obtained graphite oxide (GO).The details of our graphene oxide production are described in refs [16,17].The lateral size of graphene oxide particles was 0.5-5 microns, and the particle thickness was from 0.7 to 1.7 nm. Synthesis of the rGO/Wax Aerogel The SW suspension was prepared according to the "oil in water" type; namely, 21 mL of water, 3 g of soy wax, and 0.5 mL of TWIN 80 were added to a glass with a volume of 50 mL.The mixture was thermostatted at 55 • C until the wax was completely dissolved and then dispersed using an ultrasonic homogenizer, MEF 93.1 (MELFIZ-ultrasvuk LLC, Moscow, Russia), until a milky white emulsion was obtained.The procedure for preparing GO/wax aerogels was as follows: 150 mL of a graphene oxide suspension with a concentration of 10 mg/mL was treated with ultrasound for 3 min, and 15 mL of SV emulsion was introduced dropwise into it without stopping the ultrasound exposure.Ultrasound treatment was continued for another 5 min after stopping the injection of the SW emulsion.The resulting dispersion was frozen in cylindrical molds with a volume of 5 mL and mounted on a copper plate cooled with liquid nitrogen. The hydrogels were removed from the molds after freezing and were freeze-dried in an IlShin BioBase FD5512 freeze-dryer (Seoul, Republic of Korea).The resulting GO/wax aerogels had a density of 20 ± 2 mg/cm 3 .The aerogels obtained after drying were subjected to heat treatment to remove surfactants and reduce graphene oxide.Heat treatment was carried out stepwise up to 250 • C. The sample was first heated to 100 • C and held for 1 h; then, the temperature was raised by 50 • C at intervals of 1 h.As a result of the treatment, the aerogels changed color from light gray with a brown tint to almost black (Figure 1).The aerogel density after reduction decreased to 13 ± 1 mg/cm 3 .Similar annealing of pure wax led to its melting, but after cooling to room temperature, its color and density remained the same as before melting. 
Characterization and Measurements

The IR spectra (resolution 1 cm−1, number of scans 32) were recorded at room temperature in the range of 450–4000 cm−1 on a Perkin-Elmer "Spectrum Two" Fourier-transform IR spectrometer (Waltham, MA, USA) with an ATR attachment. The Raman spectra were obtained on a Confotec NR500 Raman microscope (SOL Instruments, Minsk, Belarus). The laser excitation wavelength was 532 nm, the power at the measurement point was 0.1 mW, and the beam diameter was ~2 µm. The thermogravimetric analysis (TGA) of the samples was performed using an STA 449 F3 Jupiter instrument (Selb, Bavaria, Germany). The measurements were carried out in the temperature range of 20–550 °C at a rate of 10 °C/min and in a He flow of 50 mL/min. The contact water-wetting angle was measured by using an OCA 20 instrument (Data Physics Instruments GmbH, Filderstadt, Germany) at room temperature. Electron micrographs were obtained on a COXEM EM-30 scanning electron microscope (electron energy of 15 kV and chamber pressure of 2 × 10−5 Pa). The XPS spectra were obtained using a Specs PHOIBOS 150 MCD electron spectrometer (Specs, Berlin, Germany) and an X-ray tube with an Mg anode (hν = 1253.6 eV). The vacuum in the spectrometer chamber did not exceed 4 × 10−8 Pa.

Study of Sorption Properties

The capacity of rGO/wax aerogel sorption with respect to organic solvents, crude oil, and petroleum products was studied under static conditions. For this purpose, 100 mL of a solvent, crude oil, or a petroleum product was poured into the test container in such a way that the thickness of the layer of the solvent, crude oil, or petroleum product was at least 2.5 cm. A pre-weighed aerogel with a mass of 0.070–0.078 g was placed in a mesh basket, which was immersed into the container so that the basket with the aerogel was freely placed inside the container. After 10 min, the basket with the aerogel was removed and the remaining solvent was drained for 1 min. After this, the contents of the basket were transferred to a tray with a known weight. Then, the tray with the aerogel was weighed, and the result was recorded. Sorption capacity (Qw) was calculated using the following formula:

Qw = ms/ma, (1)

where ms is the mass of sorbed solvent, oil, or petroleum product in g and ma is the mass of the original aerogel in g. The volume fraction (%) occupied by the solvent, oil, or petroleum product was calculated using the following formula:

Volume fraction (%) = (Va/Vo) × 100, (2)

where Vo is the volume of dry aerogel and Va is the volume of solvent adsorbed by the aerogel.

Cyclic tests of a sample of rGO/wax aerogel were also carried out in the "sorption–desorption" mode using hexane as an example to assess the possibility of multiple reuses of such a sorbent. To do this, the sample was weighed dry, then soaked in a solvent according to the method described above and weighed again. Next, the sample was dried in an oven at T = 65 °C for one hour. This treatment was sufficient to completely remove the solvent from the aerogel sample. After drying, the sample was weighed again. This set of procedures was considered as one sorption–desorption cycle. A total of 10 such cycles were carried out.
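For clarity, the two indicators defined in Eqs. (1) and (2) can be evaluated directly from the weighings described above. The short Python sketch below does this for a single measurement; the conversion of the sorbed mass into a volume through the solvent density, and the use of the bulk aerogel density (13 mg/cm³ after reduction) to estimate Vo, are our own illustrative assumptions, and the example numbers are hypothetical.

```python
def sorption_capacity(m_wet_g: float, m_dry_g: float) -> float:
    """Eq. (1): Qw = ms/ma, with ms the sorbed mass and ma the dry aerogel mass."""
    return (m_wet_g - m_dry_g) / m_dry_g

def volume_fraction(m_wet_g: float, m_dry_g: float,
                    rho_solvent_g_cm3: float, rho_aerogel_g_cm3: float = 0.013) -> float:
    """Eq. (2): percentage of the aerogel volume occupied by the sorbed liquid.
    Va is estimated from the sorbed mass and the solvent density; Vo from the dry
    mass and the bulk aerogel density (13 mg/cm^3 after reduction, assumed here)."""
    v_a = (m_wet_g - m_dry_g) / rho_solvent_g_cm3
    v_o = m_dry_g / rho_aerogel_g_cm3
    return 100.0 * v_a / v_o

# Example: a 0.075 g aerogel weighing 3.2 g after soaking in hexane (rho ~ 0.655 g/cm^3)
print(sorption_capacity(3.2, 0.075))        # ~41.7 g/g, cf. the value reported for hexane
print(volume_fraction(3.2, 0.075, 0.655))   # ~83 % of the aerogel volume
```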
IR Spectra

Let us note that no peaks with noticeable intensity in the region of 3000–3020 cm−1, corresponding to the stretching vibrations of =C-H bonds in linoleic acid [18], were detected in the IR spectrum of the wax sample presented in Figure 2a. In the IR spectrum of this wax sample, the most intense peaks correspond to the stretching vibrations of C-H bonds, characteristic of paraffins. However, in the region of the rocking vibrations of methylene groups, there is one peak at 718 cm−1 (Figure 2b). The absence of a second peak due to splitting indicates that the alkyl chains in the sample are arranged in a random manner [19]. Another interesting distinctive feature of the spectrum of soy wax with respect to the paraffin spectrum is the presence of a rather intense absorption band in the region of stretching vibrations of O-H bonds with the maxima at 3460 cm−1, 3308 cm−1 (marked in Figure 2a), and 3234 cm−1. Their positions indicate that the OH groups are also connected by hydrogen bonds. There are no peaks with noticeable intensity in the region of 1630 cm−1, which is indicative of the absence or extremely low concentration of water molecules in soy wax. A fairly intense absorption band, corresponding to the stretching vibrations of C=O bonds, has two maxima at 1737 cm−1 and 1729 cm−1. In terms of the ν(C=O) value, our sample of soy wax is different from the samples of soy wax described previously, where only a single peak was observed at 1745 cm−1 [20] and 1744 cm−1 [21]
Characteristic features of both components can be seen in the IR spectrum of the GO/wax aerogel in Figure 3a. A wide intense peak with a maximum at 3371 cm−1 is due to stretching vibrations of O-H bonds of GO. The spectrum of this aerogel also shows the characteristic numerous low-intensity narrow peaks in the wax "fingerprint" region. It is possible that the spectrum of the GO/wax aerogel cannot be described by a simple sum of the spectra of GO and wax due to the presence of small amounts of surfactants and transformations that may take place during the synthesis and drying of the GO/wax aerogel. Such an analysis is beyond the scope of this work.

It is interesting to analyze the spectrum of the aerogel after restorative heat treatment, i.e., rGO/wax as the target product (see Figure 3b). First of all, one can note in the spectrum of rGO/wax the inclined background and the absence of both the broad band of stretching vibrations of O-H bonds and the characteristic numerous narrow peaks in the "fingerprint" region of the wax. Such changes with respect to the GO/wax IR spectrum indicate a high degree of graphene oxide reduction. A slanted background usually appears in conductive rGO-based samples [28]. Despite the absence of the characteristic narrow low-intensity peaks, the positions of the most intense wax peaks at 2917 cm−1, 2835 cm−1, 1736 cm−1, and 719 cm−1 were preserved. One may assume that the wax is shielded by the conductive rGO sheets.
Raman Spectra

Figure 4 shows the Raman spectra obtained at different points of the rGO/wax aerogel. It can be seen that the main contribution to spectrum 1, displayed in blue color, is due to the peaks with maxima at 1337 cm−1 and 1578 cm−1. The positions of these peaks coincide with the positions of peaks D and G in the spectrum of graphene oxide [29][30][31][32][33]. In addition to peaks D and G, other narrower peaks can be seen in spectrum 2, displayed in red color. The positions of some peaks are the same as peak positions in the paraffin spectra; in particular, the position of the peak at 1128 cm−1 coincides with the position of the ν_as(CC) peak of paraffin [34][35][36][37]. It can be thought that this point of the analysis zone contains mostly soy wax. Thus, the Raman method indicates a nonuniform distribution of components in the aerogel under study.

SEM Images

Two SEM images of the rGO/wax aerogel are presented in Figure 5. It can be seen that the spatial structure with large, interconnected voids is formed by sheets of graphene oxide. The wax forms both rather large (200-1000 nm) clumps in the folds of graphene oxide sheets and small (several nm) deposits on the flat surface of these sheets. In the case of the rGO/PTFE (PTFE stands for polytetrafluoroethylene) aerogel, such homogeneity was not observed [9].
TGA Analysis

Figure 6 shows TGA curves for wax along with the GO/wax and rGO/wax composites. One can state that the presence of GO or rGO leads to losses beginning at 100 °C (GO) and 150 °C (rGO), which could be associated with the release of water from these components. It can be seen that the water loss in the case of graphene oxide is greater than in the case of reduced graphene oxide. For wax, the maximum rate of weight loss and complete weight loss occur at 420 °C and 470 °C, respectively. At 550 °C, the weight losses of the GO/wax and rGO/wax samples are 71.6 and 49.3%, respectively. Interestingly, the maximum loss rates of the GO/wax and rGO/wax composites were observed at temperatures of 388 °C and 371 °C, respectively, and these losses can be associated with wax evaporation (the samples were heated in a stream of high-purity helium). The second interesting fact that follows from our TGA studies is that the amount of evaporated wax calculated from the TGA curves for rGO/wax, 52%, is slightly higher than that of GO/wax (50%). This means that during the reduction of graphene oxide at 250 °C, its weight fraction decreases due to the release of water.
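To make the wax-content estimate above concrete, the sketch below shows one plausible way to read an evaporated-wax fraction off a TGA curve, namely as the weight lost over the temperature window where wax evaporates; the window bounds, the synthetic curve, and the data layout are assumptions for illustration only, not values taken from this study.

```python
# Illustrative sketch (assumed data layout): estimating the wax fraction of a
# composite from a TGA curve as the relative weight lost over the temperature
# window where wax evaporates. Window bounds here are hypothetical.
import numpy as np

def wax_fraction_from_tga(temperature_c: np.ndarray, weight_pct: np.ndarray,
                          t_start: float = 250.0, t_end: float = 470.0) -> float:
    """Return the weight fraction (%) lost between t_start and t_end."""
    w_start = np.interp(t_start, temperature_c, weight_pct)
    w_end = np.interp(t_end, temperature_c, weight_pct)
    return w_start - w_end

if __name__ == "__main__":
    # Synthetic curve: flat baseline with a single loss step centered near 380 °C.
    t = np.linspace(30, 550, 500)
    weight = 100 - 52 / (1 + np.exp(-(t - 380) / 15))  # hypothetical rGO/wax-like curve
    print(f"Estimated evaporated wax: {wax_fraction_from_tga(t, weight):.1f} wt%")
```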
CWA Results

The contact wetting angle (CWA) for the flat soy wax surface was found to be 100.5°. In the case of the GO/wax aerogel obtained in the shape of a cylinder (Figure 1), the CWA value measured for a flat end surface is significantly higher and equals 136.2°. The annealing procedure increases the contact angle of the end surface to 142.4°. Let us note that a superhydrophobic graphene aerogel with CWA = 151.1-153.9° was previously obtained by using surface chemical reduction [38]. For the rGO/PTFE aerogel we obtained, the CWA value was in the range of 161.9-163.7° [9]. In this regard, the rGO/wax samples obtained in this work are not outstanding.

Adsorption Properties

The sorption properties of rGO/wax aerogel were studied in relation to a large set of solvents with different chemical compositions, including water, oil, and petroleum products such as white spirit, kerosene, and machine oil. The data obtained are presented in Table 1, where the data on the sorption properties of the rGO/PTFE composite aerogel [9] are shown for comparison.

Table 1 notes: (a) the Q_w and Q_v parameters are defined by Equations (1) and (2); (b) the data from Ref. [9].

It could be seen that the rGO/wax aerogel is superior to the rGO/PTFE aerogel in the Q_w and Q_v parameters defined in Equations (1) and (2), respectively, whereas the opposite was found with respect to the CWA values. From this, it follows that a high contact angle with respect to water does not always imply a high sorption level for hydrophobic sorbates. The superiority of rGO/wax aerogel over rGO/PTFE aerogel is also reflected in its easier and more environmentally friendly production process. We note that the Q_v value of 101.8 for propanol-2 is not an error, as it has been verified several times. Apparently, this is a consequence of the swelling of the aerogel.
The stability of rGO/wax aerogel to sorption-desorption cycles was tested using hexane as an example. The test results are shown in Figure 7, from which it follows that the aerogel is highly resistant to cyclic loading with a solvent. Moreover, its capacity after the third cycle increased by 10-15%, which means that the sample swells during cycling, i.e., its volume increases.

During cyclic tests, it was observed that swelling during the sorption of hexane occurs differently for different parts of the dry rGO/wax aerogel, as shown in Figure 8. A video of the swelling process during hexane sorption can be seen in the Supplementary Information. After the desorption of hexane, significant shrinkage of the sample along one of the bases of the cylinder was observed, and the sample again became similar to a truncated cone. These shape changes occurred in each of the 10 cycles of sorption tests. We associate changes in the shape of the aerogel during the sorption-desorption cycles with the peculiarities of the synthesis, in particular with the temperature gradient at the freezing stage before freeze-drying.

Figure 4. Raman spectra of GO/wax aerogel in two different points.
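As an illustration of how the cyclic sorption-desorption data described above might be reduced to a per-cycle retention figure, the short sketch below computes Q_w for each cycle and its change relative to the first cycle; the cycle masses are invented placeholders, not the values behind Figure 7.

```python
# Illustrative sketch with made-up numbers: per-cycle hexane sorption capacity
# Q_w and its retention relative to cycle 1 for a reusable sorbent.

def cycle_capacities(dry_masses_g, wet_masses_g):
    """Return Q_w (g/g) for each sorption-desorption cycle."""
    return [(wet - dry) / dry for dry, wet in zip(dry_masses_g, wet_masses_g)]

if __name__ == "__main__":
    # Hypothetical masses for 5 of the 10 cycles (dry before sorption, wet after).
    dry = [0.075, 0.075, 0.076, 0.076, 0.076]
    wet = [3.20, 3.22, 3.45, 3.52, 3.50]
    q = cycle_capacities(dry, wet)
    for i, q_i in enumerate(q, start=1):
        retention = 100.0 * q_i / q[0]
        print(f"cycle {i}: Q_w = {q_i:.1f} g/g ({retention:.0f}% of cycle 1)")
```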
Role of the Alpha-B-Crystallin Protein in Cardiomyopathic Disease

Alpha-B-crystallin, a member of the small heat shock family of proteins, has been implicated in a variety of cardiomyopathies and in normal cardiac homeostasis. It is known to function as a molecular chaperone, particularly for desmin, but also interacts with a wide variety of additional proteins. The molecular chaperone function is also enhanced by signal-dependent phosphorylation at specific residues under stress conditions. Naturally occurring mutations in CRYAB, the gene that encodes alpha-B-crystallin, have been suggested to alter ionic intermolecular interactions that affect dimerization and chaperone function. These mutations have been associated with myofibrillar myopathy, restrictive cardiomyopathy, and hypertrophic cardiomyopathy and promote pathological hypertrophy through different mechanisms such as desmin aggregation, increased reductive stress, or activation of calcineurin–NFAT signaling. This review will discuss the known mechanisms by which alpha-B-crystallin functions in cardiac homeostasis and the pathogenesis of cardiomyopathies and provide insight into potential future areas of exploration.

Introduction

Proteins are the molecular effectors of cell function, providing structure and functionality in support of the essential biomolecular processes necessary for organism survival and proliferation. Chaperone proteins are present in a wide variety of organisms across the evolutionary spectrum and function to promote and maintain proper folding of proteins, especially under stress conditions. In response to increased temperatures, and other stressors such as oxidative stress, inflammation, and radiation, organisms initiate a "heat shock response", probably more aptly named a stress response, based on activation of, and increased expression of, heat shock proteins (Hsps) [1]. Of the many classes of heat shock proteins, the first to be discovered and the most abundantly expressed within cells are the molecular chaperones [1]. Molecular chaperones, divided into groups based on function, help stabilize, fold, and refold proteins even at physiologic temperatures but become even more critical for survival during times of cellular stress [1]. One such group is the small heat shock protein (sHSP) superfamily, whose members prevent aberrant protein interactions [1]. Small heat shock proteins function as ATP-independent molecular chaperones that share a common domain architecture across their members [2]. Common domains across the members include a highly variable amino acid N-terminal region, a central alpha-crystallin domain (ACD), and a flexible highly disordered C-terminal region [2]. These structural domains demonstrate distinct behaviors related to their amino acid composition. For example, the abundance of histidine residues in the ACD is thought to allow sHSPs to respond to changes in pH and metal ion availability, which is critical to their proper function [2] (Figure 1). Studies have implicated both the N-terminal region and the ACD as being directly involved in chaperone activities, while the C-terminal region likely plays a supportive structural role necessary for proper chaperone function [2]. As natively folded proteins destabilize, they likely expose hydrophobic residues, which become a signal for sHSP binding and stabilization [2].

The sHSPs, through their ACD, overlap structurally with another distinct set of proteins, the crystallins. The originally described function of crystallins is in the transparency and refractive power of the eye lens, although some have other important cellular functions such as preventing protein aggregation [3]. Crystallins are separated into two groups based on a conserved core domain among the related proteins as either alpha-crystallins or beta/gamma crystallins [3]. The major crystallin groups are categorized into two superfamilies of proteins, the alpha-crystallins, which fall under the small heat shock protein superfamily and interestingly have their own rarely used sHSP designators, and the beta/gamma crystallins, which make up their own protein superfamily [3]. The alpha-crystallins are made up of two genes: CRYAA and CRYAB, which encode alpha A and alpha B crystallins, respectively. Due to the ubiquitous nature of molecular chaperones, both these proteins are involved in a myriad of cellular functions and processes [1]. However, this also means that mutations in CRYAA and CRYAB have a wide array of deleterious effects from cancer to eye disorders and cardiac diseases. Decades ago, a novel CRYAB mutation was found to cause hypertrophic cardiomyopathy [4], sparking multiple studies into the effect of CRYAB mutations in cardiovascular disease. In this review, we focus on the cellular functions of CRYAB and the broad set of consequences associated with its dysfunction with a particular focus on its role in heart disease, highlighting decades of research and exciting new developments.

Alpha-Crystallin B Chain (CRYAB)
Wild-Type CRYAB Crystallins were initially discovered in the eye lens, where they are the predominant structural protein [5].From there, more and more crystallins have been found in organs across the body where they serve to prevent improper protein folding and aggregation.Of the two alpha-crystallin members, CRYAA is mainly found in the eye but also in the pituitary gland and spleen, while CRYAB is widely expressed across all organs and is highly expressed in skeletal and cardiac muscle [5].Most of the beta/gamma crystallins are involved in the transparency and refractive power of the lens; however, some have been found to have other functions, including betaB2-crystallin involved in neurogenesis and betaA3-crystallin involved in calcium binding [5].Wild-type CRYAB functions as a molecular chaperone, where its main functions are to prevent improper protein folding and aggregation and thus prevent proteotoxicity in cells [6][7][8].Wild-type CRYAB has also been found to be an anti-apoptotic regulator via multiple pathways such as inhibition of caspase-3 and Ras and inhibition of inflammatory responses, such as decreasing proinflammatory peptides and activation of macrophage immunoregulation [9].Additional functions include the regulation of calcium signaling [10], autophagy [11,12], and cellular survival [13]. CRYAB binds to denatured proteins and enhances their solubility, which plays an important role in preventing protein precipitation in cells [14].Proteotoxicity is the state in which unfolded and aggregated proteins negatively impact cellular function [15].Proteotoxicity can be divided into four classes based on functional effects: (1) improper protein folding or structural preservation resulting in altered degradation, (2) poor protein function due to dominant negative mutations, (3) toxic functions due to gain of function mutations, and (4) toxic aggregation of multiple misfolded proteins [15].Wild-type CRYAB functions to prevent the first and the fourth of the proteotoxic classes from occurring in cells.Improper protein folding is a universal problem that can occur in all cells.Protein folding is in part based on the primary amino acid sequence and is influenced by the amino acid side chains [16].Side-chain hydrophobicity plays a major role [16].However, given that the free energy, and therefore the stability, of native proteins is only a few kcal/mol lower than that of their unfolded counterparts, other intramolecular forces, such as backbone hydrogen bonding, cannot be excluded [16].The relatively small amount of free energy separating folded and unfolded proteins also highlights the fact that even single amino acid mutations can result in consequential changes to protein structure and function [16].Even though most proteins exist in their minimal free energy, natively folded state, a persistent degree of misfolding and unfolding can occur in cells even without stress.To stabilize proteins and prevent unfolding, one mechanism that cells have developed to mitigate this process relies on the use of molecular chaperone proteins.As a molecular chaperone, wild-type CRYAB plays a major role in preventing aberrant misfolding and guarding against the development of proteotoxicity.Therefore, it is not surprising that wild-type CRYAB is upregulated in a number of cardiovascular disorders, many of which involve some degree of proteotoxicity [13].It should be noted that a wide variety of proteins and cellular components function to maintain normal protein folding, and mutations that affect 
these entities lower cellular capacity to maintain proper folding but individually do not fully abolish proper protein folding [15].Therefore, the emergence of clinically apparent pathology may require long periods of repeated injury and additional stress on the system to provoke pathological changes [15]. CRYAB is activated in response to stress through post-translational modification.In response to a number of stresses that can cause alterations in protein folding, both physiological such as heat, TNF-α, and IL-1α and experimental such as okadaic acid and high concentrations of NaCl, CRYAB is phosphorylated at three different serine residues: Ser-19, Ser-45, and Ser-59 [17] (Figure 1).Interestingly, no phosphorylation has been seen in response to agents that increase intracellular cAMP [17].When phosphorylated, CRYAB translocates from the cytosol to the cytoskeleton presumably to prevent protein destabilization [17].CRYAB phosphorylation is likely driven by MAP kinase-activated protein 2 which is itself activated by p38 MAP kinase, suggesting its role in the regulation of CRYAB activity, but it could also be driven by p42/p 44 MAP kinase [17].Studies have shown that wild-type CRYAB overexpression is benign and protective against ischemia and reperfusion injury in vitro and in vivo in transgenic mouse models [18].Furthermore, cardiovascular diseases are often associated with increased oxidative stress.In that vein, overexpression of wild-type CRYAB in H9C2 cells has been shown to protect against oxidative stress and the apoptosis that accompanies it [19].The reduction in apoptosis occurs in association with decreased release of cytochrome c from the mitochondria and downregulation of the apoptosis regulator BCL2, which might be mediated through the PI3K/AKT pathway [19].Wild-type CRYAB is upregulated as an apoptosis inhibitor in certain cancers, and although this article will focus on the cardiovascular system, it is interesting to see the wide range of biological processes influenced by CRYAB [14].The role of wild-type CRYAB as a molecular chaperone is more fully understood through naturally occurring mutations that result in cardiac pathology, as discussed in the following sections. CRYAB 109 Mutations Mutations in the 109th amino acid of CRYAB have been associated with a range of pathologies from cataracts to myopathies [20,21].In terms of cardiac dysfunction, one of the more common mutations noted is CRYAB D109G , a missense mutation that has been implicated in the development of restrictive cardiomyopathy [21].Two additional mutations have been noted at the 109th amino acid: CRYAB D109A , described by Fichna et al. in 2017 [22], in which patients develop isolated myofibrillar myopathy without cardiac involvement, and CRYAB D109H , described by Sacconi et al. in 2012 [23], in which a single patient presented with late stage dilated cardiomyopathy [21].The CRYAB protein spontaneously forms dimers which then form oligomers in physiologic conditions minimizing activity [24,25]; these structures are disrupted in response to stress resulting in its activation and chaperone function [26,27].The amino acid D109 is highly conserved across species as it forms an integral ionic bridge stabilizing the CRYAB dimer [21], the loss of which appears to lead to aberrant chaperone function. 
The pathology of CRYAB D109G involves abnormal desmin aggregation, based on immunofluorescence localization of these aggregates in C2C12 and Hl-1 cells overexpressing CRYAB D109G [21].Desmin is a muscle-specific intermediate filament that helps stabilize the contractile apparatus and nucleus in sarcomeres and plays a role in sarcomere architecture.Additionally, desmin plays a role in maintaining tissue structure by tightly associating with cell-cell adhesion complexes [28].Desmin is highly expressed in muscle tissue, and proper organization of the desmin filaments is key to maintaining cellular function.Cardiac dysfunction often results from disruption of cardiac structure causing an alteration in contractile function; interestingly, CRYAB D109G affects cardiac cell structure indirectly through improper desmin function [21]. Pathologies arising from desmin-related dysfunction and aggregation are termed desminopathies, and when they involve muscle tissue, they are named desmin-related myopathies.Desminopathies can arise from mutations within desmin itself, and several pathogenic desmin mutations have been described; however, they can also arise from the dysfunction of proteins involved in protein folding and stability [28].CRYAB D109G results in the development of desmin-related cardiomyopathy because the mutant CRYAB is no longer able to efficiently stabilize and prevent the aggregation of desmin filaments.Desmin aggregation in cells is broadly characterized by two criteria defined by Goebel [29], (1) multifocal cytoplasmic inclusions or spheroid bodies and (2) disseminated accumulation of granulofilamentous material [28].Wild-type CRYAB forms stable dimers through ionic bridges between D109 and R120, which are disrupted by mutations in the region and are a particularly common site of missense mutations in patients with myopathies [21].Therefore, instead of binding desmin to stabilize the Z-bands and intercalated disks in muscles, they form cytoplasmic aggregates in conjunction with the mutated CRYAB protein, falling into the first classification of desmin aggregation [21].When the desmin filaments then aggregate, they cause cellular dysfunction which in the heart manifests mostly as forms of cardiomyopathies both hypertrophic and restrictive, although hypertrophy is more commonly noted [21]. CRYAB 120 Mutations Mutations at the 120th amino acid of CRYAB, like mutations at the 109th amino acid, are also involved in various pathologies across the body.The most common mutation associated with cardiovascular disease is the germline CRYAB R120G missense mutation, which is inherited in an autosomal dominant manner [30].As was noted in the previous section, CRYAB forms dimers that are stabilized by ionic bonds at the D109 and R120 amino acids [21].Interestingly, cryoelectron microscopy of purified CRYAB R120G has shown an abnormal quaternary structure with a molecular weight at least twice that of wild-type CRYAB, suggesting the mutation facilitates abnormal oligomerization [31].Interestingly, in vitro studies indicate that CRYAB R120G acts in a dominant negative manner, with the mutant protein compromising the function of wild-type proteins in the dimerized form [18]. 
CRYAB mutant aggregation then suggests that even in the cases of heterozygous mutations in CRYAB, the mutant protein might cause wild-type proteins to form aggregates resulting in the development of cardiac pathology.As was seen in the mutations at D109, mutations at R120 also lead to desmin aggregation and subsequent cellular dysfunction with loss of normal muscular striations seen in cardiomyocytes isolated from CRYAB R120G transgenic mice [30].Desmin-related myopathies can be defined based on electron-dense granular aggregates in the cytoplasm seen in electron microscopy [30].These structures are divided into two classes by Wang et al.; Type I structures had a relevantly low electron density, were large and regularly shaped, and tended to occupy a large portion of the central part of the cardiomyocyte while Type II structures were composed of finer and smaller granules that are more numerous than Type I granules, irregularly shaped, and surrounded by many fine filaments [30].As was noted in the previous section on CRYAB D109G , desmin aggregations can broadly be classified based on appearance as was done by Goebel [29], while the types outlined by Wang et al. are specific for the electron microscopy appearance in desminrelated myopathies.While distinct, the two classification systems correspond to each other as follows: Wang Type I aggregates in electron microscopy correspond to Goebel multifocal cytoplasmic inclusions or spheroid bodies, while Wang Type II aggregates correspond to the Goebel disseminated granulofilamentous material.It appears that Type I granules were mainly composed of mutant CRYAB aggregates, while Type II aggregates were composed of CRYAB mutants and desmin filaments [30].Although some aggregates contained both desmin and CRYAB mutant protein, interestingly, it was most common for CRYAB and desmin to aggregate independently of the other protein [30]. 
Mice overexpressing the CRYAB R120G variant additionally are under reductive stress, with myopathic hearts showing increased recycling of oxidized glutathione to reduced glutathione due to augmented expression and enzymatic activity of glucose-6-phosphate dehydrogenase (G6PD), glutathione reductase, and glutathione peroxidase [32].Crossing of these mice with mice expressing reduced levels of G6PD rescued the cardiomyopathic and proteotoxic phenotype [32].In cells with the CRYAB R120G mutation, autophagy, a process by which dysfunctional cellular components are removed, is inhibited, suggesting another mechanism by which mutant CRYAB negatively impacts the function of cells [33].Autophagy as a whole can be broken down into three broad categories: (1) macroautophagy, where cytoplasmic contents are sequestered in an autophagosome that then combines with a lysosome for degradation; (2) microautophagy, in which the lysosomal membrane invaginates, engulfing targets of degradation; and (3) chaperone-mediated autophagy, where chaperones target proteins with a specific peptide sequence which is then unfolded and translocated to the lysosome for degradation [34].In general, autophagy occurs at a base level in the cell, recycling cellular material, particularly damaged proteins, to prevent their harmful accumulation.During stress, increases in autophagy protect the cell from additional harmful accumulation of cellular material, to maintain proteostasis [34].Furthermore, inducing autophagy in CRYAB R120G cultured cardiomyocytes reduces the aggregation burden and cytotoxic aggregation intermediates, referred to as pre-amyloid oligomers [33].A previous study observed that in the hearts of mice overexpressing CRYAB R120G , there is increased autophagy as an adaptive response to proteotoxic aggregates [11].Crossing these mice with mice deficient in autophagy due to Beclin deficiency resulted in worsened proteotoxicity and cardiomyopathy [11].Enhancement of autophagy is thus a viable strategy for improving CRYAB R120G -induced proteotoxicity and cardiomyopathy [11].It is important to note that although protein aggregates are the hallmark of desmin-related cardiomyopathy, their accumulation is only weakly correlated with disease severity, while the amount of pre-amyloid oligomers more strongly correlates with human cardiovascular disease [33].As was noted in the D109 mutants, the cardiac pathology that is most often associated with CRYAB R120G is the development of desmin-related cardiomyopathy. 
CRYAB 123 Mutation A recently identified mutation in CRYAB by Maron et al., the CRYAB R123W mutation, was discovered through genetic analysis in twins that developed hypertrophic cardiomyopathy with temporal concordance [35,36].Follow-up mouse studies revealed that, unlike the previous two mutations, CRYAB R123W does not cause desmin aggregation but rather leads to cardiac dysfunction through sarcomere-independent mechanisms [36].Knock-in mice with the CRYAB R123W mutation do not develop hypertrophic cardiomyopathy spontaneously but undergo a distinct remodeling process upon pressure overload via transverse aortic constriction [36].Wild-type CRYAB has been previously reported to play a protective role against the development of pathological hypertrophy in pressure-overloaded hearts [10].As for the mechanism behind the protective effects of wild-type CRYAB in this setting, it has been proposed that CRYAB prevents the interaction between calcineurin and NFAT and inhibits the subsequent downstream activation [36].The CRYAB R123W mutant is unlikely to block that interaction as efficiently, therefore leading to aberrant activation [36].Crossing of Cryab R123W mice with NFAT-luciferase reporter mice resulted in an increase in NFAT-luciferase reporter activity, while overexpression in H9c2 cells also led to increased NFAT-luciferase reporter activity [36]. Five NFAT transcription factors have been discovered; NFATc1-c4 are regulated by calcineurin, whereas NFAT5 resides in the nucleus and is not under calcineurin regulation.Calcineurin is a serine/threonine phosphatase activated by sustained high levels of calcium that bind to calmodulin and lead to a conformational change in which the calcineurin C-terminal autoinhibitory domain is disengaged.Once active, calcineurin binds NFAT and de-phosphorylates several serine motifs in the regulatory domain of NFAT, exposing its nuclear localization signal leading to its nuclear localization and transcription factor activity [37].For proper signaling, the calcineurin catalytic domain must be able to bind to the conserved PxIxIT motif on NFAT, located N-terminal to its phosphorylation sites; inability to do so results in NFAT repression [37].It is possible that wild-type CRYAB blocks this interaction, as it has been shown that wild-type CRYAB inhibits the activation of NFAT and its nuclear translocation [38].Furthermore, structural analysis by Alphafold multimer, as seen in Figure 2, predicts that wild-type CRYAB strongly occupies the NFAT binding domain of calcineurin while the CRYAB R123W mutant does not.CRYAB R123W would thus be expected to bind less efficiently and facilitate calcineurin/NFAT activation through a de-repression mechanism.Interestingly, however, overexpression of CRYAB R123W in H9c2 leads to activation of NFAT activity, despite the presence of WT CRYAB, suggesting that an activation mechanism is present rather than a simple de-repression mechanism [36]. CRYAB G154S Mutation CRYAB G154S was discovered by Pilotto et al. in 2006 in a 48-year-old female patient found to have dilated cardiomyopathy, without ocular manifestations, with a family history of dilated cardiomyopathy found in her father [39].The phenotype was characterized by mild LV dilatation, moderately decreased ejection fraction, and a mild increase in serum CPK suggesting possible subclinical muscle involvement [39].It has furthermore been described by Reilich et al. 
in 2010 as a cause of progressive late-onset distal myopathy, without cardiac and ocular involvement [40]. Muscle cells were found in histology to be consistent with myofibrillar myopathy with aggregates staining positive for desmin and CRYAB, although the morphology of the aggregates in electron microscopy is different than those reported for CRYAB R120G mutations [40].

CRYAB R157H Mutation

The CRYAB R157H mutation was first noted in 2006 by Inagaki et al.; a 71-year-old patient was found to have dilated cardiomyopathy as well as a family history of dilated cardiomyopathy and sudden cardiac death [41]. Furthermore, CRYAB R157H was found to have an impaired ability to bind to the heart-specific N2B domain of titin/connectin compared to wild-type CRYAB [41]. Since wild-type CRYAB has been found to associate with the I-band region of titin/connectin, it has been suggested that impaired localization of mutant CRYAB R157H to the I-band region predisposes to early progression of heart failure under stressful conditions [41]. Unlike CRYAB R120G, CRYAB R157H does not seem to form cytoplasmic aggregates and does not lose affinity for the I26/I27 domain of titin/connectin found in muscle, suggesting a mechanism for the presence of cardiac but not skeletal pathology [41]. Structural analysis of CRYAB R157H found that there was minimal change in the secondary and tertiary structure of the protein; however, there was a significant change in the quaternary structure, with CRYAB R157H forming smaller oligomers upon heat stress [42]. Interestingly, although the mutant protein had lower thermal stability, it maintained a comparable chaperone activity compared to wild-type CRYAB [42]. Considering both the changes in quaternary structure and the maintenance of chaperone activity, it is possible that CRYAB R157H has significant changes in interaction patterns which might play a role in its pathogenesis [42].

CRYAB Mouse Models

Cardiomyopathies have been studied in various cellular and animal models. Mice are the most common model and are often used in conjunction with genetic alterations and induction of cardiomyopathic phenotypes via additional, often surgical, interventions [43]. Mice have been extensively used because of the ease of genetic modification and animal maintenance; however, they do not always recapitulate the key features of human disease [43]. The use of large animal models of cardiomyopathies, including cats, dogs, and pigs, is growing due to their ability to recapitulate critical features of human physiology and disease, but they have longer life cycles and are more difficult to maintain [43]. Here we will discuss the mouse models that have been generated and used to study CRYAB-related cardiovascular diseases, as current CRYAB research relies almost exclusively on them.

CRYAB R120G Mouse Models

Wang et al. reported the construction of transgenic mice expressing three different expression levels of the CRYAB R120G mutant [30]. Germline transmission was confirmed with normal Mendelian offspring ratios indicating no embryonic lethality across the expression levels [30]. Protein analysis of the transgenic mutant hearts showed elevated levels of proteins, especially of insoluble proteins likely representing protein aggregates seen on stained myocardial sections [30]. As the mutant mice aged, the number and size of aggregates increased [30]. Higher expression of mutant CRYAB R120G increased mortality, indicating a possible dose-dependent phenotype [30]. Mice with the highest expression level died around age 5-7 months, while mice with intermediate expression levels showed a similar phenotype at age 12-16 months [30]. Extracted hearts were grossly enlarged and dilated [30]. Necropsy also revealed pulmonary and hepatic congestion, pleural effusion, and subcutaneous edema consistent with congestive heart failure [30]. The mutant line 708, whose expression was intermediate, and mutant line 134, whose expression was the highest, were chosen for further study and compared to mice expressing transgenic wild-type CRYAB with expression and protein levels comparable to the mutants [30]. On a molecular basis, activation of the fetal genetic program was observed, with an upregulation of atrial natriuretic peptide and β-myosin and a downregulation of α-myosin, phospholamban, and sarcoplasm reticulum calcium in young mice harboring the CRYAB R120G mutant [30]. By 3 months, hypertrophy was grossly apparent based on increased ventricular weight/tibial length ratios and continued to worsen as the mutant mice aged [30]. Cardiomyocyte size progressively enlarged, and at 3 months, the increased heart size was attributed to concentric hypertrophy [30]. However, as the mice aged, the increased size was due to heart dilation suggestive of failure [30]. Both the early molecular changes and physiologic progression were consistent with the clinical progression seen in human cardiovascular diseases [30]. Comparable to the human pathophysiology of desmin-related cardiomyopathy, 3-month-old CRYAB R120G mice maintained contractile function, but relaxation impairments were noted [30]. With age, however, there was progression to severe disease comparable to that in humans with loss of contractile function and relaxation becoming load-dependent [30]. It was noted that total cellular levels of CRYAB R120G increased as the transgenic
mice aged, while total levels of wild-type CRYAB in transgenic mice remained constant despite higher transcript levels [30].This suggests that high levels of wild-type CRYAB are not necessarily detrimental to the cell; however, progressive accumulation of mutant CRYAB R120G protein that aggregates and induces desmin aggregation leads to progressive cardiac damage [30].Furthermore, it was also found that in CRYAB R120G aggregates, there was often a lack of desmin, suggesting an inability of the mutant to properly bind to desmin.Therefore, desmin aggregates are likely not due to an aberrant interaction with CRYAB R120G but rather form due to loss of chaperone activity [30].It was also found that desmin null CRYAB R120G transgenic mice have a less severe phenotype compared to mice with intact desmin, which suggests that the pathophysiology is not solely driven by loss of desmin function [30].A knock-in mouse model expressing normal levels of CRYAB R120G also demonstrates lens and myopathy phenotypes [44]. CRYAB R123W Mouse Models Mice harboring the Cryab R123W mutation were generated by Chou et al. using C57BL/6 mice with CRISPR/Cas9-mediated homology-directed repair to knock in the mutant allele [36].In this model, mice did not develop hypertrophic cardiomyopathy at a steady state, which is not unexpected given that many models require additional stress for pathology to emerge [36].At a steady state, young mice homozygous for the Cryab R123W mutation were found to have increased E max , a load-independent measure of contractility, compared to wild-type and heterozygous mice; interestingly, this seems to decrease with age [36]. Steady-state mice were also found to have an elevated E/E' indicative of diastolic dysfunction commonly seen in hypertrophic cardiomyopathy patients that developed with age [36].However, using this model in combination with transverse aortic constriction resulted in the development of marked pathological hypertrophy in homozygous and heterozygous Cryab R123W mutants not seen in wild-type mice [36].Like other mouse models of hypertrophic cardiomyopathy (HCM), these mice developed circumferential hypertrophy as opposed to the asymmetric septal hypertrophy seen in humans [36].But otherwise, Cryab R123W mutant mouse hearts showed a greater extent of cellular hypertrophy and large areas of parenchymal fibrosis compared to the wild type, which was consistent with key features of human HCM [36].It should also be noted that mice carrying the Cryab R123W mutation developed progressive systolic dysfunction after transverse aortic constriction, which did not worsen in mice with both Cryab R123W mutation and heterozygous MYBPC3 truncation, suggesting that CRYAB R123W acts in a sarcomere-independent manner [36].Overall, the Cryab R123W mutant mice displayed key elements of human HCM pathology and were stable during steady-state conditions, indicating that these mice are easy to maintain and readily induced to develop pathological hypertrophy with the addition of pressure overload [36].Of note, however, these mice did not develop proteotoxic desmin or CRYAB aggregates and demonstrated increased calcineurin/NFAT activation, indicating a distinct mechanism of promoting pathological hypertrophy compared to the CRYAB R120G variant [36]. 
Therapeutic Approaches CRYAB-associated cardiac pathology results from the failure of normal protein functions.In the case of CRYAB D109G and CRYAB R120G , the driving pathological mechanism is the induction and accumulation of misfolded proteins in the cell resulting in proteotoxicity [21,30].The CRYAB R120G mutation has been found to result in the development of desmin and aggresome protein aggregates in the cell, likely due to the loss of the molecular chaperone functions that prevent misfolded protein aggregation in response to stress [45].Interestingly, it has been found that the desmin aggregates associated with CRYAB R120G are amyloidophilic, and the most accurate description of these aggregates then would be amyloid-like.This means that CRYAB-based aggregates share some similarities with other amyloid-based degenerative diseases, such as Alzheimer's disease, although there is emerging evidence that this is not unique to CRYAB-based cardiomyopathy, as these aggregates have also been found in the hearts of patients with non-CRYAB mutation-induced hypertrophic and dilated cardiomyopathy [45].As a brief overview of cardiomyopathic disease, from a pathophysiologic standpoint, there are three broad classes: (1) dilated cardiomyopathy is the most common, defined by left ventricular dilation and reduction in ejection fraction; (2) hypertrophic cardiomyopathy, defined by impaired left ventricular relaxation and filling due to thickened ventricular walls; and (3) restrictive cardiomyopathy, defined by decreased elasticity of the myocardium which leads to impaired ventricular filling without systolic dysfunction.CRYAB-based development of cardiomyopathies is summarized in Table 1, and it varies between and even within specific mutations, but CRYAB D109G and CRYAB R120G are associated with desmin-related cardiomyopathies [21,30], while the development of hypertrophic cardiomyopathy due to CRYAB R123W could be related to abnormal calcineurin-NFAT signaling [36]. 
Since the formation of aggregates plays a central role in the increased proteotoxic state associated with desmin-related cardiomyopathy, it stands to reason that preventing or reversing the protein aggregation either by resolubilizing the aggregates or increasing their degradation could help alleviate the disease.Misfolded proteins are targeted and then cleared by the ubiquitin protease system, a central mechanism for minimizing proteotoxicity [46].Inadequate proteasome function and the resultant increase in proteotoxic stress have been implicated in various human heart diseases and their progression to heart failure [46].Studies have shown that cGMP-dependent protein kinase stimulates proteasome activity, thereby improving the degradation of misfolded proteins in cardiomy-ocytes [46].Cyclic nucleotide phosphodiesterase (PDE) is a key mediator in the breakdown of cGMP and affects the regulation of its associated signaling pathways.Studies have shown that inhibition of PDE1 has protective effects against isoproterenol-induced myocardial hypertrophy and fibrosis in mice and that the depletion of PDE1C is protective against pressure-overload-induced remodeling of the heart via PKA [46].Transgenic mice expressing mutant CRYAB R120G had a significant elevation in the levels of PDE1A [46].Furthermore, inhibition of PDE1A in mice expressing CRYAB R120G that had developed heart failure with preserved ejection fraction improved cardiac diastolic function and survival compared to non-treated mice [46].It should also be noted that PDE1A inhibition resulted in decreased levels of misfolded CRYAB R120G [46].From a mechanistic standpoint, it has been suggested that PDE1 inhibition improves cardiac function in proteotoxic states through PKA-and PKG-mediated proteasomal activation [46], suggesting that inhibition of PDE1 is a possible therapeutic that can be used to target protein-aggregation-based cardiomyopathies.Another therapeutic option would be to target protein stability via increased chaperone activity with the intent to prevent or reverse protein misfolding and aggregation, rather than increasing protein degradation.A novel molecular tweezer, CLR01, functions as a nanochaperone, preventing abnormal protein aggregation by selectively binding to lysine residues [47].The binding of CLR01 to lysine residues is achieved by hydrophobic and electrostatic interactions that compete for binding at critical residues involved in the aggregation of misfolded proteins [47].Importantly, as a possible therapeutic approach, CLR01 has been tested in several in vitro and in vivo models without signs of toxicity [47].In transgenic mice expressing CRYAB R120G , daily injection of CLR01 resulted in decreased levels of aggregates and improved proteotoxicity in hearts compared to untreated mice [47].Preventing protein aggregation then could allow misfolded proteins to remain soluble and then remain a target for degradation pathways [47].Outlined above are two mechanisms that can be targets for therapeutics, increasing misfolded protein degradation and preventing misfolded protein aggregation.Although these are distinct pathways, interestingly, there seems a possibility for synergy between the two methods, with CLR01 increasing the number of misfolded proteins accessible to the protein degradation system and PDE1A inhibition enhancing the efficacy of the proteasomal activation. 
Another example of a possible therapeutic that has been studied is doxycycline, which has been found to improve mortality in CRYAB R120G transgenic mice with late-stage cardiomyopathy [48].It was found that doxycycline can prevent aberrant protein aggregation in mice with CRYAB R120G desmin-related cardiomyopathy, but interestingly, it does so through an autophagy-independent mechanism as opposed to the previously discussed therapeutics [48].It was found that there was a decrease in both aggregates and oligomeric CRYAB R120G with doxycycline treatment, suggesting doxycycline inhibits CRYAB R120G from inducing aberrant oligomerization, possibly allowing for normal CRYAB oligomeriza-tion [48].Additionally, it has been found that sHSPs often exist in complexes with other sHSPs or target proteins [49].HSPB1 and HSPB8 are two other sHSPs able to modify the aggresomal formation of CRYAB R120G and inhibit its ability to induce the formation of amyloid oligomers [49].This suggests that induction of other non-mutated sHSPs could have therapeutic purposes in CRYAB R120G cardiomyopathy, as was seen when transgenic CRYAB R120G mice were treated with geranylgeranylacetone, an inducer of sHSPs, showed improved survival and heart function and improvements in heart size and fibrosis [49].Exercise has been found beneficial in delaying the onset and progression of neurodegenerative diseases in animal models, including amyloid-based Alzheimer's models [49].Given that some CRYAB mutations induce amyloid-like aggregations, it was hypothesized and discovered that exercise improves both symptoms and mortality in mice with CRYAB R120G cardiomyopathy with a reduction in amyloid oligomers [49].Cellular death secondary to toxic aggregate formations found in mutant CRYAB cardiomyopathies is another avenue being explored for therapeutics [49].Studies have found that overexpression of BCL2 and administration of the mitoK(ATP) channel opener to CRYAB R120G mice lead to improvements in mitochondrial function, cardiac function, and survival [49].These approaches to proteinopathy-based cardiac disease are exciting as they offer various novel methods of therapeutics for a disease that sorely lacks effective medications. Conclusions and Future Directions A large body of evidence has accumulated in support of the essential role of α-Bcrystallin in normal cardiac homeostasis through its function as a molecular chaperone to reduce proteotoxic aggregation and to attenuate pathological calcineurin/NFAT signaling.Naturally occurring mutations that lead to desmin-related cardiomyopathy, restrictive cardiomyopathy, and hypertrophic cardiomyopathy underscore its relevance to human disease.An analysis of the pathological mechanisms in these various conditions underscores the broad effects of CRYAB on cellular function and how different mutations can have distinct effects on either protein aggregation or calcineurin/NFAT signaling to promote divergent phenotypes.Future work to determine the specific effects of pathological mutations on CRYAB structure, function, and interacting proteins will likely provide further insight into downstream pathological mechanisms and identify future targets for therapeutic intervention. Figure 1 . Figure 1.Diagram depicting the domains, mutated residues, and phosphorylation sites of wild-type CRYAB. Figure 2 . Figure 2. 
Figure 2. (A) AlphaFold structural prediction of the interaction between calcineurin (blue) and wild-type CRYAB (green), illustrating that wild-type CRYAB binds well at the calcineurin NFAT binding site. (B) AlphaFold structural prediction of the interaction between calcineurin (blue) and CRYAB R123W (red), illustrating that CRYAB R123W binds poorly at the calcineurin NFAT binding site. (C) Overlap of the previous two AlphaFold predictions showing the difference between wild-type CRYAB (green) and CRYAB R123W (red) binding to the calcineurin (blue) NFAT binding site.

Table 1. Pathology associated with different CRYAB mutations.
v3-fos-license
2019-04-19T13:02:30.876Z
2019-04-17T00:00:00.000
121313732
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1371/journal.pone.0215510", "pdf_hash": "c784ed67fad8d3ea46803f3bcee14b4dc8a9e6d4", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42292", "s2fieldsofstudy": [ "Environmental Science" ], "sha1": "c784ed67fad8d3ea46803f3bcee14b4dc8a9e6d4", "year": 2019 }
pes2o/s2orc
Local adaptation in natural European host grass populations with asymmetric symbiosis

Recent work on microbiomes is revealing the wealth and importance of plant-microbe interactions. Microbial symbionts are proposed to have profound effects on fitness of their host plants and vice versa, especially when their fitness is tightly linked. Here we studied local adaptation of host plants and the possible fitness contribution of such symbiosis in the context of abiotic environmental factors. We conducted a four-way multi-year reciprocal transplant experiment with natural populations of the perennial grass Festuca rubra s.l. from northern and southern Finland, the Faroe Islands and Spain. We included F. rubra with and without its symbiotic fungus Epichloë, which is vertically transmitted via host seed. We found local adaptation across the European range, as evidenced by higher host fitness of the local geographic origin compared with nonlocals at three of the four studied sites, suggesting that selection pressures are driving evolution in different directions. Abiotic factors did not result in strong fitness effects related to Epichloë symbiosis, indicating that other factors such as herbivory are more likely to contribute to fitness differences between plants naturally occurring with or without Epichloë. Nevertheless, in the case of asymmetric symbiosis that is obligatory for the symbiont, abiotic conditions that affect performance of the host may also cause selective pressure for the symbiont.

Introduction

Variability in direction and magnitude of natural selection is a major force shaping biodiversity [1]. As a result, natural populations encountering differing selection pressures become genetically differentiated and locally adapted [2][3][4]. Local adaptation is traditionally defined as higher fitness of local than nonlocal individuals in a given environment [5]. Selective agents driving local adaptation consist of both abiotic and biotic factors, and the latter become especially apparent when local populations of closely interacting species coevolve [6,7]. Plant-associated symbionts have the potential to be highly beneficial for the fitness of their hosts, as has been shown for nitrogen-fixing rhizobia and mycorrhizal fungi. This makes symbiotic associations between plants and microbes excellent study systems for examining how patterns of local adaptation are shaped by symbiosis, especially because plants as sessile organisms need to adapt to surrounding environmental conditions. Estimating the role of symbiotic associations in local adaptation should involve natural environments, where fitness benefits are determined by resource acquisition and allocation. For example, using local and nonlocal soils and reciprocal inoculation, it has been shown that local soil and local genotypes of arbuscular mycorrhizal fungi promote resource acquisition in Andropogon gerardii [8]. At its most extreme, coevolution of hosts and symbionts can result in obligatory associations where survival or reproduction are not possible without the symbiotic partner, and fitness of the symbiont and the host become tightly linked. In these cases, selection against nonlocal host plants results in potential fitness reduction for the symbiont. However, the role of vertically transmitted symbionts in local adaptation of their hosts to the abiotic environment is still largely unknown.
Systemic fungal symbionts of grasses of the genus Epichloë (Ascomycota; Clavicipitaceae) are an example of asymmetric interactions, where the fungus grows asymptomatically between host cells inside aboveground tissues of the plant. Epichloë reproduces asexually by growing hyphae in newly produced tillers and seeds of the host grass, resulting in vertical transmission, and making them entirely dependent on their host [9]. Epichloë species are specialized symbionts of grasses with a shared coevolutionary history with their hosts and are transmitted in host maternal lines [9][10][11]. In agricultural grasses, Epichloë have been viewed as mutualists mostly due to the herbivore-deterring alkaloids that they produce [12]. Studies on natural populations have shown that asymmetric symbiosis that is facultative to the host plant can range from mutualistic to parasitic [13]. Harmful effects on the host plant are most evident in sexual strains of Epichloë species that produce spore-forming structures called stromata-a condition known as choke disease-that prevents or hampers development of seeds on the host plant [14]. However, even asexual vertically transmitted Epichloë species (formerly Neotyphodium, [15]) can be harmful to the host if costs of harboring the symbiont exceed the benefits [16][17][18]. This balance could be altered in novel environments, where allocation of host resources can change and potentially result in costs of harboring Epichloë or benefits of increased resistance to abiotic stress. As symbiosis is obligatory for reproduction and persistence of Epichloë, adaptive evolution of both parties is potentially heavily affected by host plant performance. Because of the tight and asymmetric fitness linkage, adaptation of the host plant to local conditions (temperature, precipitation and annual variation in day length) can play an important role in evolution of grass-Epichloë symbiosis. Local adaptation in plants is often associated with differentiation in flowering responses to temperature and photoperiod, and responses to these factors can influence potential for vertical transmission via successful seed production, making environmental factors influencing host plant performance indirectly governing also fitness of the symbiont. Local adaptation of the host can therefore be beneficial for the symbiont, but unless Epichloë provides fitness benefits for the host or especially if it is costly, plants with Epichloë could be selected against. Natural selection can promote occurrence of Epichloë even when the fungus is not transmitted to all offspring if patterns of selection vary in heterogeneous environments [13,19]. Natural grass populations have been found to consist of plants with and without Epichloë at variable frequencies and they might be completely absent in some areas [20][21][22]. This is in part due to often incomplete vertical transmission, resulting in tillers and seedlings without Epichloë even when associations are mutualistic [23,24]. Loss of the symbiont can be associated with absence of selective advantage and potentially also from genetic mismatches between host and symbiont that can arise from evolutionary conflicts between reproductive modes and genetic variation. These conflicts could be prevalent when cross-pollination of flowers introduces new host genotype combinations in seeds that can prevent growth of the vertically transmitted Epichloë species that cannot actively choose their hosts [25]. 
We used natural populations of an outcrossing perennial grass, Festuca rubra L. sensu lato (Poaceae, red fescue), and its symbiont Epichloë festucae (Leuchtm., Schardl, & Siegel) as a model to study local adaptation in host plants and whether naturally occurring plants with or without the symbiont show different fitness responses. Classical reciprocal transplant experiments, where individuals from different environments are reciprocally transplanted into the native environments of each origin, allow testing for local adaptation, evidenced by higher fitness of the local population compared with each of the nonlocal populations [5]. To our knowledge, few reciprocal transplant studies with multiple sites spanning a large geographic area have been conducted, especially in the context of how fitness of the host can be modulated by the symbiont at native sites of natural host populations in the field. Our prediction was that local host populations have become locally adapted and that host genotypes naturally harboring E. festucae (referred to as Epichloë from here on) could have reduced fitness due to costs of symbiosis, increased fitness due to resistance to abiotic stress, or show no differences related to the tested abiotic environments when compared with naturally Epichloë-free genotypes. Although positive and negative effects of fungal symbionts including Epichloë on growth and reproduction, photosynthetic rate, abiotic stress tolerance, and competitive ability have been documented [26,27], most of these studies have been conducted with cultivars or in agricultural, nutrient-rich environments or greenhouse conditions [28][29][30]. Use of natural populations and environments can demonstrate ecologically relevant fitness differences, and whether hosts harboring the symbiont are favored by selection in nature. We conducted a four-way reciprocal transplant experiment across a broad geographic scale in Europe (northern Finland, the Faroe Islands, southern Finland and Spain) and estimated fitness by quantifying several fitness components over three years at each site. We aimed at answering the following questions: first, do we find evidence for local adaptation of the host to abiotic environments on a large geographic scale? Our hypothesis was that in a reciprocal transplant experiment in the home environments of each geographic origin in the field, local plants would have higher fitness than nonlocals, and we tested this hypothesis both at the level of estimated cumulative fitness and of individual fitness components (survival, biomass, flowering propensity and number of flowering culms) in each year. Second, how does Epichloë symbiosis contribute to host plant fitness in natural environments? More specifically, we tested whether naturally occurring host genotypes with or without Epichloë show different fitness responses in local or novel abiotic environments.

Study system

F. rubra s.l. (referred to as F. rubra from here on) is an outcrossing, perennial, fine-leaved, cool-season tuft grass distributed across the Northern Hemisphere. It grows in oligotrophic, mesotrophic and saline habitats with low or moderate levels of competition, such as riverbank meadows, semiarid grasslands, rocky outcrops and sea cliffs, and it can also be found in harsh arctic habitats as well as alpine meadows. The species has commercial value, as it is one of the most important turf grasses. Natural populations of F. rubra include plants with variable ploidy levels from tetraploid to octoploid [22].
The proportion of plants with the Epichloë symbiont varies between F. rubra populations, with no Epichloë in some regions [20,22,[31][32][33]. To reduce maternal effects prior to the experiment, all field-collected plants were grown in pots filled with a mixture of peat and sand at the Turku University greenhouse in Ruissalo, Finland, where the plants produced new tillers in a common environment. Tillers representing random genotypes from each region (n N Finland = 40, n Faroe = 34, n S Finland = 31, n Spain = 39; S1 Table) were then split to obtain vegetative clones (up to four replicates of each genotype to be planted at each site). The tillers were pre-grown in cell pots (3 cm diameter) for 2-4 weeks prior to planting. Presence of Epichloë in each plant was determined and has been documented earlier, based on observations of hyphal growth from surface-sterilized leaf cuttings plated on 5% potato dextrose agar on petri dishes [22]. For estimating performance of natural host genotypes, approximately equal numbers of genotypes with and without Epichloë from each region were included, except from southern Finland, where none of the plants had Epichloë (S1 Table). We did not use plants with manipulated Epichloë status, because our aim here was to study effects on natural genotype combinations. Epichloë status of plants was verified by spot checks during the experiment. Examination of ploidy levels in the previous study with flow cytometry showed that most of the Spanish plants included in our study were tetraploid and nearly all genotypes from all other regions were hexaploid [22]. As there were only two octoploid genotypes from northern Finland and three from Spain, and one tetraploid genotype from southern Finland and the Faroe Islands, we were not able to include ploidy level information in our statistical analyses. (Fig 1). To reduce effects of environmental variation within sites, planting was done in a fully randomized design at each site. Because F. rubra occurs naturally in relatively competition-free habitats, competing vegetation was removed periodically throughout the course of the experiment. Experimental areas were fenced to exclude large vertebrate herbivores.

Reciprocal transplant experiment

Long-term climatic observations from weather stations near each transplantation site show differences among sites (Table 1). In general, the growing season is very short at the northernmost site and limited by winter frost and snow. The Faroe Islands are very humid and temperatures are mild year-round. Temperatures in southern Finland are clearly higher than in northern Finland and the growing season is longer, but the climate is more continental than at the Faroe Islands. The Mediterranean climate in Spain is characterized by warmer temperatures, and the growing season is limited by dry and hot summers. In addition, soil samples were collected at each transplantation site in June 2014 by sampling 3 cm diameter soil cores (depth 0-5 cm), which were analyzed for nitrogen, carbon, phosphorus, potassium, calcium and magnesium content as well as pH by Eurofins Viljavuuspalvelu Oy (www.eurofins.fi). Soil from northern Finland and the Faroe Islands was found to be more acidic and had higher N and C and lower P, K, Ca and Mg contents than soils from southern Finland and Spain (Table 1).

Phenotypic measurements

To estimate fitness, data on multiple fitness components were recorded over several years at the four transplantation sites.
Survival and flowering status of each plant was determined at each site in three years (2013-2015). Survival in the fourth year at the Spanish site, in 2016, was also recorded. To estimate reproductive output at the end of the growing season, the total number of flowering culms was counted for each plant in the three study years.

Statistical analysis

To estimate total fitness, cumulative survival at the end of the experiment was calculated to be able to test for differences in long-term survival. For comparing reproductive fitness over several years, cumulative reproductive success (cumulative number of flowering culms) over three years was used for the sites in northern Finland, the Faroe Islands and southern Finland, and over two years in Spain, by also including the value zero for plants that were not alive or did not flower in each year. Pairwise tests for local adaptation and for fitness effects of the presence of Epichloë at each site were conducted for both estimates of total fitness. Counts of live/dead or flowering/vegetative for each genotype (1-4 per site) at each site were used as response variables for survival and flowering propensity, and for all other traits genotypic means at each site were used. Likelihood ratio tests (two-tailed) between generalized linear models in R 3.4.1 [34] were used for all statistical comparisons. For modeling survival and flowering propensity, a binomial distribution with a logit link function was used, and a Gamma distribution with a log link was used for number of flowering culms and biomass. A Gaussian distribution was used for cumulative reproductive success with a log10(x + 1) transformation. Model fit was visually inspected using diagnostic plots of residuals. We identified which fitness components and years show different responses depending on region of origin or presence of Epichloë (present or absent) across sites (three-way interactions between site, region of origin and presence of Epichloë and their two-way interactions) and included population nested within region as a fixed covariate to control for between-population variation within regions. Year was not included as a variable in the models due to lack of degrees of freedom; data were instead analyzed separately for each year, as fitness effects of Epichloë symbiosis have been found to vary between years in earlier studies [25]. These tests were not performed on the number of flowering culms in the 2nd and 3rd year due to low sample size (≤ 5) at some sites when only few individuals flowered. Based on these global test results, we proceeded to test for local adaptation and the effect of Epichloë symbiosis on fitness. Cases with a significant interaction between region of origin and site were selected for specific pairwise testing for local adaptation. Tests for local adaptation were done according to the 'local vs foreign' criterion [5] by performing pairwise comparisons between the local and each of the nonlocal geographic origins at the region level (populations from northern Finland, the Faroe Islands, southern Finland and Spain each combined as one region) at each site. For these tests, the models we compared differed only in that they had the regions of origin for the pair to be tested merged in one model and all regions defined separately in the full model, and they included Epichloë status as a covariate in all models. In this way, the likelihood ratio models tested whether categorizing the two regions of origin separately has a significant effect.
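The pairwise model comparisons described above can be illustrated with a short sketch in R, the environment named in the text. This is a minimal illustration only, not the authors' analysis script: the data frame and column names (dat, alive, origin, epichloe) are hypothetical placeholders, and only the model structure (a binomial GLM for survival, with a reduced model that merges the two origins being compared, tested against the full model with a chi-square likelihood ratio test) follows the description above.

# Minimal sketch of one 'local vs foreign' test at a single site, assuming a
# data frame 'dat' with columns: alive (0/1 survival), origin (factor with
# the regions of origin) and epichloe (factor: present/absent).
# Column names are illustrative, not taken from the study's data files.

# Reduced model: the local origin and the focal nonlocal origin share one level
dat$origin_merged <- dat$origin
levels(dat$origin_merged)[levels(dat$origin_merged) %in%
                            c("local", "nonlocal_focal")] <- "merged"

# Full model keeps the two origins separate; Epichloë status is a covariate
full    <- glm(alive ~ origin        + epichloe,
               family = binomial(link = "logit"), data = dat)
reduced <- glm(alive ~ origin_merged + epichloe,
               family = binomial(link = "logit"), data = dat)

# Likelihood ratio (chi-square) test: does separating the two origins improve
# the fit, i.e. do local and nonlocal plants differ in survival at this site?
anova(reduced, full, test = "Chisq")

For traits modeled with other error structures, only the family argument would change, for example Gamma(link = "log") for biomass or number of flowering culms, as stated above.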
Significance of fitness differences between genotypes with or without Epichloë at each site was tested in cases when any of the factors involving the presence of Epichloë were significant. In these analyses, plants from southern Finland were excluded, as none of them had Epichloë. These tests were done by comparing a model with Epichloë presence as a fixed effect with a null model, separately for each region of origin at each site.

Local adaptation

Analysis of fitness data collected at the four transplantation sites showed putative cases of local adaptation, indicated by significant region of origin x site interactions in all fitness components in all years (Table 2). More specifically, we found that host plants from the sampled geographic regions showed different fitness responses depending on planting site (significant site x region of origin interactions) in survival, aboveground biomass and flowering propensity in each year, and in number of flowering culms in the first year (Table 2). Descriptive statistics and sample sizes for plants from each geographic region planted at each site can be found in supplementary tables for tests of local adaptation (S2 Table) and presence of Epichloë (S3 Table).

Cumulative survival at the end of the experiment was higher for local plants compared with nearly all the nonlocal origins at three of the studied sites, supporting the local adaptation hypothesis (Fig 2, Table 3). In northern Finland, where 77% of the local plants had survived by the end of the experiment, only 63% of the plants from the Faroe Islands and 44% of plants from both southern Finland and Spain were still alive. A contrasting pattern was seen at the Faroe Islands, where the plants from northern and southern Finland had significantly higher survival by the end of the experiment (78% and 85%, respectively) when compared with survival of local plants (50%). In southern Finland, cumulative survival of plants from the Faroe Islands (45%) and Spain (56%) was significantly lower than survival of the local plants at the same site (72%). In Spain, cumulative survival of the local Spanish plants (78%) was higher than that of the nonlocals from northern Finland (24%), southern Finland (20%) and the Faroe Islands (39%). Comparing separate years showed that survival differences accumulated over all study years in northern Finland and in Spain (Fig 2, Table 3). At the Faroe Islands, higher survival of nonlocal plants compared with the locals was due to differences in the first two study years (Fig 2, Table 3). In southern Finland, low survival of the nonlocal Faroese plants was only seen in the first year, while high mortality among the Spanish plants at that site occurred only in the second year (Fig 2, Table 3).

Analysis of aboveground biomass production showed evidence for local adaptation at all four sites (Fig 3, Table 3). In general, all the plants were substantially larger at the site in Spain already in the first year than at any other site (Fig 3, S1 Fig). At the site in northern Finland, nonlocal plants from Spain produced less than half the amount of aboveground biomass of the local population in each year (mean ± SD) (1st year: local 2.99 ± 3.05 g vs Spanish 1.15 ± 2.91 g; 2nd year: local 1.50 ± 2.32 g vs Spanish 0.26 ± 0.60 g). Also in the second year, plants from southern Finland produced about half of the biomass when compared with the locals (0.61 ± 1.65 g).
Evidence for local adaptation was also found at the Faroe Islands, where the local plants produced twice as much biomass in the first year (3.24 ± 4.95 g) as plants from northern (1.53 ± 1.50 g) and southern Finland (1.38 ± 1.68 g). This difference was even greater in the second year, when the locals produced up to four times as much biomass (12.47 ± 22.22 g) as the plants from northern (5.66 ± 8.40 g) and southern Finland (3.91 ± 5.06 g) and Spain (2.59 ± 4.03 g), and the differences were significant. In southern Finland, the Spanish plants had significantly lower biomass than the local plants both in the first (local 7.73 ± 7.65 g vs Spanish 4.61 ± 6.58 g) and in the second year (local 14.36 ± 20.92 g vs Spanish 2.25 ± 3.47 g). Relatively high biomass production of Faroese plants was also found in the second year in southern Finland, where the nonlocal Faroese plants outperformed the locals by producing three times the amount of biomass (43.96 ± 73.18 g). In both years in Spain, biomass production of the local plants (

Table 2. Results of statistical comparisons of fitness of Festuca rubra with and without the symbiont Epichloë festucae from northern Finland, the Faroe Islands, southern Finland and Spain, using likelihood ratio tests between generalized linear models in R testing for interactions between planting site, region of origin and presence of Epichloë (symbiont status) and the main effect of Epichloë on fitness components in the reciprocal transplant experiment at the native sites of each geographic region. Main effects or lower order interactions were not tested in cases where a higher order interaction was significant.

Table 3. Tests for local adaptation in Festuca rubra based on results of likelihood ratio tests between generalized linear models in R for fitness estimated in the reciprocal transplant experiment at native sites of regional origins in northern Finland, the Faroe Islands, southern Finland and Spain, for estimates of cumulative fitness and fitness components. Significant differences supporting local adaptation (local has higher fitness than nonlocal) are marked with plus signs (+) and cases of local maladaptation (local has lower fitness than nonlocal) with minus signs (-).

Cumulative reproductive output, combining survival to flowering and the number of flowering culms produced in each year, showed evidence for local adaptation in northern and southern Finland and Spain, but not at the Faroe Islands (Fig 4, Table 3). In support of local adaptation, flowering propensity of plants that had survived each year was significantly higher for the local plants when compared with the nonlocals, indicating differences in flowering induction or a slower development rate of flowering culms (S2 Fig, Table 3). This was especially evident in northern Finland, where all three nonlocal geographic origins had significantly lower flowering propensity than the locals in all studied years. In Spain, flowering propensity of the locals was significantly higher compared with all or some of the nonlocals in all years. Production of flowering culms among the plants that flowered showed evidence for local adaptation in the first year at three of the four studied sites (S3 Fig, Table 3). In northern Finland, local plants produced 40-50% more flowering culms (6.72 ± 5.81) than plants from the Faroe Islands (4.49 ± 4.47) and Spain (4.68 ± 5.14). There were no differences in flowering culm production at the Faroe Islands.
In southern Finland, the local plants had 40% more flowering culms (11.83 ± 7.60) than plants from northern Finland (8.62 ± 7.77) and more than two

Fitness of host plant genotypes with and without Epichloë symbiosis

Performance of host plants with or without Epichloë did not show strong differences in fitness comparisons (Table 2). We did find a significant interaction between region of origin and presence of Epichloë for biomass in the second year, and between planting site and presence of Epichloë for flowering propensity in the second year (Table 2). Pairwise tests showed some significant fitness differences between genotypes with and without Epichloë among plants from northern Finland and the Faroe Islands, but no such differences were found for the Spanish plants (S4 Table). In northern Finland in the second year, plants without Epichloë from the Faroe Islands were twice as large in terms of biomass (without Epichloë 1.31 ± 1.64; with Epichloë 0.51 ± 0.75; Deviance = 8.05, P < 0.01) (Fig 3) and had more flowering individuals (18%) than plants with Epichloë (3%) (Deviance = 4.31, P < 0.05). In southern Finland, there was a three-fold difference in cumulative reproductive output favoring host plant genotypes with Epichloë among the plants from the northern Finland region (without Epichloë 17.04 ± 28.17; with Epichloë 57.38 ± 85.43; Deviance = 6.32, P < 0.05) (Fig 4). In the second year in southern Finland, there was a 2.5-fold difference in biomass production favoring plants from northern Finland with Epichloë (without Epichloë 8.75 ± 12.44; with Epichloë 21.82 ± 32.04; Deviance = 5.62, P < 0.05) (Fig 3). Also, 40% of the Epichloë-harboring plants from northern Finland flowered, while the percentage of flowering plants without the symbiont was only 4% (Deviance = 18.37, P < 0.0001). In Spain, cumulative survival at the end of the experiment was significantly higher for Faroese plants with (48%) than without Epichloë (23%) (Deviance = 4.01, P < 0.05) (Fig 2).

Local adaptation across Europe

Our large-scale, multi-year reciprocal transplant experiment revealed local adaptation in F. rubra across Europe, as evidenced by higher fitness in local plants compared with nonlocals in northern Finland, southern Finland and Spain. These findings demonstrate the role of natural selection in shaping genetic and phenotypic differentiation in this widespread host grass species. Evidence for local adaptation was supported by multiple fitness components and cumulative fitness estimates. Other studies on grassland plants have documented local adaptation across Europe in some but not all studied species [35,36]. In our study case, local adaptation of the host can have consequences for the evolution of both partners, as the fungal symbiont Epichloë is entirely dependent on the grass.
Therefore, local adaptation of the host grass will benefit symbiotic partners, and causes natural selection acting against nonlocal host plant genotypes to also decrease performance of nonlocal Epichloë strains. Comparisons across large geographic distances often show local adaptation and differences in selection pressures (e.g. [37][38][39][40]). At higher latitudes, plants need to be adapted to strong seasonal changes in temperature, including long winters with temperatures below freezing and variation in day length and light quality. We found that in northern Finland, fitness advantage of the local origin was due to higher survival and flowering propensity than the nonlocal origins. This could be due to differences in photoperiod responses that are required for flowering induction and preparation for overwintering. In F. rubra as well as in other perennial grasses, flowering induction occurs in two steps where consecutive periods of short days, cold temperatures (vernalization) and long days are required [41]. It is also possible that floral development is in general slower in plants adapted to a longer growing season, and flowering culms in plants from other geographic origins did not have enough time to develop. Plants originating from northern Finland had surprisingly high performance at all sites, indicating that for example responses to photoperiod did not lower their fitness in nonlocal environments. Long-term survival of these northern genotypes was low for example in Spain, possibly due to drought stress during hot and dry summers. Plants from Spanish semiarid grasslands cope with this situation by means of summer dormancy, but seashore populations of F. rubra have been found to remain green throughout the growing season [42]. Common garden experiments with pasture grasses have shown that plants from different geographic origins differ in their responses to climatic extremes, such as drought, that are associated with climate change [43,44]. At the Faroe Islands, local plants were outperformed by nonlocal plants in all fitness components except biomass, but the Faroese plants had relatively low survival and reproductive success also at nonlocal sites. At the Faroe Islands, the surviving local individuals seemed to be able to utilize the long growing season in the cool and humid oceanic climate, resulting in larger biomass compared to nonlocals. Larger vegetative size and low flowering propensity and number of flowering culms could indicate that the Faroese plants differ in their allocation to sexual vs vegetative reproduction, as has been found in sea shore populations of F. rubra in Spain [42]. It is also possible that strong selective pressures related to for example temperature extremes such as cold winters might not have a large role in shaping Faroese populations. This can also have resulted in presence of maladaptive alleles via gene flow either from other regions or cultivars of the same species. There might also be more fine-scale environmental differences across the Faroe Islands that would be revealed by reciprocal transplantations between the specific islands. Furthermore, as our study focused on large-scale climatic differences in abiotic factors between the regions, inclusion of effect of competition with surrounding vegetation could reveal local adaptation also in Faroese plants, if their higher biomass production would be correlated with better competitive ability. However, as F. 
rubra occurs in habitats with relatively low competition, inclusion of a competition treatment would have significantly changed our results. Role of Epichloë symbiosis in fitness variation of the host Vertical transmission mode of the symbiont is predicted to be associated with mutualistic interactions [45], predicting that Epichloë should be generally promoting fitness of their hosts. This is supported by data from agronomical systems where abundance of nutrients can contribute to beneficial effects of symbiosis to the host, as has been documented for example in perennial ryegrass (Lolium perenne) and tall fescue (Festuca arundinacea) [46]. In a study with both wild grasses and cultivars of tall fescue, an overall beneficial effect of Epichloë was reported in a transplantation experiment, but similarly as indicated in our present study on F. rubra, fitness effects depended on the environment and host plant genotype and varied between years and fitness components [47]. Herbivory is the most studied factor contributing to evolution of the mutualistic association due to alkaloid compounds produced by Epichloë [9]. Fitness benefits for the host grass are determined by resource acquisition and allocation, especially when the symbiont is using resources for production of protective alkaloids requiring nitrogen [48]. In natural populations and environments the defensive role may be more variable and context dependent, as levels of alkaloid production profiles and their success for preventing herbivory can vary [49]. In addition, presence of Epichloë can also result in reduced vegetative biomass, as was found for example in a natural population of Festuca arizonica in a field experiment [48]. We focused here on the role of large scale abiotic factors driving evolution of local adaptation and found no strong fitness differences between plants with or without Epichloë, indicating that herbivory rather than abiotic factors is driving local evolution involved in Epichloë symbiosis. However, in some cases novel environmental conditions can induce gain or loss of fitness in plants with Epichloë in the studied environments. Loss of fitness in Faroese plants harboring Epichloë in our study at the site in northern Finland could be due to breakdown of mutualism in nonnative grass-Epichloë genotype combinations in stressful conditions, associated with expression changes in a set of fungal genes involved in enhanced nutrient uptake and degradation [50]. On the contrary, in Spain survival of the same Faroese genotypes with Epichloë was improved to some degree. This could potentially result from improved resistance to drought in Faroese plants with Epichloë as in Spain plants have to cope with seasonal droughts and with very intensive sunlight year-round. Further studies on drought and salt stress resistance could reveal whether thick and waxy leaves of the Faroese plants with Epichloë would confer drought tolerance and enable persistence of green leaves throughout the growing season also in dry habitats such as Spanish grasslands. Plants with Epichloë from northern Finland showed a clear increase in reproductive fitness and biomass production compared with plants without Epichloë when transplanted in southern Finland where the growing season is longer than in their native environment, although no individuals with Epichloë have been found in natural F. rubra populations in this region. 
Fitness comparisons of local host plants genotypes with or without Epichloë indicated that abiotic factors did not seem to impose selective pressure on Epichloë symbiosis, especially at native sites. In another study with Spanish F. rubra at the same experimental site in Spain, plants with Epichloë had greater phosphorus content than plants without Epichloë [51], potentially yielding fitness differences in a longer term. Local grazing pressures by large vertebrate grazers not tested here are likely to contribute more to selective advantage of symbiosis, as Epichloë occur at high frequencies at collection sites of the studied geographic origins with heavy grazing in northern Finland (reindeer), Faroe Islands (sheep) and Spain (cattle). Even in the absence of fitness benefits, mathematical models based on metapopulation theory have predicted that vertically transmitted Epichloë species can be maintained in populations even in the absence of fitness benefits to the host and when Epichloë is not transmitted to all developing seeds [19]. However, as fitness effects of Epichloë on F. rubra have been found to change depending on plant age and from year to year [16,18,49,51], studies examining survival and germination success of seeds with or without Epichloë could provide more evidence for selective advantage depending on the environment. Differences in germination success could also contribute to resulting frequencies of Epichloë occurrence in grass populations, if seedlings with or without Epichloë are more successfully recruited [24]. Possible fitness consequences for the fungal symbiont Selective forces driving evolution of the fungal partner Epichloë are tightly correlated with host fitness, and persistence and vegetative reproduction of the host enables survival and growth of Epichloë. Local adaptation of the host plant in our study has strong implications for fitness of the fungal symbiont Epichloë, as nonlocal fungal genotypes are selected against when survival of the nonlocal hosts is low and reduced probability to flower results in prevention of vertical transmission via seed. This scenario is possible because the fungal symbiont Epichloë is entirely dependent on the grass and unable to switch between hosts due to predominant vertical transmission. Therefore, local adaptation of the host grass will benefit both symbiotic partners and causes natural selection acting against nonlocal host plant genotypes to also decrease performance of nonlocal Epichloë strains. Microbial local adaptation to their host's internal environment can be tested by reciprocal inoculation between host and microbe origins. Most studies on microbial local adaptation to date have been conducted on host-pathogen systems [52,53]. In mutually beneficial symbiotic interactions, local mycorrhizal fungi have been shown to contribute to host fitness in local and nonlocal environments [8,54]. Our present study included only plants naturally occurring with or without Epichloë, as this allowed determining how selection acts on natural genotype combinations in the wild. However, in this system it is also possible to grow the same host plant genotypes with and without Epichloë where the symbiont has been experimentally removed but requires careful control of how the removal treatment (heating seeds or fungicide application) could affect host plant fitness. 
Studies involving experimental inoculation of selected Epichloë strain in seedlings without Epichloë, would enable testing for different grass-Epichloë genotype combinations, and even three-way interactions (host genotype x Epichloë genotype x environment) in the wild. Also, in order to better estimate the role of production of anti-herbivore compounds in natural grass-Epichloë populations, studies are currently on the way to characterize alkaloid production profiles of Epichloë originating from different regions. Conclusions Our study shows that adaptive evolution in contrasting climatic environments has resulted in local adaptation across the European range in the perennial host grass F. rubra. We found that large-scale abiotic environments did not result in strong differences in fitness between genotypes naturally occurring with or without Epichloë in the absence of high herbivory pressure. In the case of tight fitness linkage, however, it should be noted that selection against nonlocal host genotypes indirectly also decreases fitness of nonlocal symbiont genotypes and thus possibly contributing to the evolution of the symbiont. Future studies should strive for combining reciprocal transplantation experiments with reciprocal inoculations to unravel more complex interactions between host and symbiont genotypes and natural environments.
v3-fos-license
2018-04-03T02:15:51.794Z
2010-01-01T00:00:00.000
22139392
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://actamedica.lfhk.cuni.cz/media/pdf/am_2010053010003.pdf", "pdf_hash": "3b3d6bf9c3322e8adeb3f6ee4c6c85af492c91ab", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42293", "s2fieldsofstudy": [ "Medicine" ], "sha1": "3b3d6bf9c3322e8adeb3f6ee4c6c85af492c91ab", "year": 2010 }
pes2o/s2orc
NON-STEROIDAL ANTI-INFLAMMATORY DRUG INDUCED INJURY TO THE SMALL INTESTINE

Introduction

Non-steroidal anti-inflammatory drugs (NSAIDs) represent the group of most commonly used drugs worldwide. The target group for use of NSAIDs comprises the elderly population, with higher morbidity and mortality and a higher risk of drug toxicity (91). This fact, together with the aging of the population in developed countries, means increasing medical and economic problems in this context. In a large prospective analysis of adverse drug reactions in 18,820 patients in the United Kingdom, NSAIDs were responsible for 1.9% (363) of all hospital admissions (29.6% of all drug-related adverse events) in the period of time studied (67). The most frequent were gastrointestinal, nervous system, renal, and allergic adverse effects. Gastrointestinal toxicity is widely recognised, especially in the gastroduodenal area. Over the past decade, an increasing quantity of data has been gathered documenting small bowel involvement and its importance, showing its previous underestimation. This is related to advancements in small bowel evaluation, especially in small bowel endoscopy. The goal of this review is to discuss current knowledge of the range of NSAID-induced small intestinal injury, its clinical features, diagnosis and management.

History

The first description of NSAID (aspirin)-induced gastropathy identified by endoscope was presented by Douthwaite and Lintott in 1938 (22). Small bowel damage due to indomethacin administration was observed for the first time in humans in the 1970s (80). Many cases of small bowel perforation (65) and other clinical manifestations of small bowel enteropathy were published in the 1980s (7,8,9,10,11,12,56). Most morphology data were acquired from autopsy (4) and surgical studies at that time (57). Because of the relative inaccessibility of the small intestine, initial endoscopy data were drawn from sonde enteroscopy and were not published until the early 1990s (64). The capsule endoscopy era began in the year 2000 (44) and is linked with an information boom concerning NSAID-induced enteropathy.
Epidemiology

According to data published in the ARAMIS (Arthritis, Rheumatism, and Aging Medical Information System) database, up to 1.3% of patients treated with NSAIDs are hospitalised for severe gastrointestinal complications in the USA and Canada, and a 1-year mortality rate of about 0.11-0.22% is seen in this population (75). NSAID-induced enteropathy is defined as acute or chronic small bowel dysfunction or structural damage related to NSAIDs. Epidemiology data are acquired from different sources; thus the reported prevalence differs according to the specific diagnostic method applied and the target population. Small bowel ulcers were observed in 8% of NSAID-treated patients in comparison to 0.6% of controls (with no history of NSAID use) in a prospective, autopsy-based study (713 cases) (4). Endoscopically evident changes (including ulcers) were diagnosed by means of sonde or push enteroscopy in 41-66% (62,63,64) and by means of capsule and double balloon enteroscopy in 16-88% of NSAID users (30,32,36,59,60,81). Other tests of small bowel damage and malfunction (faecal occult blood test, assessment of intestinal inflammation and permeability) confirm NSAID-enteropathy in 19-72% (7,8,9,10,11,12,56,83).

Pathogenesis

The pathogenesis of NSAID-enteropathy is more multifactorial and complex than formerly assumed but has still not been fully uncovered. The small bowel mucosa is exposed to the effects of an NSAID several times in total. Initially, the local effect of the drug before and during its absorption, then the systemic effect, and finally the repetitive local effect after enterohepatic circulation of some drugs play an important role in the pathogenesis (20,69). The use of enteric-coated, sustained-release, or slow-release NSAIDs may have shifted the damage to the distal parts of the gastrointestinal tract (small intestine and colon).

NSAIDs have a direct toxic effect on enterocytes (the result of high local drug concentration after peroral administration), well described by the so-called "three hit hypothesis" (25). The first hit is represented by incorporation of NSAIDs into biological membranes, affecting their functions (the majority of NSAIDs are liposoluble weak acids). The increase in intracellular drug concentration leads to disruption of mitochondrial energy metabolism (uncoupling of oxidative phosphorylation) and adenosine triphosphate depletion (79). The second step is leakage of intracellular calcium and production of free oxygen radicals, leading to the disruption of intra- and intercellular integrity (tight junctions). The last hit is the consequence of increased intestinal permeability. Intraluminal content (such as bile acids, luminal bacteria and their degradation products, food macromolecules and other toxins) overcomes the weakened intestinal mucosal barrier and leads to inflammation (69). In experimental studies, NSAIDs did not induce enteropathy in germ-free rats or rats after bile duct ligation (46,70). Nitric oxide formed by the inducible isoform of nitric oxide synthase is often mentioned as another important factor in the pathogenesis of NSAID-induced enteropathy. Experimental findings indicated that induction of a calcium-independent nitric oxide synthase involves the intraluminal bacterial spectrum and leads to small bowel microvascular injury in NSAID-treated rats (89). The NSAID-induced inhibition of local hydrogen sulphide production can also be associated with small bowel injury (24).
The second important pathogenetic mechanism in NSAID-induced enteropathy is the systemic effect, represented by prostaglandin depletion (cyclooxygenase (COX) inhibition). The pathogenesis of enteropathy was initially thought to be associated with COX-1 inhibition only. However, it has been shown that selective COX-1 inhibition (or absence) does not lead to a gastrointestinal lesion, and that selective COX-2 inhibition (or absence) leads to ileocaecal mucosa damage different from "classical" NSAID-enteropathy (43,76). Small bowel injury is induced by a combination of COX-1 inhibition, with restricted mucosal blood flow, and COX-2 inhibition, probably through an unknown immunological effect (90).

All systemic and local pathogenetic mechanisms lead, according to inflammation intensity, to erythema, erosions and ulcers. The extensive fibroproduction during healing can cause strictures.

Clinical manifestation

NSAID-induced enteropathy usually remains clinically asymptomatic; its endoscopic diagnosis was not previously feasible, and therefore NSAID-induced enteropathy was underestimated for a long time. Clinically evident serious events (bleeding, ileus and perforation) are infrequent, but potentially life-threatening (65).

a) occult gastrointestinal bleeding
The symptoms of NSAID-induced enteropathy are nonspecific; the most frequent sign is obscure occult gastrointestinal bleeding. Bleeding correlates quantitatively with intestinal inflammatory activity and its intensity ranges from 2 to 10 ml per day.

b) overt gastrointestinal bleeding
Acute and overt gastrointestinal bleeding is a relatively rare symptom. The sources of bleeding are ulcers and erosions. NSAID-induced enteropathy is implicated in 5-10% of patients evaluated for obscure overt gastrointestinal bleeding (82).

c) NSAID-induced protein-losing enteropathy
Another possible clinical manifestation of NSAID-induced enteropathy is protein-losing enteropathy (7). The amount of protein loss through the inflamed small bowel mucosa is usually mild to moderate and can persist up to 16 months after NSAID discontinuation (20).

d) jejunal and ileal dysfunction
Jejunal dysfunction can cause diarrhoea or can resemble celiac disease with malassimilation. The malabsorption is mostly mild and only rarely associated with malnutrition. Vitamin B12 and bile acid malabsorption can manifest through ileal dysfunction (14,25).

e) small intestinal perforation and obstruction
Vasculitis was incorrectly considered the cause of small bowel perforation in some patients with rheumatoid arthritis treated with NSAIDs in the past. Case reports have especially described small bowel perforations in patients treated with indomethacin. The typical symptoms can be disguised by the analgesic and anti-inflammatory effect of NSAIDs. There is no significant correlation between the type of small bowel lesion and a particular clinical manifestation (73), except for diaphragmatic disease (5,6,40,91). Diaphragm-like small intestinal strictures are a rare but very typical (pathognomonic) sign of NSAID-induced injury to the small bowel. These circumferential, purely fibrous, stenosing lesions are multiple (up to several tens), thin (1 to 4 mm thickness) and might cause severe small intestinal obstruction.
Diagnosis

In most patients, increased intestinal permeability and mucosal inflammation can be found with non-invasive laboratory tests. Those tests allowed the first objective confirmation of NSAID-induced enteropathy in the past. Above all, these days the diagnosis of NSAID-enteropathy is based on endoscopy. This is associated with the development of enteroscopy in recent years and has resulted in a rapid increase in information covering the clinical approach.

a) standard laboratory tests
Standard laboratory tests allow quick and easy identification of suspected small bowel injury in NSAID users, but are unusable for exact diagnosis because of their low specificity. Other possible causes associated with these findings should first be excluded in the differential diagnosis. Positive faecal occult blood tests and sideropenic anaemia, as well as hypoalbuminaemia, can be present. Sideropenic anaemia is observed in about 1-5% of patients treated with NSAIDs. The aetiology of the anaemia might be complex (35), resulting from chronic gastrointestinal bleeding, reduced iron and vitamin B12 absorption, malnutrition, or features of anaemia of chronic disease (in patients with rheumatoid arthritis) (77). A low serum albumin level was found in about 5-10% of patients with rheumatoid arthritis (7).

b) small bowel permeability evaluation
Increased intestinal permeability in NSAID users with rheumatoid arthritis and osteoarthritis was discovered accidentally by Bjarnason in 1984 (12). Studies in healthy volunteers confirmed a rapid increase in intestinal permeability already within 12-24 h after NSAID ingestion (10). Although the spectrum of tests used in the determination of small bowel permeability is relatively wide, their availability in clinical practice is low. The three most commonly used orally ingested probes are saccharides (lactulose, mannitol), ethylene glycol polymers (polyethylene glycol) and non-degradable radionuclides (51Cr-EDTA), with subsequent detection of their urinary excretion. Detected prevalence rates depend on the sensitivity of the selected method and vary from 60 to 80% (10,12,76,78). The main limitation of these methods is their nonspecificity and the wide spectrum of different diseases, malnutrition, drugs and diets influencing intestinal permeability.

c) evaluation of intestinal inflammation
Intestinal inflammation is detectable after several days (61) of NSAID therapy in 44-70% of patients and persists up to 16 months after discontinuation of treatment. It can be assessed by increased faecal excretion of 111Indium and scintigraphic detection of its accumulation in the small bowel after intravenous administration of 111Indium-labelled leucocytes (8,12,76). The other possibility for small bowel inflammation testing is assessment of calprotectin (a non-degradable protein produced mainly by neutrophils, monocytes and macrophages) in faeces, indicating migration of these cells into the intestine (83).

The main disadvantage of both methods is low specificity for NSAID-enteropathy and the need for further investigation to exclude other possible causes of small bowel inflammation before a final diagnosis of NSAID-induced enteropathy can be made.

d) enteroscopy
The major advantage of enteroscopic methods is direct visualisation of the small intestinal mucosa and identification of even tiny lesions, along with the possibility of biopsy sampling in standard enteroscopies. A wide spectrum of small bowel lesions is observed in patients treated with NSAIDs.
According to the severity of involvement, oedematous mucosa, focal and/or diffuse erythema (Fig. 1 a,b), red spots (Fig. 2), denuded areas with loss of villous architecture, numerous lymphangiectasias (Fig. 3), petechiae, mucosal breaks (erosions - Fig. 4, aphthous lesions - Fig. 5, or ulcers - Fig. 6a,b), strictures and intraluminal blood (Fig. 7) are found. A few case reports of villous atrophy mimicking coeliac sprue (in patients treated with mefenamic acid or sulindac) have been reported (26,45). No clear correlation between the duration of NSAID ingestion, NSAID dose and enteropathy severity has been proven in the studies published so far (59). Lesion localisation is influenced by the chemical and pharmacological attributes of the drugs administered, with NSAIDs with enterohepatic circulation and slow release causing more distal lesions in the ileum and caecum. Small-intestinal diaphragms (Fig. 8) are rarely present, but they are the only typical NSAID-induced lesion and can present clinically as small intestinal obstruction. This was first identified as a consequence of NSAID treatment by Lang et al. in 1988 (56). The diaphragms are multiple thin rings, composed of mucosa and submucosa with profound fibrosis (with active inflammatory infiltrates at the top). The most typical localisation of these lesions is the ileum, jejunum and caecum. Prolonged treatment and use of high doses of NSAIDs are the main risk factors for diaphragm disease (1,56,72,94).

The relative safety of COX-2 selective NSAIDs in comparison to non-selective ones, indicated in short-term capsule endoscopy studies, remains controversial. The small bowel lesions compared in those studies are often thin and small mucosal breaks of problematic clinical significance. On the other hand, no difference was found in major lower gastrointestinal adverse events in a large randomised study comparing etoricoxib (60 or 90 mg daily) and diclofenac (150 mg daily) for an average of 18 months (54). One of the most surprising items of information from capsule endoscopy studies is the presence of small bowel lesions in about 7-41% of healthy subjects or controls, mostly small erosions and red spots. This fact somewhat complicates interpretation of the results of these studies, because part of the findings identified by capsule endoscopy in NSAID users is likely to be clinically insignificant. Another possible limitation is the frequently insufficient differential diagnostics before or after capsule endoscopy to exclude other possible parallel causes of the described lesions (Crohn's disease, vasculitis, ischaemic enteritis, etc.). Last but not least, the lack of standardised terminology when describing small intestinal lesions is a problem in some studies.

Standard enteroscopy methods
Sonde and push enteroscopy, used to diagnose NSAID-enteropathy in the 1990s, were replaced by double (39,93) or single balloon enteroscopies in the past decade. Inclusion of these methods in the endoscopy armamentarium introduced a revival of small bowel investigation and allowed an exponential accrual of our knowledge about small intestine diseases. Single balloon enteroscopy and spiral enteroscopy were introduced as alternatives to double balloon enteroscopy over the past few years (3,84), but their value in the diagnostics of NSAID-induced enteropathy is still to be determined.
The importance of double balloon enteroscopy lies in the differential diagnosis of lesions identified by capsule endoscopy, with the possibility of biopsy sampling, and in the therapeutic potential of this method in case of complications (stenosis dilation, extraction of a retained capsule, control of bleeding) (50,68). Histology demonstrates only non-specific findings in the small bowel mucosa of patients with NSAID-induced enteropathy; histological confirmation is therefore not routinely needed (15). Double balloon enteroscopy is often indicated for obscure gastrointestinal bleeding in patients treated with NSAIDs, and small bowel lesions are detected in up to 51 % of these patients compared with only 5 % of controls (60). As regards other enteroscopy methods, push enteroscopy (due to its limited yield) and intra-operative enteroscopy (due to its invasive character) are used only rarely (2,51).

Differential diagnosis

A correct diagnosis of NSAID-induced enteropathy can usually be made on the basis of a good history and exclusion of other small intestinal diseases. The diaphragm is the only pathognomonic lesion of NSAID-induced enteropathy; the other enteroscopic findings are more or less non-specific, so differential diagnosis in NSAID users is a key problem before an effective management strategy is chosen. Crohn's disease must above all be excluded. Typical Crohn's enteritis is defined by enteroscopic findings of segmentally localised longitudinal ulcers and inflammatory polyps (cobblestone pattern), ulcerated or fibrous strictures and non-specific signs of mucosal inflammation. Although diagnostic when present, non-caseating granulomas are rarely detected in small bowel biopsy samples, so histology is not the leading diagnostic method in patients with small bowel Crohn's disease. Other disorders that must be taken into consideration are infectious diseases, especially in immunosuppressed and/or malnourished patients (tuberculosis, Yersinia, cytomegalovirus and others), tumours (especially lymphoma, which can endoscopically mimic inflammation), Behcet's disease, and ischaemic and radiation enteritis. NSAIDs are not the only drugs to cause small bowel lesions; others (potassium chloride, warfarin, bisphosphonates and cytostatics) must be excluded too. An interesting problem is the differential diagnosis between NSAID-enteropathy and vasculitis in patients with connective tissue or collagenous diseases, in whom both conditions are possible. Small bowel involvement is present more frequently in some vasculitides (Churg-Strauss syndrome, polyarteritis nodosa and Henoch-Schönlein purpura).

Management and prevention

In spite of relatively intense research, there is still no effective, safe and well tolerated drug treatment on the market for the management of NSAID-enteropathy. The main and most important measure for patients with NSAID-enteropathy is still withdrawal of NSAIDs (41). Results of capsule endoscopy studies point to some reduction of toxicity with COX-2 selective inhibitors; however, the growing information about their adverse cardiovascular effects limits their use (29,30). Moreover, the reduced prevalence of small bowel lesions in users of selective COX-2 inhibitors compared with non-selective NSAIDs has not been confirmed over a longer time period (more than 3 months) (59).
The most evidence-supported management of NSAID-induced enteropathy is antibiotic treatment. Bacterial overgrowth in the ileum was identified in experimental animals on chronic NSAID therapy, and concurrent administration of purified Escherichia coli lipopolysaccharide and NSAIDs facilitated ulceration in the small intestine (34). Healing of indomethacin-induced small bowel lesions was observed after eradication of gram-negative bacteria, and neutralisation of cytokines (TNF-alpha, MCP-1) has a positive effect too (88). It has been clearly shown that antibiotic therapy (tetracycline, kanamycin, metronidazole, neomycin plus bacitracin) reduces NSAID-induced enteropathy (13,17,18,53,55). The positive effect of metronidazole on the healing of NSAID-induced small intestinal lesions and on the reduction of increased intestinal permeability has been repeatedly demonstrated in experimental rats (58). An anti-oxidising effect, in addition to the antimicrobial one, has also been considered (16,17,18,92).

Owing to the key role of the small bowel flora in the pathogenesis of NSAID-induced enteropathy, the use of probiotics/prebiotics to alter the intestinal microbiota and modulate immune function seems reasonable. Despite this, study results are inconclusive: while some prebiotics were effective (21,42), the probiotics tested failed to reduce the NSAID-induced increase in intestinal permeability in humans (31), and some increased the risk of small bowel injury in rats (48).

Sulphasalazine has been confirmed to have antibacterial activity (71), with a preventive effect against increased intestinal permeability and anti-inflammatory activity in many previous studies (37). Other disease-modifying antirheumatic drugs (such as penicillamine, chloroquine and gold salts) were ineffective in the treatment of NSAID-enteropathy (38).

The efficacy of prostaglandin analogues (misoprostol) in preventing NSAID-induced upper gastrointestinal injury has been proven, and results from some studies also indicate a possible positive role in the prevention and treatment of NSAID-induced injury to the small bowel (27,87). Potential anti-inflammatory and anti-oxidative effects of proton pump inhibitors (lansoprazole, reported in some experimental animal studies) (52) have not been confirmed in human capsule endoscopy studies (29,30).

Although NO-donating NSAIDs have produced significantly less gastrointestinal injury in published studies than the original NSAIDs from which they are derived (19,85,86), they are not yet available for clinical practice. Hagiwara et al. investigated the preventive effect of 3-hydroxy-3-methylglutaryl coenzyme A (HMG-CoA) reductase inhibitors on NSAID-induced ulcers in the small intestine of rats. Fluvastatin, which was reported to have antioxidative activity (but not the other HMG-CoA reductase inhibitors pravastatin and atorvastatin), provided a protective effect against the formation of NSAID-induced ileal ulcers in rats (33). Other drugs, such as rebamipide and tacrolimus, have also been tried in healthy volunteers and in experimental studies of NSAID-enteropathy, with promising results (49,66).

Endoscopic and/or surgical treatment can be indicated in case of complications (strictures, bleeding, perforation) (47).
Conclusions

In conclusion, NSAID therapy causes small bowel lesions in a significant proportion of patients. Although the clinical importance of NSAID-enteropathy is often limited, it can lead to severe complications. The most frequent signs are anaemia and/or hypoalbuminaemia. Enteroscopy (capsule endoscopy or double balloon enteroscopy) has become the most sensitive and most frequently used diagnostic method for identifying mucosal breaks, small intestinal diaphragms and other types of small bowel lesions. Despite the progress in diagnostics and increased information on pathogenesis and epidemiology, the prevention and management of NSAID-induced enteropathy remain controversial. Double balloon enteroscopy has an important role in the management of complications of NSAID-enteropathy (especially bleeding and strictures).

Supported by research project MZO 00179906 from the Ministry of Health, Czech Republic.

Fig. 1a,b: Non-specific inflammatory changes of the jejunal mucosa, focal mucosal erythema (arrows), in a patient on long-term NSAID therapy. Capsule endoscopy.
Fig. 2: Several red spots (arrow) in the jejunum of a patient with rheumatoid arthritis and long-term NSAID therapy. Capsule endoscopy.
Fig. 4: Small linear erosions (arrow) in the proximal ileum in a patient treated with NSAIDs. Capsule endoscopy.
Fig. 6a,b,c: Small roundish (a) and linear (b) jejunal ulcers in an NSAID user (arrows, capsule endoscopy). c: Scars (arrows) in the area of healed ulcers of the proximal jejunum (double balloon enteroscopy).
Fig. 7: Fresh blood in the distal jejunum in a chronic NSAID user. Capsule endoscopy.
Where are we now? Emerging opportunities and challenges in the management of secondary hyperparathyroidism in patients with non-dialysis chronic kidney disease Abstract Rising levels of parathyroid hormone (PTH) are common in patients with chronic kidney disease (CKD) not on dialysis and are associated with an elevated risk of morbidity (including progression to dialysis) and mortality. However, there are several challenges for the clinical management of secondary hyperparathyroidism (SHPT) in this population. While no recognised target level for PTH currently exists, it is accepted that patients with non-dialysis CKD should receive early and regular monitoring of PTH from CKD stage G3a. However, studies indicate that adherence to monitoring recommendations in non-dialysis CKD may be suboptimal. SHPT is linked to vitamin D [25(OH)D] insufficiency in non-dialysis CKD, and correction of low 25(OH)D levels is a recognised management approach. A second challenge is that target 25(OH)D levels are unclear in this population, with recent evidence suggesting that the level of 25(OH)D above which suppression of PTH progressively diminishes may be considerably higher than that recommended for the general population. Few therapeutic agents are licensed for use in non-dialysis CKD patients with SHPT and optimal management remains controversial. Novel approaches include the development of calcifediol in an extended-release formulation, which has been shown to increase 25(OH)D gradually and provide a physiologically-regulated increase in 1,25(OH)2D that can reliably lower PTH in CKD stage G3–G4 without clinically meaningful increases in serum calcium and phosphate levels. Additional studies would be beneficial to assess the comparative effects of available treatments, and to more clearly elucidate the overall benefits of lowering PTH in non-dialysis CKD, particularly in terms of hard clinical outcomes. Graphic abstract Introduction Chronic kidney disease (CKD) is a major and growing global public health burden that is associated with significant morbidity and has continued to rise in rank among the leading causes of death over the last 3 decades [1]. Progression of CKD is associated with increasing risk of death, cardiovascular events, and hospitalisation [2,3]. In 2017, CKD was estimated to affect 9.1% of the global population; however, only a minority had advanced renal dysfunction. CKD and its effect on cardiovascular disease were estimated to have resulted in 2.6 million deaths and 35.8 million disabilityadjusted life years [1]. By 2040, CKD is predicted to be the fifth leading cause of years of life lost globally [4]. Complex mineral metabolism disturbances and loss of homeostasis are common in CKD and are associated with declining kidney function. Recent findings from an end-stage kidney disease longitudinal analysis of the Chronic Renal Insufficiency Cohort Study (n = 847) found that abnormalities in mineral metabolism intensified approximately 5 years before end-stage kidney disease, or at CKD stage G3 [5]. A common and early complication of CKD is secondary hyperparathyroidism (SHPT), characterised by elevated serum parathyroid hormone (PTH) and parathyroid hyperplasia, that develops as a consequence of the mineral metabolism disturbances of several biochemical parameters (including increases in fibroblast growth factor-23 , and reductions in 25-hydroxyvitamin D [25(OH)D] and 1,25-dihydroxyvitamin D [1,25(OH) 2 D], and hypocalcaemia and hyperphosphataemia) [6][7][8][9]. 
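For orientation, the CKD stages referred to throughout this review (G1-G5, with G3 split into G3a and G3b) are defined by eGFR bands. The sketch below is purely illustrative and uses the standard KDIGO cut-offs, which are assumed here rather than restated in the article:

def ckd_g_stage(egfr: float) -> str:
    """Classify the CKD G-stage from eGFR (mL/min/1.73 m2) using standard KDIGO thresholds.
    Dialysis status (stage G5D) is a clinical qualifier and cannot be derived from eGFR alone."""
    if egfr >= 90:
        return "G1"
    if egfr >= 60:
        return "G2"
    if egfr >= 45:
        return "G3a"
    if egfr >= 30:
        return "G3b"
    if egfr >= 15:
        return "G4"
    return "G5"

print(ckd_g_stage(52))  # "G3a", the stage from which PTH monitoring is recommended in CKD-MBD guidance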
A review of the pathogenesis of SHPT in CKD is beyond the scope of this article, and the reader is referred to review articles on this subject (e.g., Cunningham, 2011) [6]. The characteristic mineral metabolism disturbances and rising PTH levels of SHPT independently predict risk of fractures, vascular events, progression to dialysis and death [2,3,10,11]. As such, approaches to manage SHPT have formed an important focus of treatment in CKD. Use of calcimimetics, calcitriol, and/or active vitamin D analogues (alone and in combination) has been the mainstay of treatment of SHPT for patients on haemodialysis for decades (targeting PTH levels of 2-9 × upper limit of normal), with parathyroidectomy remaining a valid treatment option, especially in cases when PTH-lowering therapies fail [7]. By contrast, the optimum management of SHPT treatment in non-dialysis CKD is not as clearly understood. For example, as reflected in recent guidelines, studies have called into question the routine use of calcitriol and active vitamin D analogues for the management of SHPT in CKD stage G3a-G5 due to increased risk of hypercalcaemia [7]. Studies also indicate that, at variance with the dialysis setting, knowledge amongst physicians about mineral and bone disorder management in non-dialysis CKD is scarce [12]. However, recently published data, particularly regarding the role of vitamin D, alongside new therapeutic advances, are highly relevant and offer new insights into the management of SHPT in non-dialysis CKD. In light of emerging evidence, the aim of this review is to reassess the opportunities and challenges in the management of SHPT in patients with non-dialysis CKD specifically, with a focus on the role of vitamin D. What's new? Insights into the rationale for SHPT and elevated PTH as a therapeutic target in non-dialysis CKD While the adverse effects of SHPT are well recognised in CKD patients on dialysis (stage G5D), elevations in PTH characteristic of SHPT manifest frequently in non-dialysis CKD and from as early as CKD stage G2 [13]. SHPT (PTH > 65 pg/mL) affects approximately 40% of patients with CKD stage G3 (with the percentage rising from stage G3a-G3b), rising to approximately 80% in CKD stage G4 [14]. Recent studies demonstrate that SHPT is associated with the risk of cardiovascular events regardless of CKD stage [11], and in patients with non-dialysis CKD, PTH is a predictor of risk of fractures, vascular events, progression to dialysis and death [2,15]. In an analysis by Geng et al. [2] of electronic health records (between 1985 and 2013) from over 5000 adults with baseline CKD stage G3-G4 (mean follow-up of 23 ± 10 years), PTH was found to be an independent predictor of fracture, vascular events, and death (Fig. 1). The risk of vascular events and death were lowest when baseline PTH levels were 69 and 58 pg/mL, respectively. However, unlike vascular events and death, no baseline threshold of PTH was identified for fracture risk, and the risk of fracture continued to rise in parallel with rising PTH [2]. A recent multicentre prospective cohort study from the Fukuoka Kidney Disease Registry (3,384 non-dialysis CKD patients) explored the relationship between PTH concentrations and the prevalence of atrial fibrillation. PTH was evaluated as a potential risk factor and assessed in quartiles (Q1 5-46, Q2 47-66, Q3 67-108, Q4 109-1660 pg/mL). Higher PTH concentrations (Q2-Q4) were significantly and incrementally associated with an increased prevalence of atrial fibrillation in this patient group. 
Using Q1 as the reference group the adjusted odds ratios for the prevalence of atrial fibrillation were 1.33 (0.76-2.34), 1.82 (1.06-3.13), and 1.99 (1.08-3.64), for Q2-Q4, respectively (P = 0.016) [15]. Untreated, SHPT results in continually increasing PTH levels. In randomised controlled trials of patients with non-dialysis CKD, PTH levels continued to increase in placebo-treated or untreated patients over the duration of the studies [16][17][18][19] [20]. In addition, a recent retrospective analysis of 13,772 incident haemodialysis patients demonstrated that PTH levels of ≥ 250 pg/mL were independently associated with a more rapid decline in residual kidney function; however, higher PTH levels may have just reflected progressively impaired kidney function [10]. In renal transplant patients, elevated PTH levels pre-transplant have been shown to be independently associated with a significant risk for graft failure censored for death [21], as well as being a risk factor for post-transplant nephrocalcinosis [22]. Parathyroidectomy in post-transplant patients has also been associated with acute graft failure [23]. Data such as these suggest that the effectiveness of an intervention decreases as CKD progresses. Indeed, analysis of the Chronic Renal Insufficiency Cohort Study cohort (n = 3683) followed patients with CKD stage G2-G4 over a median of 9.5 years, and revealed patients spent progressively less time in each successive stage of CKD, with a median of 7.9, 5.0, 4.2, and 0.8 years in CKD stages G3a, G3b, G4, and G5, respectively [24]. Parathyroid hyperplasia and sustained elevations in PTH with SHPT progression due to delayed treatment are accompanied by progressive reductions in sensitivity to calcium and vitamin D regulation [6] and therefore a risk of treatment resistance later in the disease course. Parathyroidectomy may need to be considered if patients become unresponsive to SHPT treatment, have persistently elevated PTH levels, and refractory hypercalcaemia or hyperphosphataemia [6,25]. However, parathyroidectomy can be associated with post-surgical complications, including severe hypocalcaemia [25]. Although optimal PTH levels for patients with CKD stage G3a to G5 are not clearly defined, the potential adverse consequences of prolonged PTH elevations are reflected by the fact that the Kidney Disease: Improving Global Outcomes (KDIGO) CKD-mineral and bone disorder (CKD-MBD) guidelines recommend regular monitoring of PTH levels starting in CKD stage G3a, in order to identify patients with progressively rising or persistently elevated PTH levels above the upper limit of normal, so that at-risk individuals can be recognised and evaluated for modifiable risk factors [7]. However, studies indicate that adherence to mineral and bone disorder monitoring recommendations in non-dialysis CKD may be suboptimal, and that competing priorities in CKD may frequently distract from regular monitoring of mineral and bone disorder in these patients [12,26]. A large prospective cohort study from the Chronic Kidney Disease Outcomes and Practice Patterns Study [CKDOPPS] involving 7658 patients with CKD also identified significant variations in upper target PTH levels among nephrologists [27]. 
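The guidance discussed above hinges on recognising PTH that is persistently above the upper limit of normal and progressively rising, rather than reacting to a single value. The sketch below is a hypothetical screening rule for illustration only; the specific criterion (the last three values all above a 65 pg/mL upper limit of normal and each higher than the previous one) is an assumption of this example, not a guideline definition:

ULN_PTH = 65.0  # pg/mL, the upper limit of normal cited in this review

def flag_persistently_rising_pth(pth_values: list[float]) -> bool:
    """Return True if the last three PTH measurements are all above the ULN and strictly rising.
    Hypothetical trend rule for illustration; clinical decisions should rest on the full trend
    and on evaluation of modifiable risk factors, not on this simple check."""
    recent = pth_values[-3:]
    if len(recent) < 3:
        return False
    above_uln = all(v > ULN_PTH for v in recent)
    rising = all(later > earlier for earlier, later in zip(recent, recent[1:]))
    return above_uln and rising

print(flag_persistently_rising_pth([58, 70, 84, 101]))  # True: elevated and rising over the last three values
print(flag_persistently_rising_pth([70, 66, 62]))        # False: falling, and the latest value is within normal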
Vitamin D in the management of SHPT in non-dialysis CKD Vitamin D insufficiency is highly prevalent among patients with CKD, being more common than in the general population [28,29] and affecting 71-84% of patients with CKD stage G3-G4, respectively (insufficiency defined in the study as ≤ 75 nmol/L; ≤ 30 ng/mL) [29]. Low levels of vitamin D are independently associated with an increased risk of CKD progression, morbidity and mortality in nondialysis CKD [30]. Low levels of vitamin D are also frequently linked to elevations in PTH in non-dialysis CKD as indicated by early data from 3488 patients enrolled in the CKDOPPS, a prospective cohort study of patients with estimated glomerular filtration rate (eGFR) < 60 mL/ min/1.73 m 2 from national samples of nephrology clinics in Brazil, France, Germany and the US [31]. These data reflect the prominent role of vitamin D in the pathogenesis of SHPT. Vitamin D has an important physiological role for tissue homeostatic mechanisms, including potentially pleiotropic effects [32]. In the setting of normal kidney function, low levels of vitamin D are detected by the parathyroid glands, with a consequent increase in the production and release of PTH [6,33]. In the setting of CKD, these elevations in PTH are part of an adaptive process that gradually become maladaptive in response to declining kidney function, causing abnormalities in several biochemical parameters including impaired phosphate excretion, increased FGF-23, hypocalcaemia and failure to bioactivate vitamin D; the combined effect of these multiple pathways is to promote the progression of SHPT as detailed in Fig. 2 [6]. The known pathophysiology, together with recent data, illustrate the rationale for treatment of SHPT and vitamin D insufficiency/deficiency in non-dialysis CKD, and the KDIGO guidelines for the management of CKD-mineral and bone disorder recommend that patients with CKD stage G3-G4 and progressively rising or persistently elevated PTH levels above the upper limit of normal should be evaluated for vitamin D deficiency as one of the modifiable risk factors [7]. Other modifiable risk factors in this context include hyperphosphataemia and hypocalcaemia [7]. In the setting of CKD stage G3a-G5D, 25(OH)D levels might be measured, with repeated testing depending on baseline values and therapeutic interventions; however, as previously noted, adherence to CKD-mineral and bone disorder monitoring recommendations may be suboptimal [12,26]. Vitamin D deficiency/insufficiency should be corrected using recommended treatment strategies [7,28,34]. Exploring the benefits of current and emerging approaches for the management of SHPT in non-dialysis CKD The term vitamin D represents native or nutritional vitamin D, these include both vitamin D2 (ergocalciferol) and vitamin D3 (cholecalciferol). Both vitamin D2 and D3 are hydroxylated in the liver (by the cytochrome P450 enzymes CYP2R1, CYP27A1) to calcifediol [25(OH)D]. The conversion to calcitriol [1,25(OH) 2 D; the active form of vitamin D] then occurs via 1-α-hydroxylation (by the cytochrome P450 enzyme CYP27B1) mainly in the kidney, but also at other extrarenal sites such as the parathyroid glands. Active vitamin D is then catabolised to its biologically inert forms ( Fig. 3) [33,35,36]. 
In patients with renal impairment, levels of both 25(OH) D and 1,25(OH) 2 D are reduced as CKD progresses, with active vitamin D reduced not only due to impaired synthesis in the kidney, but also as a result of the down-regulation of 1-α-hydroxylase by serum FGF-23, which becomes elevated in response to an increased phosphate balance. Indeed, studies have suggested that the efficiency of vitamin D hydroxylation declines with declining renal function [37,38]. Elevations in serum FGF-23 in CKD also lead to up-regulation of 25(OH)D and 1,25(OH) 2 D catabolism via the cytochrome P450 enzyme CYP24A1, leading to vitamin D inactivation [6,39]. Extra-renal activation of 25(OH)D to 1,25(OH) 2 D may play an important role in active vitamin D production among CKD patients in whom renal function is impaired [35,40]. Nutritional vitamin D Nutritional vitamin D supplements are available as vitamin D2 (ergocalciferol) and vitamin D3 (cholecalciferol). Cholecalciferol has been shown to be more effective in elevating and maintaining serum 25(OH)D levels in healthy adults than ergocalciferol at equimolar doses, with a longer halflife [41]. The half-lives of ergocalciferol and cholecalciferol are affected by vitamin D binding protein concentration and genotype [28]. In one study of healthy men (n = 36) the mean half-life of ergocalciferol was 13.9 (2.6) days, significantly shorter than cholecalciferol (15.1 [3.1] days; p = 0.001) [42]. Nutritional vitamin D supplements are not specifically indicated for SHPT in non-dialysis CKD, and while many studies have explored their therapeutic potential, the evidence supporting a positive impact on serum 25(OH)D and PTH in non-dialysis CKD, has been largely based on data extrapolated from observational studies in patients with CKD stage G3-G5D and renal transplant recipients with mixed results [44]. Indeed, more recent studies suggest that nutritional vitamin D supplements do not consistently and reliably lower PTH in non-dialysis CKD patients, even at higher doses [45,46]. A 2016 meta-analysis of non-dialysis CKD patients treated with nutritional vitamin D (cholecalciferol or ergocalciferol) versus placebo across four randomised controlled trials demonstrated that although nutritional vitamin D increased 25(OH)D levels and lowered PTH when compared to placebo, data were based on a small population of 122 patients and there was substantial heterogeneity in effect sizes between studies; it therefore concluded that additional data were needed [45,46]. In a study of patients with non-dialysis CKD (n = 95), high-dose (8000 IU/day) cholecalciferol was shown to increase calcitriol [1,25(OH) 2 D] levels, and although further increases in PTH were not seen in the cholecalciferol group, which may have to be regarded as a "partial response", PTH levels were not reduced from baseline and the proportion of patients achieving a 30% decrease in PTH levels did not differ from placebo [16]. In a subsequent 2020 meta-analysis of non-dialysis CKD patients treated with nutritional vitamin D (cholecalciferol or ergocalciferol) across 14 randomised controlled trials (N = 745) only a small reduction in PTH was observed from baseline in nutritional vitamin D-treated patients [17]. Changes in PTH relative to placebo or untreated patients appear to be driven by PTH increases in the comparator groups rather than decreases in the treatment group, with substantial heterogeneity in effect sizes again observed between studies [16,17]. 
The complex and variable nature of nutritional vitamin D absorption, distribution and activation may reduce its effect on 25(OH)D levels and contribute to its limited ability to reduce PTH [34]. Nutritional vitamin D also has a propensity to be deposited in adipose tissue due to its lipophilic properties, and this mechanism likely plays a significant role in reducing the amount of nutritional vitamin D that can be presented to the liver for conversion to 25(OH)D [43,47]. Indeed, vitamin D insufficiency is common in obese individuals; studies have shown that low levels of serum 25(OH)D can fail to recover after nutritional vitamin D supplementation in these subjects [48]. Together these effects may mitigate the impact of vitamin D supplementation on available levels of active vitamin D. Immediate-release calcifediol The potential ability of immediate-release calcifediol to reduce serum PTH has long been recognised, although treatment was associated with increases in serum calcium and phosphate and early results were variable [49,50]. Calcifediol is readily absorbed and results in a more rapid increase in serum 25(OH)D compared to oral cholecalciferol [51]. Based on the results of nine randomised controlled trials comparing cholecalciferol with calcifediol, calcifediol was over three times more potent than cholecalciferol [51]. However, immediate-release calcifediol is not indicated for SHPT and is not able to provide clinically meaningful reductions in PTH in CKD patients [40]. Active vitamin D/analogues Active vitamin D including calcitriol, and active vitamin D analogues including paricalcitol and alfacalcidol, are variously indicated for the prevention and/or treatment of SHPT in non-dialysis CKD, although licensing and availability differs between countries [40]. Active vitamin D and its analogues suppress PTH [53], however, their mechanism of action bypasses the physiological regulation of vitamin D metabolism [40]. Active vitamin D and its analogues may, therefore, also lead to surges of 1,25(OH) 2 D following administration, which can induce vitamin D catabolism via CYP24A1 (24-hydroxylase), causing excessive increases in 24,25(OH)D3 and 1,24,25(OH)D3, respectively. Importantly, active vitamin D and its analogues are also associated with an increased risk of hypercalcaemia and risk of accelerated vascular calcification [54][55][56]. Indeed, recent studies of CKD stage G3-G4 patients with SHPT treated with paricalcitol (PRIMO and OPERA studies) failed to demonstrate improvements in hard outcomes (left ventricular mass and function) but found an increase in the risk of hypercalcaemia [57,58]. In PRIMO, hypercalcaemia occurred in 22.6% of patients and was the main reason given for study withdrawal [57]. The OPERA study reported a higher incidence of hypercalcaemia (43.3%) despite the use of a lower daily dose of paricalcitol (1 μg/day), although concomitant use of calcium-based phosphate binders was noted in a high proportion of patients [58]. In a recent meta-analysis of six randomised controlled trials in 799 non-dialysis CKD patients treated with paricalcitol or alfacalcidol versus placebo, the PRIMO and OPERA studies accounted for a large proportion of the observed episodes of hypercalcaemia; however, even when they were excluded in a sensitivity analysis, there was still a significantly increased risk of hypercalcaemia in patients treated with active vitamin D or its analogues versus placebo [54]. 
This risk of hypercalcaemia prompted a re-evaluation of the risk-benefit profile of these agents in non-dialysis CKD, and guidelines no longer recommend routine use of calcitriol or vitamin D analogues in patients with CKD stage G3a-G5 [7]. Participants in the PRIMO and OPERA trials had moderately increased PTH levels, which were potentially 'overcorrected', thus therapy with vitamin D analogues may be reserved for patients with CKD stage G4-G5 with progressive and severe SHPT [7].

Extended-release calcifediol

The efficacy and safety of oral extended-release calcifediol (ERC) in patients with CKD stage G3-G4 was demonstrated in two Phase 3 clinical trials [19,59]. In these studies, 429 patients with CKD stage G3-G4, SHPT and vitamin D insufficiency were treated with 30 µg ERC or placebo daily for 12 weeks, or 30 or 60 µg ERC or placebo for 14 weeks, then 30 or 60 µg ERC for up to 52 weeks (extension study). A steady increase in serum 25(OH)D levels was seen in both studies (p < 0.0001 versus placebo), with 33% and 34% of patients in each study achieving the primary endpoint of a ≥ 30% reduction in PTH from baseline at Week 26 (versus 8% and 7%, respectively, with placebo) [19,59]. In the open-label extension phase of the trial, patients who were switched from placebo to ERC experienced a decline in plasma PTH levels at a similar rate to those seen with active treatment in the blinded studies. For those patients who continued on ERC through the randomised and open-label phases, the gradual decreases in plasma PTH continued and were maintained over one year of therapy. A further analysis of data revealed that ERC produced exposure-dependent reductions in plasma PTH and bone turnover markers at mean serum total 25(OH)D levels ≥ 50 ng/mL [60]. In addition, plasma PTH levels were progressively suppressed with higher serum total 25(OH)D levels, regardless of CKD stage. Gradual elevation of mean serum 25(OH)D with ERC to levels as high as 92.5 ng/mL over a 52-week period did not increase mean serum 1,25(OH)2D levels above the upper limit of normal (62 pg/mL) [60]. These findings support the hypothesis that 25(OH)D can be activated extra-renally by CYP27B1 in parathyroid and many other tissues. Declining kidney function and its resultant effect on declining expression of renal CYP27B1 did not seem to lead to less conversion of 25(OH)D to 1,25(OH)2D [59,60]. Changes in plasma PTH versus baseline were significant at the end of treatment (p < 0.05) for subjects with 25(OH)D ≥ 50.8 ng/mL. It should be noted that, for subjects with 25(OH)D ≥ 50.8 ng/mL, reductions in PTH appeared to attenuate as mean serum total 25(OH)D approached the highest levels (92.5 ng/mL) [60].

Treatment-emergent adverse events were comparable between the treatment and placebo arms of the ERC Phase 3 trials, with minimal changes in serum calcium and phosphate, and hence a low risk for hypercalcaemia and hyperphosphataemia. Gradual elevation of 25(OH)D with ERC to levels as high as 92.5 ng/mL (231.3 nmol/L) over a 26-week period had no adverse effects on safety parameters, and mean serum 1,25(OH)2D levels did not increase above the upper limit of normal (62 pg/mL) [59]. Emerging real-world data supports the tolerability and effectiveness of ERC in routine clinical practice.
Recent retrospective analyses of medical chart data from 18 US nephrology clinics included patients with CKD stage G3-G4, a history of SHPT and vitamin D insufficiency, who received different interventions including ERC (n = 174), active vitamin D or its analogues (n = 55) and nutritional vitamin D (n = 147). Serum 25(OH)D levels of ≥ 30 ng/mL were achieved by approximately 70% of patients, with about 40% achieving a ≥ 30% reduction in PTH-similar values to those seen in clinical trials, despite higher baseline PTH levels and the use of a lower daily ERC dose [61]. In the same dataset, patients treated with active vitamin D analogues had a small, but statistically significant increase in serum calcium levels, which was not seen with ERC or nutritional vitamin D. In addition, nutritional vitamin D was more commonly used in less severe CKD (69% stage G3 versus 31% stage G4) while ERC and active vitamin D were used to treat more severe CKD (ERC used in 46% stage G3 versus 53% stage G4, and active vitamin D in 38% stage G3 versus 62% stage G4) [62]. Other potential therapeutic options in non-dialysis CKD Calcimimetics act by suppressing PTH secretion through activation of the parathyroid calcium-sensing receptor or amplification of the glands' sensitivity to extracellular ionised calcium [63,64]. While demonstrated to be highly effective in reducing PTH levels, calcimimetics are only indicated for CKD patients on haemodialysis, with studies of these agents in CKD Stage G3-G4 showing an increased risk of hypocalcaemia and hyperphosphataemia in these patients [7,64,65]. As discussed previously, parathyroidectomy can be a highly effective treatment for SHPT, but is associated with a risk of severe hypocalcaemia, and potentially, persistence or recurrence of SHPT due to residual or autotransplanted parathyroid tissue [6,25]. There is also evidence that, at least in patients with CKD stage G5D, parathyroidectomy carries with it significant risks of morbidity, hospitalisation and mortality, predominantly related to sepsis and acute coronary syndrome [66]. Guidelines therefore suggest parathyroidectomy be reserved for patients with CKD stage G3a-G5D and severe SHPT which is resistant to medical or pharmacological therapy [7]. What are the current clinical challenges in the management of SHPT in non-dialysis CKD? Despite advances in our understanding, the optimal management of SHPT in non-dialysis CKD is challenging in clinical practice. The difficulties around lack of data to support clinical decision-making are acknowledged by the most recent KDIGO CKD mineral and bone disorder guideline update, which states that despite the recent completion of key clinical trials "large gaps of knowledge still remained" [7,67]. We consider here four key questions within the context of recently published data. How should we identify patients with non-dialysis CKD suitable for treatment of SHPT in clinical practice? While PTH measurement is recognised as being very important for the follow-up of patients with CKD, insight on such measurement and its clinical relevance in nondialysis CKD continues to evolve [9]. Modest increases in PTH may represent an appropriate adaptive response to declining kidney function due to phosphaturic effects and increasing bone resistance to PTH [8] and there remains an absence of clinical data from which to derive thresholds above which PTH levels should be considered maladaptive and at which treatment should therefore be initiated. 
However, regular monitoring and treatment of underlying modifiable risk factors (such as vitamin D deficiency) may help determine adaptive versus maladaptive changes. Guideline recommendations have therefore been revised to reflect the transition of the parathyroid to a maladaptive response, with the recommendation to identify patients with PTH levels 'persistently' above the upper limit of normal (65 pg/mL) and 'progressively rising', emphasising that treatment of SHPT should not be initiated in response to a single elevated value but should be based on trends [8]. Current guidelines recommend regular monitoring of PTH in patients with non-dialysis CKD from CKD stage G3a, in order to identify these individuals, with monitoring intervals based on baseline PTH levels and CKD progression [7]. However, despite recommendations, studies indicate that knowledge of CKD-mineral and bone disorder management in non-dialysis CKD may be scarce and that competing priorities in CKD, such as management of comorbid disease, can frequently distract from CKDmineral and bone disorder monitoring in non-dialysis CKD patients [12,26]. For example, a large study following 799,300 patients with CKD Stage G3-G5 concluded that laboratory testing for CKD-mineral and bone disorder biochemical markers was suboptimal in relation to KDIGO guidelines [12]. Further evaluations to assess the possible impact of persistent PTH elevations, for example, by bone density testing (dual energy X-ray absorptiometry scan), may help to identify the presence of pathologically relevant effects of persistently elevated or progressively rising PTH. While bone densitometry does not distinguish between high and low bone turnover, the gold standard for making this distinction is a bone biopsy, which is both invasive and difficult, so it is not routinely performed, particularly in the setting of high PTH levels [68]. Measurement of markers such as bone-specific alkaline phosphatase could potentially identify patients with increased bone turnover. Bonespecific alkaline phosphatase is essential for biomineralisation, and recent findings also demonstrate that it has a crucial role in the pathogenesis of vascular calcification, identifying it as a promising predictor of mortality in CKD [69]. While not currently available for non-dialysis CKD, for patients with CKD on dialysis there are established criteria for assessing patients with 'unclear' significance of SHPT. An integrated approach in dialysis patients may include measurement of bone turnover markers, such as bone-specific alkaline phosphatase. In dialysis patients with very 'low' or very 'high' PTH, bone-specific alkaline phosphatase measures could be helpful to better differentiate the type of bone disease (low versus high turnover). In dialysis patients with intermediate PTH and bone-specific alkaline phosphatase, a bone biopsy may be necessary to diagnose the type of bone disorder. However, it is anticipated that non-dialysis CKD patients with such changes are likely to be relatively rare, and no such approaches are currently available for this patient group or are not easily implemented into routine management for SHPT. What levels of PTH should we be aiming for following treatment of SHPT in non-dialysis CKD? While recommended target levels for PTH in dialysis patients (2-9 × upper limit of normal) have been set out in treatment guidelines, similar targets for non-dialysis CKD patients are unclear for the reasons outlined above [7]. 
The clinical endpoint most frequently used in clinical trials of SHPT in non-dialysis CKD is ≥ 30% reduction in PTH from pre-treatment baseline levels [18,19,70,71]. While this was agreed with regulatory bodies to be the best available clinical and biological marker to determine a statistically significant change from baseline, studies using this endpoint cannot offer further insight into the specific PTH target levels we should be aiming for in this population. In addition, there is a lack of data linking the achievement of specific PTH levels following treatment intervention with hard outcomes (for example fracture risk and cardiovascular disease) in non-dialysis CKD. There are of course recognised challenges associated with designing trials that provide conclusive results for such endpoints in patients with a progressive and complex disease like CKD. For example, clinical studies of the duration required are not always feasible in a progressive disease like CKD, as patients might require additional treatments such as dialysis, which could confound the results. Studies assessing the impact of SHPT treatment on surrogate endpoints for cardiovascular risk have been performed in an effort to overcome these challenges. However, the PRIMO and OPERA trials of paricalcitol treatment of SHPT in CKD stage G3-G4 did not identify any significant differences between the active and placebo arms in terms of surrogate endpoints of cardiovascular risk (left ventricular mass index-an intermediate endpoint for cardiovascular events), although there were fewer cardiovascular-related hospitalisations in the paricalcitol versus placebo arms [57,58]. Factors such as sample size, study duration and baseline imbalances between the randomised groups are thought to have potentially impacted the results [57,58]. Novel surrogates for hard outcomes are gaining support and offer a potential avenue to gain further insight into the potential benefits of PTH reduction in non-dialysis CKD. One surrogate gaining interest in recent years is the T50 test, a blood test that has been developed to determine the calcification propensity in blood [72]. Vascular calcification is frequently observed at high rates in patients with CKD and may be a central mediator of cardiovascular sequelae [73]. The T50 test provides an estimate of the efficiency of an individual's anticalcification system to inhibit the formation of calcium phosphate nanocrystals [74]. A shorter serum T50 (i.e., accelerated precipitation time) has been associated with increased all-cause mortality in pre-dialysis CKD [74]. In CKD stage G2-G4 patients, a lower T50 score was significantly associated with atherosclerotic cardiovascular disease events, end-stage kidney disease, and all-cause mortality, but the association was not independent of kidney function (Chronic Renal Insufficiency Cohort study) [75]. In haemodialysis patients, associations between lower T50 and higher risk of death, myocardial infarction, and peripheral vascular events are also observed (EVOLVE study) [76]. Further prospective interventional studies are needed to determine whether these associations can be causally linked. Given the lack of PTH target levels in non-dialysis CKD, treatment modifications in clinical practice are largely based on the wanted or unwanted effects of vitamin D substitution (normo-, hypo-, hyper-calcaemia and -phosphataemia). 
As stated above, consecutive measurements of bone densitometry might indicate a trend in changing bone morphology which may prompt a change in treatment but are no substitute for histological diagnosis. What levels of vitamin D should we be aiming to achieve in patients with SHPT in non-dialysis CKD? Guidelines have suggested that vitamin D deficiency and insufficiency be corrected using treatment strategies recommended for the general population [7,9]. However, recent studies suggest that higher levels of 25(OH)Dexceeding those generally recommended for the general population-may be needed to control PTH in non-dialysis CKD patients [60,77]. In a cross-sectional analysis of 14,289 unselected patients with CKD, in CKD stages G3-G5, progressively higher 25(OH)D pentiles contained progressively lower mean PTH levels with no evidence of a decreasing effect of 25(OH)D to lower PTH until 25(OH) D levels of 42-48 ng/mL (105-120 nmol/L) [77]. Progressively higher 25(OH)D concentrations were not associated with increased rates of hypercalcaemia or hyperphosphataemia. This suggests that currently recommended 25(OH) D levels (generally > 30 ng/mL) may be too low as a target for treating SPHT in CKD [77]. Further support for a higher target level comes from a post-hoc analysis of ERC Phase 3 trials, which suggested that mean 25(OH)D levels of ≥ 50.8 ng/mL are required for reductions in PTH and bone turnover markers in CKD stage G3-G4 [60]. In addition, the VITALE study demonstrated that higher levels of 25(OH)D [43.1 (12.8) ng/mL] lowered PTH and reduced fracture risk in kidney transplant patients with 25(OH) D insufficiency compared with lower levels of 25(OH)D [25.1 (7.4) ng/mL] [78]. As noted in the discussion of vitamin D metabolism earlier in this manuscript, extrarenal activation of 25(OH)D to 1,25(OH) 2 D may play an important role in active vitamin D production among CKD patients in whom renal function is impaired [35,40]; however, this depends on adequate circulating levels of 25(OH)D and may require levels well above those traditionally considered to represent 'sufficiency' in the general population [60]. Several professional organisations have provided recommendations for diagnostic thresholds within their guidelines. The most widely recognised and commonly cited clinical threshold for serum 25(OH)D 'sufficiency' in the general population is > 30 ng/mL (> 75 nmol/L) [43]. This threshold is based on studies in which PTH levels were maximally suppressed by vitamin D supplementation, but it should be noted that none of these studies included patients with CKD. The US Institute of Medicine expert committee noted in their 2011 report that people are at risk of vitamin D deficiency at serum 25(OH)D concentrations < 12 ng/mL (30 nmol/L) and some are potentially at risk for inadequacy at levels ranging from 12 to 20 ng/mL (30-50 nmol/L) in the general population, but commented that these levels could not necessarily be extended to disease states such as CKD [34,79]. The range of 30-100 ng/mL (75-250 nmol/L) for 25(OH)D sufficiency is cited by the Endocrine Society based on studies in various populations, with a threshold of 100-150 ng/ mL (250-375 nmol/L) suggested based on safety concerns [28]. A recent consensus statement from the 2nd International Conference on Controversies in Vitamin D states that existing data are insufficient to define 'low' or 'high' vitamin D status thresholds [80]. 
However, despite incomplete knowledge of the role of vitamin D in many target tissues, serum 25(OH)D concentrations < 20 ng/ml (50 nmol/L) are likely to have adverse effects on health [80]. Supplemental vitamin D was shown to have a protective effect (e.g., on bone mineral density and arterial function) in patients with vitamin D insufficiency (defined as serum 25(OH)D levels < 50 nmol/L) in the ViDA study [81], whereas vitamin D supplementation had no impact on healthy adults in the VITAL study [82]. However, neither study included patients with vitamin D deficiency or at levels of insufficiency commonly seen in patients with CKD. This further suggests that vitamin D guidelines based on the general population may not be applicable to patients with vitamin D insufficiency (such as those with CKD). The 2009 KDIGO guidelines have also noted previous discussions exploring whether definitions of vitamin D sufficiency may be linked to an adequate response in PTH, and the ranges at which there is no further reciprocal reduction in serum PTH upon vitamin D supplementation [7]. Indeed, the post-hoc analyses of ERC Phase 3 trials suggested that reductions in PTH may start to attenuate above 25(OH)D levels of 50.8 ng/mL [60]. The National Kidney Foundation (NKF) Statement from 2018 states that 25(OH)D levels of 20-50 ng/mL represent a 'modest target' and that 'adequacy' is defined as 'no evidence of counter-regulatory hormone activity'. They also note that 25(OH)D levels > 30 ng/mL might be required for extra-renal 1,25(OH) 2 D generation [79]. The question then arises as to whether there are any safety concerns associated with raising vitamin D levels to above 50 ng/mL in non-dialysis CKD [60]. Is there an upper tolerability limit for vitamin D, and what is the evidence for this in a CKD versus a healthy population [60,80]? Observational studies have noted a reverse J-shaped association of serum 25(OH)D with cardiovascular disease mortality, with the highest risk at the lowest levels [80]. There is limited evidence on the potential risks and benefits of higher vitamin D levels in the general population and in CKD. Data from the Phase 3 ERC studies showed that a gradual elevation of the mean serum total 25(OH)D with ERC to levels as high as 92.5 ng/mL over a 26-week period had no adverse effects on mean serum calcium or phosphorus [60]. In addition, there were no adverse effects on FGF-23 or eGFR and mean serum 1,25(OH) 2 D did not increase above the upper limit of normal (62 pg/mL). Extension of these studies to 52 weeks of ERC treatment also demonstrated no increased risks related to these parameters [19,59]. Further studies are required to determine the optimal vitamin D requirement in non-dialysis CKD, and an emerging body of real-world evidence with ERC may help to inform this question. How do we choose a therapeutic option for SHPT in non-dialysis CKD? With current guidelines no longer recommending the routine use of calcitriol and active vitamin D due to increased risk of hypercalcaemia, and now a growing body of evidence suggesting that the current targets for vitamin D repletion may not be generalisable to CKD (levels of 25(OH)D ≥ 50 ng/mL may be required to control PTH) [60,77], the optimal treatment strategies for patients with SHPT in non-dialysis CKD remain to be clearly defined. 
Nevertheless, the associations between elevated PTH levels and morbidity and mortality [2] indicate a need for effective management of SHPT without delaying treatment until these elevations become severe and progressive in CKD stage G4-5 and at which point the benefits of using calcitriol/active vitamin D may be more balanced against the risks of hypercalcaemia [7]. While there has been much interest to explore the therapeutic potential of nutritional vitamin D, a recent metaanalysis of randomised controlled trials suggests that nutritional vitamin D supplements in non-dialysis CKD do not reliably and consistently lower PTH even at higher doses, and the average 25(OH)D levels in treated patients do not reach > 50 ng/mL in the majority of randomised controlled trials, implying a limited potential of nutritional vitamin D to reach the 25(OH)D levels suggested as needed to effectively control SHPT [17]. These findings may be explained by the complex and variable nature of nutritional vitamin D absorption, distribution and activation that may limit its potential to achieve vitamin D > 50 ng/mL and contribute to its limited ability to reduce PTH [34]. The combined data from two Phase 3 clinical trials and a subsequent extension study demonstrate that oral ERC 30 or 60 μg is effective for treating SHPT and correcting underlying vitamin D insufficiency in adult patients with CKD stage G3 or G4. ERC further produced exposure-dependent reductions in plasma PTH and bone turnover markers when mean serum total 25(OH)D ≥ 50 ng/mL, with no adverse effects on safety parameters including serum calcium and phosphate [60]. There are currently no head-to-head studies comparing the relative safety and efficacy of treatments for SHPT in CKD. Given the differences in study designs and study populations, direct comparisons between treatments cannot currently be made and comparative clinical studies are required to more clearly define the relative benefits of different approaches. Emerging data from a recent meta-analysis suggest that compared to paricalcitol, ERC is equally effective at reducing PTH in CKD stage G3-G4 but is associated with only minimal changes in serum calcium levels [83]. Similarly, a recent real-world study in CKD stage G3-G4 found that, compared to other vitamin D therapies (active vitamin D and nutritional vitamin D), ERC significantly reduced PTH and resulted in greater increases in 25(OH)D levels, without increases in serum calcium seen in patients treated with active vitamin D [62]. Differences in effectiveness with regard to PTH reduction and 25(OH)D levels, and in hypercalcaemia in these analyses are likely explained by the lack of pharmacological surges with ERC that are associated with nutritional vitamin D and active vitamin D/ its analogues, and the potential benefits of avoiding negative feedback from a 'spike' in 25(OH)D [52]. Steady-state 25(OH)D levels were reached after 12 weeks of dosing in the pivotal studies of ERC in CKD stage G3-G4, and averaged 50-56 ng/mL with 30 μg daily, and 69-67 ng/mL with 60 μg daily, in the two studies, respectively. The levels remained stable throughout the 52-week treatment period. Gradual elevation of mean serum 25(OH)D to these levels had minimal impact on mean serum calcium, phosphorus, FGF-23 or eGFR and did not increase mean serum 1,25(OH) 2 D above the upper limit of normal (62 pg/mL) [19,60]. 
While not designed as a comparative efficacy study, an ongoing open-label, Phase 4 study (NCT03588884) will investigate the effects of ERC, immediate-release calcifediol, high-dose cholecalciferol, or paricalcitol + low-dose cholecalciferol in CKD stage G3-G4 patients with SHPT and vitamin D insufficiency. The primary outcome of this study is to evaluate pharmacokinetic/pharmacodynamic profiles, but safety and efficacy will also be assessed and may provide some insights into the relative roles of the different approaches. Conclusions Despite advances in our understanding, the optimal management of SHPT in non-dialysis CKD remains challenging. While there is an increasing recognition of the need to identify and treat patients with SHPT earlier in the course of the disease, target levels of PTH are unclear, as are the levels of vitamin D required to achieve PTH reduction. Advances in treatment include the use of ERC as an additional therapeutic option. As there are currently no head-to-head studies comparing the relative safety and efficacy of treatments for SHPT in non-dialysis CKD, direct comparisons between treatments cannot currently be made. Comparative clinical studies are required to more clearly define the relative benefits of different treatment approaches, and further research possibly with novel surrogates is needed to more clearly identify their impact on hard clinical outcomes. Funding Medical writing support was provided by Elements Communications and funded by Vifor Fresenius Medical Care Renal Pharma. Declarations Conflict of interests Markus Ketteler has received lecture fees and consulting honoraria from Amgen, Kyowa Kirin, Ono Pharmaceuticals, Vifor Fresenius Medical Care Renal Pharma and Vifor Pharma. Patrice Ambühl has no financial interests to declare. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/.
Prevalence of Self Induced Abortion by Self-Administration of Abortive Pills among Abortion-related Admissions in a Tertiary Care Centre

ABSTRACT

Introduction: Each year, unsafe medical abortion costs the lives of thousands worldwide. Despite the legalization of abortion in Nepal in 2002, many still seek services from unauthorized sources. This has led to grave consequences including death. Our objective is to find out the prevalence of self-induced abortion by self-administration of abortive pills and related complications.

Methods: It is a descriptive cross-sectional study carried out among abortion-related admissions in a tertiary care centre from June 15, 2018 to March 15, 2020. Ethical approval was taken from the institutional review committee (076/077/51). Data was collected using a pre-designed proforma and analysed in Statistical Package for the Social Sciences version 26. Point estimate at 95% Confidence Interval (CI) was calculated along with frequency and proportion for binary data.

Results: Out of 223 cases enrolled, 37 (16.6%) (9.6-23.6 at 95% Confidence Interval) were self-induced abortion cases by self-administration of abortion pills. The mean gestational age at the time of intake of pills was 7+6±3+1 weeks of gestation. The majority were diagnosed with incomplete abortion, 14 (37.8%), followed by septic abortion, 8 (21.6%). Surgical evacuation was performed in 25 (67.6%). Anaemia was observed in 19 (51.3%), with severe anaemia in 4 (10.8%). Blood transfusion was carried out in 14 (37.8%). Post-abortion contraception was accepted by only 16 (42.3%).

Conclusions: Medical abortion is safe if done under supervision, but self-induced abortion by self-administration of abortion pills has a high complication rate. Therefore, further studies exploring the different dimensions of this serious issue are needed.

INTRODUCTION

As of 2010-2014, an estimated 55.9 million abortions occur each year globally.1 At least 8% of maternal deaths worldwide occur from unsafe abortion, with around 22,800 women dying yearly from its complications. Almost all abortion-related deaths occur in developing countries, with the highest number in Africa.2 In Nepal, abortion was legalized in 2002, and comprehensive services were made available in 2004.3 Still, many women today are receiving unsafe services from unauthorized providers. Medical abortion with mifepristone and misoprostol is safe for termination of pregnancy up to 63 days if practiced under medical supervision.4 Despite clear guidelines, due to easy and illegal accessibility, many women self-administer these drugs. Some consider it a method of birth spacing and depend on it without knowing its complications, which range from severe haemorrhage to death.5 Therefore, this study aims to find the prevalence of self-induced abortion following self-administration of abortive pills in a tertiary care center.

METHODS

This is a descriptive cross-sectional study done in patients diagnosed with abortion from June 15, 2018 to March 15, 2020 at KIST Medical College & Teaching Hospital. Ethical approval was taken from the institutional review committee (076/077/51). All case records of in-patients admitted with complications following self-administration of abortion pills were included, while cases with complications following medical abortion performed at a government-accredited authorized centre by an authorized service provider were excluded from the study.
Data were retrieved by reviewing patients' records from the medical records department and entered into a self-designed proforma. Convenience sampling was done and the sample size (n) was calculated as
$$ n = \frac{Z^2\, p\,(1-p)}{e^2}, $$
where n = required sample size, Z = 1.96 at 95% Confidence Interval (CI), p = prevalence (50%), and e = margin of error (7%). The data were entered and analysed with the Statistical Package for the Social Sciences (SPSS) version 26. Data were collected throughout the study period to meet the required sample size. RESULTS There were a total of 223 cases admitted for abortion-related complications during the 21-month study period. Of these, 37 (16.6%) were self-induced abortions by self-administration of abortion pills. Most of the women were married, 33 (89.2%), and in the 20-25 years age group, 18 (48.7%) (Table 1). Of the 37 patients, 25 (67.6%) presented to the emergency room while 12 (32.4%) presented to the outpatient department. Per vaginal bleeding, 30 (81%), was the most common presenting symptom, followed by abdominal pain, 29 (78.3%). There was a history of passage of fleshy mass in 17 (45.9%) patients and per vaginal discharge in 5 (13.5%). Seven (18.9%) complained of dizziness, 5 (13.5%) had fever, 3 (8.1%) presented with loss of consciousness, and 1 (2.7%) patient also had a history of abnormal body movements. The duration of symptoms ranged from 1 to 30 days with a mean of 8.4±8.2 days. Gestational age (GA) was calculated from the last menstrual period using Naegele's formula. Abortive medication was used as early as 30 days (4+2 weeks of gestation (WOG)) and as late as 114 days (16+2 WOG). Mean GA at the time of self-administration of abortive medication was 48.3±22 days (7+6±3+1 WOG). Mean GA at presentation to our facility after the abortive attempt was 64.5±20.99 days (9+2±3 WOG), with patients presenting as early as 37 days (5+2 WOG) and as late as 17+4 WOG. The total duration from the abortion attempt to presentation ranged from 1 to 52 days with a mean of 12.83±13.2 days. Seven (18.9%) cases had a previous history of induced abortion; of them, five (13.5%) had one previous induced abortion and two (5.4%) had two. Most abortions were attempted before 9 weeks of gestational age, 27 (79.97%) cases (Table 2). On examination, the minimum systolic blood pressure was 80 mmHg and the maximum was 130 mmHg, with a mean of 102±13.3 mmHg. Likewise, the minimum mean arterial pressure was 60 mmHg and the maximum was 96.7 mmHg, with a mean of 77.8±10.7 mmHg. Pallor was present in 15 (40.5%) patients. Haemoglobin levels ranged from a minimum of 5.3 gm/dl to a maximum of 13.5 gm/dl with a mean of 10.45±2.36 gm/dl. Fourteen (37.8%) received a blood transfusion (Table 3). The majority of women, 14 (37.8%), were diagnosed with incomplete abortion, followed by septic abortion, present in 8 (21.6%). Five (13.5%) patients were diagnosed with ectopic pregnancy, of which one was a very rare case of ovarian molar ectopic pregnancy (Table 4). The total duration of hospital stay ranged from 1 to 7 days with a mean of 2.97±1.44 days. The minimum interval since the last pregnancy was 5 months and the maximum was 132 months, with a mean of 38.4±34.1 months. All patients received antibiotic treatment. None of the patients needed critical care admission or inotropic support.
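For reproducibility, the sample-size formula used above can be evaluated with a few lines of Python; the rounding-up step is an assumption, since the paper does not state how the final figure was rounded.

import math

def sample_size(z=1.96, p=0.5, e=0.07):
    """Minimum sample size for estimating a proportion: n = Z^2 * p * (1 - p) / e^2."""
    return math.ceil((z ** 2) * p * (1 - p) / (e ** 2))

print(sample_size())  # 196 with Z = 1.96, p = 50%, e = 7%; the 223 enrolled cases exceed this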
Post abortive contraception counselling was provided to everyone but only 16 (43.2%) accepted it ( Table 5). DISCUSSION In 21 months, 37 patients presented with various complications following self-administration of abortive pills. The minimum age of the women was 17 years and the maximum was 42 years with a mean age of 25.3±5.36 years. The majority (83.8%) were from 20-30 year of age and those greater than 30 years of age were approx 8%. However, in a study conducted by Giri et al. 6 52% of women were of 20-29-year age group and 44% were from the 30-39 year age group. In our study 37.8% were primigravida and 62.2% were multigravida. Out of the total, 16.2% were Gravida 2, and 45.9% were Gravida 3 or more including Gravida 5 (2). The percentage of Primigravida in our study was higher in comparison to other studies. In the study done by Giri et al. 6 79% of women were multigravida and 21% were primigravida, a similar finding was seen in K. Nivedita et al. 7 In the study conducted by Singh M et al. 8 10% were primigravida, 15% were gravida 2, 75% were gravida 3 or more including 17% of the total who were more than gravida 5. Free Full Text Articles are Available at www.jnma.com.np Gestational age (GA) was calculated from the last menstrual period (LMP) using Naegele's formula. It was found that abortive medication was used as early as 30 days and as late as 114 days. Mean GA at the time of self-administration of abortive medication is 48.3±22 days. The majority (70.27%) of the women had self-administered abortive pills within approved GA of fewer than 9 weeks It was similar to other studies by Giri et al. 6 and Jethani M et al. 9 which was 60%. At 9 to 12 WOG, 18.9% of women had taken abortive medication while 5.4% had taken medication beyond 12 WOG. In the study conducted by Giri et al. 6 60% of women had consumed abortion pills within approved nine weeks gestation while 19% had consumed after nine weeks and 21% after twelve weeks. The unmarried women belonged to the age group 20-24 years and constituted 10.8% in our study. It was close to the study done by K. Nivedita et al. 7 where 12.5% were unmarried but contrary to our study unmarried age group was 15-19 years. In our study, 94.6% of patients were admitted for 1-5 days. The maximum duration of hospital stay in our study was of 7 days. In a study conducted by K. Nivedita et al, 7 duration of hospital stay was 1-5 days which was 75% and 1 patient had a hospital stay of more than 10 days. In our study, 37.8% of women were admitted with a diagnosis of incomplete abortion which was less in comparison to a study conducted by Giri et al. 6 where 60% of cases were of incomplete abortion. Jethani M et al. 9 reported 57.45%, Goyal N et al. 10 reported 75%, and K. Nivedita et al. 7 reported 70% of cases with an incomplete abortion. Septic abortion comprised of 21.6% in our study. On the contrary, a study done by K. Nivedita et al. 7 reported 7.5% cases with septic abortion, Giri et al. 6 showed 6.5%, Jethani M et al. 9 reported 4.3% and Goyal N et al. 10 showed 5% patients with sepsis. In our study, ectopic pregnancy was diagnosed in 13.5% of pregnancy which is higher when compared to other studies, which shows 6.5% in Giri et al. 6 9 with anemia in 57.45% of which 11.7 % was severe anemia and K. Nivedita et al. 7 with anemia in 50% with severe anemia in 12.5%. However, in the study conducted by Giri et al. 6 anemia was present in 79% of patients and 37.5% of the patient had severe anemia. Blood transfusion was received by 37.8% in our study. 
Giri et al. 6 reported that blood transfusion was needed in 52%, Jethani M et al. 9 in 20.21%, and K. Nivedita et al. 7 in 15% of the patients. Most of the patients in our study, i.e., 42.9%, required a two-pint blood transfusion, while Giri et al. 6 reported 29%. In our study, 13.5% of patients required three or more pints of blood, compared with 22.9% in the study conducted by Giri et al. 6 Post-abortive contraception was accepted by 43.2%, of whom 50% received Depo-Provera, 43.8% Jadelle, and 6.2% an IUCD. In the study conducted by Singh M et al., 8 contraceptive counseling was done in all patients during their hospital stay, patients accepted contraceptives in the hospital itself in the form of OC pills, IUCD, DMPA injection, CHHAYA, and condoms, and tubal ligation was accepted by 16% of cases. CONCLUSIONS Safe abortion services are every woman's reproductive right. Medical abortion is safe if done under supervision by authorized personnel in an authorized institution following authorized standard protocols, but self-induced abortion by self-administration of abortion pills has a high complication rate. The severity of these complications is too high to underestimate, yet little has been explored and known about this issue. Developing countries like Nepal have a high prevalence of self-induced abortion by self-administration of abortion pills, which implies multifactorial causation, such as lack of awareness of abortion services and abortion laws, lack of awareness of contraception and family planning, associated social stigma, easy over-the-counter availability of abortion pills, and more.
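Since every gestational age reported in this study was derived from the last menstrual period using Naegele's formula, a minimal illustrative sketch of that calculation is added here; the dates are hypothetical examples, not patient data.

from datetime import date, timedelta

def gestational_age(lmp, on):
    """Completed weeks + days of gestation on a given date, counted from the LMP."""
    days = (on - lmp).days
    return f"{days // 7}+{days % 7} weeks of gestation"

def naegele_edd(lmp):
    """Naegele's rule: expected date of delivery = LMP + 280 days (LMP + 1 year - 3 months + 7 days)."""
    return lmp + timedelta(days=280)

lmp = date(2019, 1, 1)                                 # hypothetical LMP
print(gestational_age(lmp, lmp + timedelta(days=54)))  # "7+5 weeks of gestation"
print(naegele_edd(lmp))                                # 2019-10-08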
v3-fos-license
2018-12-12T04:06:35.010Z
2014-03-09T00:00:00.000
56078856
{ "extfieldsofstudy": [ "Mathematics" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://downloads.hindawi.com/journals/amp/2014/327590.pdf", "pdf_hash": "107e3a5cac3e15cd4acc7748891655d88539b6df", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42296", "s2fieldsofstudy": [ "Physics", "Engineering" ], "sha1": "107e3a5cac3e15cd4acc7748891655d88539b6df", "year": 2014 }
pes2o/s2orc
Simulation of Impinging Cooling Performance with Pin Fins and Mist Cooling Adopted in a Simplified Gas Turbine Transition Piece The gas turbine transition piece was simplified to a one-four cylinder double chamber model with a single row of impinging holes in the outer wall. Heat transfer augmentation in the coolant chamber was achieved through the use of pin fin structure and mist cooling, which could increase the turbulence and heat transfer efficiency. The present research is focused on heat transfer and pressure characteristics of the impinging cooling in the coolant chamber using FLUENT software. With the given diameter of impinging hole, pin fin diameter ratios D/d have been numerically studied in ranges from 1 to 2. Three different detached L were simulated. The impinging cooling performance in all cases was compared between single-phase and two-phase (imported appropriate mist) flow in the coolant chamber. All the simulation results reveal that the factors of L andD/d have significant effects on the convective heat transfer. After the pin fin structurewas taken, the resulting temperature decrease of 38.77 K atmost compared with the result of structure without pin fins. And with the mist injecting into the cooling chamber, the area weighted average temperature got a lower value without excess pressure loss, which could satisfy the more stringent requirements in engineering. Introduction Low installation cost and higher output have made gas turbine a popular power plant in the modern industries.Gas turbines have been widely used in the domain of aerospace and ship propulsion for more than half a century.To meet the increasing requirement of power, research and improvement of heat efficiency of gas turbines have become the hottest research focus.To improve the thermal efficiency and output of gas turbines, the temperature of working fluid is usually elevated higher than the metal melting.Spontaneously, the components of gas turbine, such as combustion chamber, combustor transition pieces, and turbine blades, need to be protected from the high temperature gas by cooling technology.A wide variety of cooling technologies including film cooling and impinging cooling have been successfully used in cooling of these hot components for a few decades. Impinging jets are used in many applications for providing high heat/mass transfer rates.Compared to other heat or mass transfer arrangements that do not employ phase change, the jet impingement device offers efficient use of the fluid and high transfer rates.In turbine applications, jet impingement may be used to cool several different sections of the engine such as the combustor case, turbine case/liner, and the critical high temperature turbine blades [1]; also the transition piece used this cooling method mostly.General applications and performance of impinging jets had been discussed in a number of reviews [2][3][4].The jet impingement angle has an effect on heat transfer and was studied frequently [2,3].Besides, some other parameters also have important effects on the impinging characteristics.Dano et al. [5] researched the effects of nozzle geometry on the flow characteristics and heat transfer performance.Cheong et al. [6] experimentally measured local heat transfer coefficients under an impinging jet with low nozzle-to-plate, /, and spacings. 
Meanwhile, pin fin cooling is often employed to protect the hot sections from thermal degradation while extending the durability.It is a commonly used method in the trailing edge of the gas turbine blades and many channel flows [7][8][9][10].Horbach et al. [8] describe an experimental study on trailing edge film cooling using coolant ejection.The experimental test investigated the effects of different pin fins geometric 2 Advances in Mathematical Physics configurations, and the result shows that the elliptic pin fins have a strong effect on discharge behavior as well as on cooling effectiveness and heat transfer. In the literature [9,10], experimental and numerical studies of heat transfer performance in channels with pin fins were conducted by air.The results show that the channels with pin fins had a heat transfer coefficient that was twice that of the channels without pin fins.The numerical computations showed the same trends as experimentally observed by the heat transfer enhancement through pin fin structure adopted.Heat transfer from pin fin parameters is a subject of high importance with many engineering applications. Most of the above works are based on geometric parameters of the cooling structure; in real applications, new cooling techniques are another way to get enhancement of the conventional impinging cooling.Wang and Li [11] proposed a promising technology to enhance air film cooling with mist (small water droplets) injection.Each droplet plays the part of cooling sink, and it flies a distance before it completely vaporizes.And the droplet evaporation plays an important role in reducing the temperature near the hot wall. Li and Wang [12] conducted their first numerical simulations of air/mist film cooling.They showed that injecting some appropriate mist in the air could enhance the cooling effectiveness to about 30-50%.After that they [13] continued a more fundamental study on investigating the effect of various models on the computational results including the turbulence models, dispersed phase modeling, and different forces models (Saffman, thermophoresis, and Brownian). Whereas all the studies mentioned above about mist cooling focus on film cooling style and the previous literatures about impinging cooling and pin fin cooling structure mainly focus on gas turbine blades structure, electronic equipment, and many other channel flow.Although Yu et al. [14] introduce the mist into impinging cooling technology, the cooling structure does not contain pin fin structure.And those literatures about pin fins focus on some pin fin structures, such as cross-sectional shape, detached space between the pin tip and the end wall, which is a limited work about the impinging characteristics through combining pin fin structures, mist cooling, and impingement. 
Based on the aforementioned cooling structure and mist film cooling mechanisms, limited work is given to investigate the impinging cooling performance in this study.A model of a one-fourth cylinder is designed which could simplify the impinging structure and performance used in gas turbine transition piece.The objective of this paper is to use CFD simulation to investigate impinging cooling performance with pin fin importing to the coolant chamber.Two significant effect factors in combination with impinging cooling and pin fin structure were investigated using numerical simulations.All the cases were simulated in two situations; one is only air in the coolant and another is air with mist together in the cooling chamber, which is helpful in getting a better cooling effectiveness. Accordingly, the main objectives of the investigation are as follows.(1) / analysis: with the given diameter of impinging hole, = 10.26 mm, pin fin diameter ratio / has been numerically studied in three different values, 1, 1.5, and 2. (2) Detached space analysis: three different detached spaces from pin fin array to impinging hole array (mark as = 34, 51, and 68 mm) were simulated.(3) Comparison of results: the temperature of the inner wall, cooling effectiveness, and contours of velocity in different cases.(4) Discuss the impinging cooling performance between single-phase and two-phase flow in the cooling chamber with import appropriate mist (small droplets) into the coolant (air). Numerical Model 2.1.Geometric Configuration.The transition piece was simplified to a one-fourth cylinder, which could simulate the transition piece's structure and performance [15,16].The discrete coolant jets, forming a protective film chamber on the side of transition piece, are drawn from the upstream compressor in an operational gas turbine engine.The coolant flows fed through internal passages with surface holes.From the supply plenum, the coolants ejected through three discrete impinging holes over the external boundary layer against the local high thermal conduction on the other side of the transition piece.In the downstream of these impinging holes, three pin fins were brought into the cooling chamber.A schematic diagram of the flow domain along with boundary conditions and dimensions is given in Figure 1. As shown in the figure, the one-fourth cylinder model has two layers of chambers with a length of 1050 mm, and the outer chamber is right the domain full of coolant, which is our major research object.The dimensions of the chambers are, respectively, defined as an outer radius and an inner radius of 200 mm and 162 mm.In the diagram, one side of the coolant chamber is closed; contrarily, both sides of the mainstream chamber as the inner chamber are opened in which gas could flow through from one side to the other.The three holes distributed uniformly in one row along the circumferential direction in the surface of the outer wall.The distance between holes and the pin fins is marked as a variable parameter .The diameter of all the impinging holes is about 10.26 mm, that is, = 10.26 mm.Three groups of pin fin diameter ratio / (where means pin fin diameter) have been constituted, which is set to be 1, 1.5, and 2. Governing Equations. 
The present impinging and pin fin cooling study involves a flow that is steady, Newtonian, three-dimensional, incompressible, and turbulent. To solve this flow, the mass, momentum, energy, and species transport equations need to be solved. The continuity, momentum, and energy equations are given by [16]:
$$ \nabla \cdot \vec{v} = 0, \qquad (1) $$
$$ \rho\,(\vec{v}\cdot\nabla)\,\vec{v} = -\nabla p + \nabla\cdot\bar{\bar{\tau}} + \rho\vec{g} + \vec{F}, $$
$$ \rho c_p\,(\vec{v}\cdot\nabla)\,T = \nabla\cdot(\lambda\,\nabla T) + \Phi, $$
where p is the total pressure, $\bar{\bar{\tau}}$ is the stress tensor, $\rho\vec{g}$ and $\vec{F}$ are the gravitational body force and external body forces, $\Phi$ is the viscous heating dissipation, and the heat flux is given by Fourier's law.

Turbulence Model. A widely used turbulence model is the realizable k-ε turbulence model, which is a relatively recent development. The term "realizable" means that the model satisfies certain mathematical constraints on the Reynolds stresses, consistent with the physics of turbulent flows [17]. The benefit of selecting this model is that it more accurately predicts the spreading rate of jets and provides superior performance for rotating, separating, and recirculating flows. In this model, the k equation is the same as in the RNG model; however, $C_\mu$ is not a constant and varies as a function of the mean velocity field and turbulence (0.09 in the log-layer, where $Sk/\varepsilon = 3.3$, and 0.05 in the shear layer, where $Sk/\varepsilon = 6$). The ε equation is based on a transport equation for the mean-square vorticity fluctuation [18], with $C_1 = \max\left[0.43,\; \eta/(\eta + 1)\right]$ and $C_2 = 1.0$. This model is used with standard wall functions to predict the flow structure and heat transfer over the inner wall cooling surface.

Dispersed-Phase Model/Water Droplets (Mist). Based on Newton's second law, the droplet trajectory is traced by
$$ m_d \frac{d\vec{v}_d}{dt} = \sum \vec{F}, \qquad (3) $$
where $m_d$ is the droplet mass and $\vec{v}_d$ is the droplet velocity. $\sum \vec{F}$ is the combined force on the droplet particle, which normally includes the hydrodynamic drag, gravity, and other forces. The energy balance for any individual droplet can be given as
$$ m_d c_p \frac{dT_d}{dt} = h A_d \,(T_\infty - T_d) + \frac{dm_d}{dt}\, h_{fg}, \qquad (4) $$
where $h_{fg}$ is the latent heat. The convective heat transfer coefficient h can be obtained with an empirical correlation [19]. The mass change rate/vaporization rate in (4) is governed by the concentration difference between the droplet surface and the air stream,
$$ \frac{dm_d}{dt} = -k_c A_d\,(C_s - C_\infty), \qquad (5) $$
where $k_c$ is the mass transfer coefficient and $C_s$ is the vapor concentration at the droplet surface, which is evaluated by assuming that the flow over the surface is saturated; $C_\infty$ is the vapor concentration of the bulk flow. When the droplet temperature reaches the boiling point, the following equation can be used to evaluate its evaporation rate [20]:
$$ -\frac{d(d_d)}{dt} = \frac{4\lambda_g}{\rho_d\, c_{p,g}\, d_d}\left(1 + 0.23\sqrt{Re_d}\right)\ln\!\left[1 + \frac{c_{p,g}\,(T_\infty - T_d)}{h_{fg}}\right], \qquad (6) $$
where $\lambda_g$ is the gas/air heat conductivity and $c_{p,g}$ is its specific heat. The instantaneous turbulence effect on the dispersion of particles can be considered by using stochastic tracking. The droplet trajectories are calculated with the instantaneous flow velocity $(\bar{u} + u')$, and the velocity fluctuations are given as
$$ u' = \zeta \sqrt{2k/3}, \qquad (7) $$
where $\zeta$ is a normally distributed random number. This velocity applies during the characteristic lifetime of the eddy, $t_e$, a time scale calculated from the turbulence kinetic energy and dissipation rate. After this time period, the instantaneous velocity is updated with a new value until a full trajectory is obtained. A more detailed discussion of the stochastic method and the two stages of evaporation is given by Li and Wang [21], and more numerical details are given in FLUENT [17].
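To illustrate how the droplet equations above can be marched in time outside of FLUENT, the short Python sketch below performs an explicit-Euler integration of the mass and energy balances for a single water droplet. It is a schematic toy only: the heat and mass transfer coefficients, the vapor concentrations, and the surrounding air temperature are assumed constants chosen for readability, not values taken from the paper or from FLUENT's empirical correlations, and the boiling-regime law (6) is not included.

import math

# Assumed, illustrative constants (not values from the study).
RHO_W, CP_W, H_FG = 998.0, 4182.0, 2.26e6  # water density [kg/m^3], specific heat [J/(kg K)], latent heat [J/kg]
H, K_C = 1.5e4, 0.05                       # assumed heat [W/(m^2 K)] and mass [m/s] transfer coefficients
C_S, C_INF = 0.05, 0.01                    # assumed vapor concentrations at the surface / in the bulk [kg/m^3]
T_AIR = 350.0                              # assumed local air temperature [K], kept below boiling on purpose

def euler_step(d, T, dt=1.0e-5):
    """Advance droplet diameter d [m] and temperature T [K] by one explicit Euler step."""
    area = math.pi * d ** 2
    mass = RHO_W * math.pi * d ** 3 / 6.0
    evap = K_C * area * (C_S - C_INF)            # vaporization flux, magnitude of Eq. (5) above [kg/s]
    # droplet mass decreases, dm/dt = -evap, so the latent-heat term in Eq. (4) acts as a sink
    dT_dt = (H * area * (T_AIR - T) - evap * H_FG) / (mass * CP_W)
    mass = max(mass - evap * dt, 0.0)
    d_new = (6.0 * mass / (math.pi * RHO_W)) ** (1.0 / 3.0)
    return d_new, T + dT_dt * dt

d, T = 5.0e-6, 300.0  # 5 micron droplet, matching the injected droplet size in the study
for _ in range(200):
    d, T = euler_step(d, T)
    if d < 1.0e-7:
        break
print(f"after integration: diameter = {d * 1e6:.2f} um, temperature = {T:.1f} K")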
In the case of impinging cooling simulation, boundary conditions are applied to specify the flow and thermal variables.Figure 1(a) discloses the boundary conditions used in the model in which the coolant and gas are moving along, respectively, in the two layers of chambers in opposite directions.In the cooling chamber, the coolant is considered to be two situations. One is single-phase analysis.The coolant is only air, while velocity and temperature are set on the jet holes, with pressure on the exit.Another situation is two-phase flow in the cooling chamber, which is considered mist added in the air.Additional boundary conditions for mist injections are specified.The size of droplets is uniformly given as 5 microns, and the masses of water droplets are 0.003 kg/s.The similar conditions have been successfully used in film cooling technology by Subbuswamy and Li [22].The solid walls are assigned as "reflect" boundary condition, which enables the droplets to elastically rebound once hitting the wall.The outer inlet is specified as "escape" boundary condition for droplets so that they can enter into the cooling chamber from the inlet surface. In the chamber of mainstream, the gas flow is a mixture of O 2 , H 2 O, CO 2 , and N 2 , as well as some rare gasses, which is confirmed based on several real applications.Uniform mass flux rate of 31.46 kg/s is assigned to the inlet of the gas chamber, and other details about boundary conditions are listed in Table 1.Assumption of the solid wall of the model is formed with a material of Nimonic 263, which could get the information from the Internet [23]. Meshing and Simulation Procedures To conduct numerical simulations, structured grids are used in this study.As shown in Figure 2, the structured meshes are generated to two domains, which is on behalf of the coolant air chamber and gas chamber separately.In the 3D cases, the grid sensitivity research started from 150,000 meshes until the temperature result changed less than 1% when the total number of the cells for the 3D domains is about 500,000.All the cases concerning different parameters and conditions are meshed with the same proper setup on the boundary and got the similar total mesh number.Local grid refinement is used near the holes and pin fin regions.For all cases, all nodes on the inner wall surface have the plus value smaller than 300. This study is using a commercial CFD code based on the control-volume method, ANSYS-FLUENT 12.1, which is used in order to predict temperature, impingement cooling effectiveness, and velocity fields.All runs were made on a PC Results and Discussion Based on the reliable computation model, the results obtained with different pin fin diameter ratios / and different detached spaces from pin fin array to impinging holes array are presented in order to validate the CFD simulation mentioned above so that the performance of introducing pin fin to impinging structure would be studied well.In order to be convenient for analysis description, cases for different and / are written as Case 1 to Case 10 shown in Table 2. Flow Structure. 
Figure 3 shows a sequence of the streamwise velocity magnitude contours along the -axis in various cases.The geometric center of the impinging jet in the coolant chamber is represented by the red region, which is coolant holes at = 525 mm.After the coolant strikes the inner wall, there are vortexes formed.The jet impingement and the vortex formed out of the coolant flow cool the surface of the inner wall.The impinging flow does not detach from the wall but creates three regions around the impinging jet, which are called free jet region, stagnation flow region, and wall jet region, just as is mentioned in [24].The coolant-affected wall jet region is displayed as light-colored area, which can be recognized around the red free jet region from each of the Cases 1 to 10.And the cold film layer with a thickness of about half jet diameter blankets the inner wall surface quite efficiently around the wall jet region.Case 1 was set to be a cooling structure without pin fins.Comparing Case 1 with other cases in Figure 3, it can be obviously found that most of those cases have more faster velocity areas in the coolant chamber than that in Case 1.With speed increasing, higher convective heat efficiency can be obtained due to more heat that had gone away with the coolant. In the red dashed wireframe of Figure 3, local region comparison among Cases 2-4, which have the same / = 1, presents the different flow structure around the pin fin under the circumstances of three sorts of .Note that after the coolant strikes the inner wall, it meets the pin fin and is obstructed and separated by it; thus vortexes take shape between the outer wall and inner wall.So that coefficient of heat transfer were increased with the increasing of the tempestuously flow of the vortexes.But in contour of Case 4, velocity in the upstream flow of the pin fins is not faster than that in the downstream.That is because when the pin fins are shifted downstream from the geometric center of the jet, the space between impinging hole and pin fin gets further so that the fast coolant could not get in touch with the pin fin.In a nutshell, the rising of distance clearly reduces the role of pin fins in forming turbulence for / = 2. But, for Cases 8-10 while / = 2 as well as Cases 5-7 when / = 1.5, the median of could be the best because turbulence formed is more sufficient when = 51 seen in the contours. Inner Wall Temperature and Cooling Effectiveness. 
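For reference, since Table 2 itself is not reproduced here, the following small Python mapping reconstructs the case matrix as it can be inferred from the discussion above: Case 1 without pin fins, Cases 2-4 with D/d = 1, Cases 5-7 with D/d = 1.5, and Cases 8-10 with D/d = 2, each group swept over L = 34, 51, and 68 mm. The assignment of L values within each group in ascending case order is an assumption based on how the cases are discussed, not taken directly from Table 2.

# Inferred case matrix for the simulations (hole diameter d = 10.26 mm throughout).
# Case 1 has no pin fins; the ascending L ordering within each D/d group is assumed.
D_OVER_D = {2: 1.0, 3: 1.0, 4: 1.0, 5: 1.5, 6: 1.5, 7: 1.5, 8: 2.0, 9: 2.0, 10: 2.0}
L_MM = {2: 34, 3: 51, 4: 68, 5: 34, 6: 51, 7: 68, 8: 34, 9: 51, 10: 68}
d_mm = 10.26
cases = {1: {"pin_fins": False}}
for case, ratio in D_OVER_D.items():
    cases[case] = {"pin_fins": True, "D_over_d": ratio,
                   "pin_diameter_mm": round(ratio * d_mm, 2), "L_mm": L_MM[case]}
print(cases[6])  # best-performing case in the text: D/d = 1.5 (D = 15.39 mm), L = 51 mm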
Figure 4 shows a comparison of temperature distribution on the inner wall with a group of changing parameters, which is the conventional model in actual situations.This figure shows that the temperature at the stagnation flow region is much the same among the cases with pin fins, while the lower temperature area (blue region) is larger than that in Case 1, which illustrates that the combination of impinging holes and pin fins has a good effect on reducing temperature in the stagnation flow region.For all the Cases from 1 to 10, the different colors distribution represents corresponding temperature which is described in detail on the temperature legend.Overall, it could be concluded that the yellow area in Cases 2-10 is larger than that in Case 1; that means the pin fin structure imported to the impingement cooling style enhances the impinging cooling effectiveness.One important factor that must be considered in the design of pin fin is the detached spaces from pin fin array to impinging holes array.The pin fins should be placed in a suitable position in the downstream flow in order to acquire enough turbulence.Therefore, Figures 5(a), 5(b), and 5(c) provided temperature values along direction at Line 1, which is in the middle of the inner wall just as shown in Figure 5.They represent temperature comparison of three kinds of in different / cases separately.It can be observed from Figure 5(a) that the temperature becomes lower with the decrease in detached space, and the lowest temperature is obtained when = 34 mm.The = 34 mm cases also can be proved to be the best distance in Figure 5(c).But in Figure 5(b) there is a little variation in the upstream where belongs to 0-0.5 m, and the cases with distance of = 51 mm could get a lower average temperature. The cooling effectiveness () [22] is used to examine the performance of impinging cooling.The definition of is where is impinging cooling effectiveness, is mainstream temperature, is absolute temperature on the inner wall, and is the temperature of coolant.The definition provides an appropriate parameter for investigating impinging cooling when different pin fin structures are employed.Figure 6 shows the value profile of , through the comparison of Cases 2, 3, and 4, and it is obvious that appropriate increase in the diameter of pin fins brings more area of higher .Since the contact area between the coolant and the pin fin is increasing along with the / augmenting, bring about more heat transmisson.But it cannot fairly show the cooling enhancement induced by pin fins because Case 6 seems to have the highest value of cooling effectiveness.In other words, for different cases, a weighted variable should be imported to evaluate the comprehensive temperature on the inner wall. Consequently, ave is defined as area weighted average temperature on the inner wall; its expression is represented as follows: where is the area of the inner wall, is the number of wall mesh, and is the total number of mesh.The area weighted average temperature is used to study the value of comprehensive temperature, and the value of ave is able to evaluate the performance of impinging cooling.For all the ten cases without mist, the minimum value of Tave is 1174.46K appearing in Case 6, which agrees with the description of cooling effectiveness. 
Effect of Mist Added in the Impinging Coolant. Mist, with a uniform droplet size of 5 µm and a mass flow of 0.003 kg/s, is injected into the cooling chamber together with the air from the inlet surface. As documented and mentioned in the introduction, injecting mist into the air can improve the cooling effect. The trajectories of the small water droplets from the coolant inlet can be seen in Figure 7, which shows that the droplets, carried by the coolant jet from the impinging hole, impact the inner wall. All the droplets evaporate in the domain between 350 mm and 700 mm along the streamwise axis of the cooling chamber. However, the droplets persist for different times in the different cases and behave differently for the different pin fin structures; therefore, the distance travelled by the droplets can produce different cooling performance and temperature distributions. To explain the effect of mist added to the impinging cooling, a comparison of $T_{ave}$ in the different cases is given in Figure 8, including all cases with and without mist. Further analysis of Figure 8 shows that, after the mist is introduced into the impinging cooling chamber, $T_{ave}$ decreases by between 0.19 K and 26.86 K compared with the situation without mist. Through the combination of mist cooling and impinging cooling, the inner wall obtained an average $T_{ave}$ decrease of 8.96 K. The best case without mist is indeed Case 6, as mentioned in the previous section, with a value of 1174.46 K, a decrease of 38.77 K compared with Case 1. Likewise, when droplets are injected into the coolant, Case 6 still attains the lowest area-weighted average temperature. Further data analysis reveals that there is no obvious pressure drop when the various pin fin structures are introduced and mist is added to the coolant. Consequently, using the pin fin structure and mist cooling is a good method to obtain a lower temperature without excess pressure loss.

Conclusion. A one-fourth cylinder double-chamber model with a single row of impinging holes in the outer wall was simplified from the gas turbine transition piece; the operating conditions selected in this paper feature high temperature, high pressure, and high coolant velocity. A complete three-dimensional numerical simulation was employed with different pin fin structures adopted in the impinging cooling, and mist cooling technology added to the impinging cooling was tested and investigated. It has been shown that the pin fins can enhance the cooling effectiveness without excess pressure loss, and that the temperature reaches a lower value when mist is injected into the cooling chamber. The optimal combination of the pin fin parameters and imported mist brings the optimum impinging cooling efficiency without excess pressure drop; the lowest value of $T_{ave}$ shows a decrease of 42.65 K, which is a considerable value in cooling the hot components. That is to say, the procedure and results of the CFD simulation in this paper have practical value.

Figure 5: Comparison curves of T along Line 1. Figure 7: Distributions of droplet particle tracks in the different cases with uniform droplet mass and diameter. Table 2: Case numbers for the different geometric parameters.
v3-fos-license
2022-06-25T15:13:48.502Z
2022-06-22T00:00:00.000
250006169
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1424-8247/15/7/778/pdf?version=1655899547", "pdf_hash": "b53311687750933d37210bbf8e5112976eb832db", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42297", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "sha1": "13866313a08fbbd724458fba18c5f0e9a25bd352", "year": 2022 }
pes2o/s2orc
Development of (4-Phenylamino)quinazoline Alkylthiourea Derivatives as Novel NF-κB Inhibitors For many inflammatory diseases, new effective drugs with fewer side effects are needed. While it appears promising to target the activation of the central pro-inflammatory transcription factor NF-κB, many previously discovered agents suffered from cytotoxicity. In this study, new alkylthiourea quinazoline derivatives were developed that selectively inhibit the activation of NF-κB in macrophage-like THP−1 cells while showing low general cytotoxicity. One of the best compounds, 19, strongly inhibited the production of IL-6 (IC50 = 0.84 µM) and, less potently, of TNFα (IC50 = 4.0 µM); in comparison, the reference compound, caffeic acid phenethyl ester (CAPE), showed IC50s of 1.1 and 11.4 µM, respectively. Interestingly, 19 was found to block the translocation of the NF-κB dimer to the nucleus, although its release from the IκB complex was unaffected. Furthermore, 19 suppressed the phosphorylation of NF-κB-p65 at Ser468 but not at Ser536; however, 19 did not inhibit any kinase involved in NF-κB activation. The only partial suppression of p65 phosphorylation might be associated with fewer side effects. Since several compounds selectively induced cell death in activated macrophage-like THP−1 cells, they might be particularly effective in various inflammatory diseases that are exacerbated by excess activated macrophages, such as arteriosclerosis and autoimmune diseases. Introduction Chronic inflammatory and autoimmune diseases are characterized by dysregulated production of pro-inflammatory cytokines and macrophage infiltration, which might contribute to detrimental tissue remodeling and destruction [1][2][3][4]. The commonly prescribed anti-inflammatory drugs have potentially serious side effects: NSAIDs are known for their potential to induce gastrointestinal ulceration and bleeding [5]. Many antirheumatic drugs, such as methotrexate, sulfasalazine, and hydroxychloroquine, may cause an increased risk of infection and hepatotoxicity [6,7]. Anti-inflammatory glucocorticoids are effective, but the therapy has metabolic and cardiovascular side effects, and long-term treatment is limited due to the development of resistance [8,9]. Antibodies blocking the major proinflammatory cytokine TNFα can cause new-onset neurologic symptoms in rheumatoid arthritis patients, associated with demyelinating lesions of the CNS [10,11], and may elicit immunogenic responses [12]. Moreover, a significant proportion of rheumatoid arthritis patients and up to 40% of the patients with inflammatory bowel diseases fail to achieve a long-term clinical response [13,14]. Instead of targeting TNFα and other single cytokines, recent drug development approaches aimed at the inhibition of the central pro-inflammatory transcription factor NF-κB, which would result in the dampening of the inflammatory response as a whole (see Figure 1 for illustration). Most of these approaches focused on the inhibition of the upstream activator kinases IKKα/β, leading to the prevention of IκB phosphorylation, which normally keeps the inhibitory complex with the NF-κB dimer intact ( Figure 1). Several ATP-competitive and allosteric IKKα/β inhibitors have been described in the last few decades, including 2-amino-3,5-diarylbenzamides [15], imidazo [1,2-a]quinoxaline derivatives [16,17], 4-Phenyl-7-azaindoles [18], and thiazolidine-2,4-diones [19,20]. The best compound, 8h (from Ref. 
[15]), was highly selective for the IKKα/β kinases in a screening panel of 150 kinases and displayed a cell free potency against IKKβ of 100 nM, while the translocation of NF-κB to the nucleus was blocked at 2 µM. However, the selectivity of IKKβ, the actual activator of NF-κB release, over the closest homologue, IKKα, was only four-fold. In this respect, the earlier-discovered imidazo [1,2-a]quinoxaline BMS-345541 offered some advantage as it was more selective for IKKβ over IKKα, with cell-free IC 50 s of 0.3 and 4 µM, respectively. It was proposed to bind to a yet-unidentified allosteric site on the catalytic subunit. The IC 50 s for the suppression of IL-1β and TNFα production in the THP−1 macrophage model cell line were in the range of 5 µM. Notably, BMS-345541 was also effective in the inhibition of TNFα in mice after administration of LPS. The 4-Phenyl-7-azaindoles were among the most potent IKKβ inhibitors, with compound 16 from the work of Liddle et al. displaying an IC 50 of 40 nM, an excellent selectivity over IKKα, and, further, 36 screened kinases [18]. In a cellular reporter gene assay, 16 inhibited the NF-κB activation with an IC 50 of 0.8 µM [18]. Elkamhawy and co-workers reported thiazolidine-2,4-dione-based IKKβ inhibitors with an irreversible, allosteric mode of action [19,20]. The best compounds, 6v [21] and 7a [20], inhibited IKKβ in the cell free assay with IC 50 values of 0.4 and 0.2 µM, respectively, while the potencies to suppress the TNFα production in rat RAW 264.7 macrophage-like cells were 1.7 and 6.3 µM, respectively. Further, 7a was also active in an in vivo sepsis model in mice [20]. However, the inhibition of IKKβ leads to severe on-target toxicities, including potential tumor-promoting effects, thus lacking clinical success so far (reviewed in Ref. [22]). Many of these compounds lack drug-likeness and raise concerns with respect to cytotoxicity as they originated as chemical defense mechanisms using cytotoxic secondary metabolites [27]. Some natural compounds known to exhibit pleiotropic activities were reported to also inhibit NF-κB, e.g., curcumin [28,29] and caffeic acid phenethyl ester (CAPE) [30]. CAPE inhibited the activation of NF-κB in cells with an IC 50 between 10 and 20 µM, for which the Michael reaction acceptor and the catechol motif in the structure were required [31]. The canonical pathway of NF-κB activation with the most common points of pharmacological intervention. The pathway is activated by signals from various immune-related receptors. In stimulated pro-inflammatory cells, ligand-receptor interactions initially activate the kinase TAK1. Subsequently, TAK1 activates the IKK complex by phosphorylation, leading to IκBα phosphorylation and ubiquitin-dependent proteasomal degradation. Usually, the released NF-κB dimer is rapidly translocated to the nucleus, where it initiates the expression of specific genes; however, phosphorylations on Ser536, Ser468, and other sites on the RelA subunit further regulate stability, DNA-binding activity, and nuclear import. Some classes of inhibitors that block distinct steps in this pathway are indicated by red bars. An inhibition of the proteosome, which also affects the degradation of IκBα [18] (Figure 1), besides many other protein targets, is exerted by several FDA-approved anti-cancer drugs; however, this strategy induces severe adverse effects, which even concerns the secondgeneration proteasome inhibitors, such as carfilzomib and ixazomib [32][33][34][35]. 
In general, many inhibitors of the NF-κB signaling failed in the clinic because of adverse toxic effects [36]. We previously reported a series of benzylthiourea quinazoline derivatives as the first bispecific EGFR/NF-κB inhibitors, which exhibited anti-cancer activity in vitro and in vivo ( Figure 2, compound A). Basically, these compounds might potentially also be evaluated as anti-inflammatory compounds; however, EGFR kinase inhibition does not contribute to the treatment of inflammatory diseases; rather, it is associated with known on-target side effects, such as cutaneous toxicity [33]. In keratinocytes, EGFR kinase inhibitors cause the induction of growth arrest and apoptosis, which eventually stimulate inflammation [34]. Hence, a potential anti-inflammatory agent should avoid the inhibition of EGFR kinase [35]. Of note, some of the previously published benzylthiourea quinazoline derivatives (compounds B and C, Figure 2) showed a tendency for predominant inhibition of NF-κB, suggesting that it might be possible to further enhance this activity while reducing the binding affinity to EGFR kinase. In the present study, we optimized the scaffold using a diversification strategy, resulting in novel NF-κB inhibitors with largely reduced EGFR kinase inhibitory activity and low general cytotoxicity. A biological evaluation in macrophage-like THP−1 cells revealed great potential as anti-inflammatory agents. Thiourea quinazoline derivatives from our previous series [35]. a NF-κB inhibition values were derived from a cell-based activity reporter assay, whereas the IC 50 s for EGFR kinase inhibition were obtained in a cell-free kinase assay (at 100 µM ATP). The codes in brackets denote the old compound names from Ref. [35]. Compound Design Our goal was to reduce the binding affinity of the benzylthiourea quinazoline derivatives to the ATP binding pocket of EGFR kinase while retaining the best NF-κB inhibitory activities of the dual inhibitors in the IC 50 range from 0.3 to 0.7 µM [37]. Although the introduction of a bulky substituent in the 4-position of the aminophenyl group in the published compounds B and C had reduced the inhibition of EGFR kinase compared with A, further modifications at this position did not seem promising because the potency against NF-κB did not increase in parallel but rather dropped with the bulkiness of the substituent in C. Therefore, we decided to explore whether modification at the opposite molecule end could also lead to reduced affinity against EGFR kinase, hopefully without impairing the NF-κB inhibition. To this end, we investigated the potential binding mode of the previously published compound A in EGFR kinase using molecular docking to evaluate whether substitutions at the benzyl ring might impede binding to the ATP pocket. In addition, the benzyl ring was virtually replaced by a bulky t-butyl group to generate a probe compound, whose binding to EGFR kinase was also analyzed by molecular docking (this virtual compound was synthesized later as compound 20, Scheme 1). Such alkyl thiourea derivatives might offer several advantages compared with substituted benzyl derivatives: a potentially lower molecular weight and overall lipophilicity, as well as a reduced aromatic ring count, which might improve the drug-likeness. 
The docking results with compound A showed that the benzyl ring could bind at several positions of the pocket rim but was always in close distance (≤4 Å) to hydrophilic moieties, consisting either of polar side chains and/or of carbonyl groups from the glycinerich loop. A representative binding pose is depicted in Figure 3A. The quinazoline ring system and the 4-aminophenyl superimposed well with the corresponding moieties of gefinitib in the original co-crystal structure (overlay not shown). In this pose, the benzyl group was predicted to be within a close distance to Cys797 and Arg841 (3.89 and 3.93 Å, respectively), while, in other poses, enabled by the rotational flexibility of the benzyl group, the carbonyl groups of Leu718 or Gly719 were approached (indicated by the orange semicircle). Hence, we surmised that substituents at the phenyl ring would diminish the overall binding affinity by a steric collision with one or more of the named residues. To this end, we planned to introduce a variety of non-polar substituents since polar moieties had been found to be detrimental to the NF-κB inhibitory activity in our previous study [35]. When the t-butyl-modified probe compound (later named 20) was docked into the EGFR kinase ATP pocket, the increased bulkiness at the thiourea group inevitably caused even more close distances to the pocket residues, as shown in Figure 3B. With the highaffinity interactions of the 4-phenylamino quinazoline core being retained, the t-butyl moiety was forced to bind in an unfavorable hydrophilic environment formed by the carbonyl of Leu718 and the Asp800 carboxylate (distances: 3.97 Å and 3.51 Å, respectively). It should be noted that these distances were already maximized at the expense of an energetically unfavorable eclipsed conformation between a t-butyl methyl group and the thiourea hydrogen. Altogether, the alkyl modification strategy seemed very promising with respect to the abolishment of EGFR kinase inhibition. Chemistry For the generation of our quinazoline derivatives, we adopted a flexible approach that we previously developed, which would facilitate the derivatization at the last step. We started with the synthesis of the quinazoline nucleus, which was conducted through two steps. The first step involved the formation of formimidate derivative a (Scheme 1) by refluxing 2-amino-5-nitrobenzonitrile with triethyl orthoformate (TEOF) in the presence of drops of acetic anhydride (Ac 2 O). This was followed by a cyclization step to yield the quinazoline nucleus, whereby the formimidate derivative a was refluxed in acetic acid with substituted anilines to yield the nitroquinazoline derivatives b1-c1. The reduction of the nitroquinazoline derivatives was completed by refluxing with stannous chloride (SnCl 2 ) in methanol under a nitrogen atmosphere to yield the aminoquinazoline derivatives b2-c2. The reduction step was clearly confirmed by the appearance of a signal in the proton NMR spectra at 5.62 ppm, corresponding to the two protons of the "NH 2 ". The synthesis of novel thiourea derivatives was achieved through one of two strategies; the first approach (utilized to synthesize compounds 1-11 and 17-20) involved reacting the aminoquinazoline with thiophosgene (S=CCl 2 ), yielding the isothiocyanate derivative, which was then stirred with the respective amines to produce thiourea derivatives. 
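The distance-based clash analysis summarized in Figure 3 can be reproduced outside the modelling suite once the docked pose is exported to coordinates; the minimal numpy sketch below flags ligand atoms that lie within a chosen cutoff of polar pocket atoms. The coordinates, atom labels, and the 4.0 A cutoff are illustrative assumptions and were not exported from the MOE session used in this work.

import numpy as np

# Hypothetical coordinates (Angstrom) of a few ligand atoms and polar pocket atoms.
ligand_atoms = {"tBu_C1": np.array([12.1, 4.3, 20.7]), "tBu_C2": np.array([13.0, 5.1, 21.6])}
pocket_atoms = {"Leu718_O": np.array([10.5, 2.9, 18.4]), "Asp800_OD1": np.array([14.8, 6.0, 23.9])}

CUTOFF = 4.0  # Angstrom; contacts closer than this are treated as unfavourable polar contacts

for lig_name, lig_xyz in ligand_atoms.items():
    for poc_name, poc_xyz in pocket_atoms.items():
        dist = float(np.linalg.norm(lig_xyz - poc_xyz))
        if dist <= CUTOFF:
            print(f"{lig_name} -- {poc_name}: {dist:.2f} A (close polar contact)")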
The rest of the novel compounds (12)(13)(14)(15)(16) were synthesized via directly reacting the aminoquinazoline derivative with the corresponding isocyanate/isothiocyanate in DMF. The formation of the thiourea was confirmed with the appearance of a clear downfield signal for the (C=S) in the carbon NMR spectra at around 181 ppm. Moreover, the methylene (-CH 2 -) bridge available in compounds (1)(2)(3)(4)(5)(6)(7)(8)(9)(10)(11)(12)(13)(14) was observed in the proton NMR spectra as two protons appearing in the range of 4.37-4.95 ppm. . Design concept to abolish the EGFR inhibitory activity based on the predicted binding modes of compounds A (A) and 20 (B) in the ATP pocket of EGFR kinase. Molecular docking was performed to the coordinates of the gefitinib-EGFR cocrystal structure (PDB code: 4WKQ) as previously described using MOE [36,37]. The ATP binding pocket is shown as a transparent Connolly surface with color-encoded lipophilic (brown) and hydrophilic areas (cyan), and the ligands are drawn as orange sticks. The binding mode of the 4-aminophenyl quinazoline core superimposed well with the corresponding portion of gefitinib in the original cocrystal structure (overlay not shown for clarity). In all poses obtained, the benzyl ring in A and the t-butyl group in 20 were positioned in unfavorable close distances to polar residues at the pocket rim; these distances are shown as orange dashed lines in the depicted representative poses, and the residues are indicated. In other poses, the rotamers of the benzyl group in A closely approached the carboxylate of Asp800 or the carbonyls of Leu 718 or Gly719. The radius covered by the benzyl group is illustrated by the orange semicircle. The unfavorable eclipsed conformation of 20 (B) is indicated by the double arrow. The glycine-rich loop backbone is represented as a cord. Brown dashed lines: CH-π interactions; blue dashed lines: H-bonds. Benzylthiourea Quinazoline Derivatives We decided to explore the effect of diverse benzyl substitutions on the NF-κB and EGFR inhibitory activity. In this endeavor, a meta-chloro aniline residue was kept constant at the 4-quinazoline site, which ensured a good basal activity for NF-κB inhibition and was not too bulky, thus avoiding strong steric constraints upon binding. Unfortunately, decorating the benzyl moiety with different substituents in varying positions did not sufficiently diminish EGFR kinase inhibition (compounds 1-7, Table 1), suggesting that the substituted benzyl group can adopt binding modes in which the phenyl ring protrudes freely into the solvent so that steric clashes of the substituent(s) with the binding pocket are largely avoided. Although this also implies a loss of the contribution of the benzyl group to the binding, the remaining interactions of the 4-amino quinazoline core still accounted for a rather high binding affinity to the EGFR kinase ATP pocket. Several para substitutions of the phenyl ring were detrimental to the activity against NF-κB too (compounds 1, 3, 4, and 7). Only with compound 8 (Table 1), the EGFR kinase inhibition was reduced to 40% at 150 nM while still displaying significant NF-κB inhibitory activity. Bioisosteric replacement of the benzyl ring did not result in any improvement either (compounds 9-11) ( Table 1). Altogether, the benzyl moiety at the thiourea was found unsuitable to enhance the NF-κB suppression over the EGFR kinase inhibition; however, the cytotoxicity remained at a low level, with many NF-κB inhibitory benzyl derivatives (cf. 
A, 2, 6 and 8) proving that NF-κB suppression by this scaffold and cytotoxicity are not necessarily coupled. Next, we tried further benzyl modifications, such as benzyl urea and benzoyl (thio)urea motifs (12-16, Table 2); the rationale behind this was to increase the polarity of the benzyl and the thiourea linker by introducing carbonyl groups, which could lead to repulsion from hydrophobic areas of the EGFR kinase binding site. However, while we could observe a tendency towards weaker inhibition of EGFR kinase (13 and 15), the inhibitory activity against NF-κB was affected as well ( Table 2). Alkylthiourea Quinazoline Derivatives Pursuing our second design strategy, we finally replaced the benzyl group by several alkyl and cycloalkyl moieties (Table 3), also including the t-butyl group (20) originally tested as a virtual compound ( Figure 3B). Indeed, this strategy led to compounds with clearly reduced EGFR kinase inhibition, acceptable NF-κB suppression, and low cytotoxicity (19)(20). In summary, our design strategy to introduce bulk alkyl groups at the thiourea linker led to several new compounds with greatly reduced effects on EGFR kinase while being low µM inhibitors of NF-κB in the reporter gene assay, in particular 19 and 20. In the next step, we decided to evaluate the most promising compounds in a more meaningful inflammatory cell model using macrophage-like THP−1 cells. THP−1 is a human leukemia monocytic cell line that can be differentiated to a macrophage-like phenotype using, e.g., phorbol 12-myristate 13-acetate (PMA); these differentiated THP−1 (dTHP−1) cells are an established model to assess the effects of drugs or toxins on the pro-inflammatory macrophage activities by measuring the mRNA expression or release of cytokines [38,39]. Following differentiation, THP−1 cells can be stimulated, e.g., by bacterial lipopolysaccharides (LPS), similar to primary monocyte-derived human macrophages [40]. Evaluation of the Anti-Inflammatory Activities in Macrophage-like dTHP−1 Cells The most promising compounds from the present study were selected for an evaluation of their macrophage-modulatory activities (Table 4). We chose compounds 19 and 20, exhibiting the most balanced activity profile in the precedent assays, but also included further compounds, such as 2, 6, and 18, which still displayed moderate inhibition of EGFR kinase in the cell free assay but were optimal in the other criteria, in particular regarding cytotoxicity. Furthermore, we also included 7 and 17, which were weak inhibitors of the NF-κB activity in the reporter gene assay (IC 50 > 10 µM), to analyze whether the screening data from the NF-κB reporter assay in HEK293 cells would correlate with the potency to suppress cytokine release from macrophage-like cells. a Percentage of inhibition (Inh %) at 10 µM concentration in NF-κB luciferase-transfected HEK293 cells. Concentration necessary for 50% inhibition (IC 50 ). Results are presented as mean ± SEM. (n = 3). * p < 0.05, ** p < 0.01, *** p < 0.001 compared with the DMSO + TNFα (50 ng/mL) group. b Inhibition by the reference compound, CAPE, at 10 µM: 72.8% (± 5.7 SEM.). c Percentage of cell viability at 10 µM concentration. Results are presented as mean ± SEM. (n = 3). * p < 0.05, ** p < 0.01, *** p < 0.001 compared with the DMSO group. Following the differentiation of the THP−1 cells by PMA, the compounds were added to the medium and the inflammatory response of the cells was induced by LPS. 
After 24 h, the levels of TNFα and IL-6 released to the medium were analyzed (Table 4). Somewhat unexpectedly, we observed a strong suppression of IL-6 production by all the compounds, including those that had shown rather poor activity in the NF-κB activity reporter assay (7 and 17). Further, 6 was identified as the most potent compound, exhibiting an almost complete blockage of the IL-6 release at the 7.5 µM screening concentration and a sub-micromolar IC 50 of 0.36 µM. In contrast, it appeared more challenging to inhibit the production of TNFα because it is the predominant cytokine released by dTHP−1 cells and produced earlier and in considerably larger amounts than IL-6 following stimulation by LPS [41], although both IL-6 and TNFα are NF-κB-dependent genes. Hence, it could be expected that a more drastic inhibition of the NF-κB activation pathway would be required to significantly reduce the TNFα levels over 24 h. Our compounds were, accordingly, less potent to inhibit the TNFα production; nevertheless, several compounds achieved single-digit µM potencies, with 6 and 19 being the most potent (IC 50 = 4.4 and 4.0 µM, respectively), also outperforming the reference compound, CAPE. Interestingly, 17 and 7, classified as weak inhibitors in the NF-κB activity reporter assay, were also the least potent to suppress the TNFα expression in the dTHP-1 cells. In addition, three of the four compounds selected because of their good activity in the NF-κB reporter assay (6, 19, and 20) were also the most potent inhibitors of TNFα release. Thus, some correlation could be observed, but it was not stringent, which might be ascribed to the different background of transcriptional co-factors and further protein complex compositions in the NF-κB signaling pathway in HEK293 cells versus vs. dTHP−1 cells. In parallel, we also analyzed whether our NF-κB inhibitors would affect the viability of the LPS-stimulated dTHP−1 cells (Table 5). In the absence of LPS at the test concentration of 7.5 µM, the compounds exhibited only moderate to low cytotoxicity toward the macrophage-like cells; however, after the activation by LPS, some compounds showed a pronounced cytotoxic effect, particularly 6 and 19. In contrast, 17 and 7, which were the weakest NF-κB inhibitors, did not affect the cell viability at 7.5 µM, 7 not even at 15 µM. The reference compound, CAPE, also triggered enhanced cell death in the presence of LPS but to a lower extent, paralleling the less potent inhibition of TNFα production. Thus, it was evident that the strength to selectively kill activated macrophage-like dTHP−1 cells correlated with the potency to inhibit NF-κB, as indicated by the suppression of TNFα release. How can this selective cytotoxicity be explained? Macrophages are intrinsically resistant to TNFα-induced cell death; however, it was previously shown that, when NF-κB is inhibited in macrophages, TNFα alters the lysosomal membrane permeability, leading to the release of cathepsin B, with the subsequent loss of the inner mitochondrial transmembrane potential (∆Ψm) and cell death [42]. Thus, since the TNFα release is not completely blocked, small amounts of this cytokine in the medium can trigger cell death when the NF-κB pathway is sufficiently inhibited. 
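The IC50 values discussed above come from concentration-response measurements; the fitting procedure is not specified in the text, so the following sketch shows one common way such values can be derived, fitting a four-parameter logistic (Hill) model with SciPy to made-up example data rather than to the study's measurements.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic (Hill) model for % inhibition as a function of concentration."""
    return bottom + (top - bottom) / (1.0 + (ic50 / conc) ** hill)

# Made-up example data (concentrations in uM, % inhibition), not values from the study.
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
inhibition = np.array([5.0, 15.0, 42.0, 70.0, 88.0, 95.0])

params, _ = curve_fit(four_pl, conc, inhibition, p0=[0.0, 100.0, 1.0, 1.0], maxfev=10000)
bottom, top, ic50, hill = params
print(f"fitted IC50 = {ic50:.2f} uM, Hill slope = {hill:.2f}")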
Since 6 and 19, in particular, showed only little or no toxicity against HEK293 and A549 cells (Tables 1 and 3), they might be applicable to selectively deplete activated macrophages in inflammatory disorders, where the presence of tissue macrophages is generally associated with a poor clinical outcome. To close in on the point of intervention of our compounds within the NF-κB signaling pathway, we started by analyzing whether one of the best compounds, 19, inhibits the translocation of the NF-κB dimer (consisting of p65/RelA and p50) to the nucleus. After the stimulation of dTHP−1 cells by LPS, it could be observed that 19 clearly inhibited the nuclear translocation of NF-κB (Figure 4A,B). This result confirmed that the inhibition of cytokine expression was due to the retention of the NF-κB dimer in the cytoplasm, thus preventing the NF-κB-dependent transactivation of the target genes. Up to this point, the mechanism of action of our compounds was indistinguishable from that of the reference compound, CAPE.
[Figure caption fragment: the normalized phospho-NF-κB p65 (S468) and IκB levels were quantified; all data are expressed as mean ± SEM (n = 3); ** p < 0.01 and *** p < 0.001 compared with the DMSO + LPS group.]
Compound 19 Does Not Inhibit IκB Degradation and NF-κB Release
A frequently exploited intervention site targeted by many previous compounds is the IκB kinase (IKK), which phosphorylates the inhibitory protein IκBα, thus triggering its degradation and the release of the NF-κB dimer. A further phosphorylation site targeted by IKK, and by several other kinases too, is Ser536 at p65/RelA (reviewed in Ref. [43]). The phosphorylation of Ser536 may promote the nuclear translocation of NF-κB independently of the regulation by IκBα [44]. To analyze these crucial NF-κB signaling components, dTHP−1 cells were treated with 19, stimulated by LPS, and the cell proteins were isolated and subjected to Western blot analysis. The results are shown in Figure 5. It turned out that 19 blocked neither the activation of IKKα/β nor the subsequent degradation of IκB (Figure 5A,C,D). Furthermore, the phosphorylation of p65 at Ser536 was not affected by 19 either (Figure 5A,B). Thus, it could be concluded that, unlike the established IKK inhibitors, 19 did not block the release of the NF-κB dimer. In contrast, the pleiotropic agent CAPE effected a slight reduction in the p65 phosphorylation (Figure 5A,B), suggesting a partial inhibition of the kinase phosphorylating p65-Ser536 in THP−1 cells.
Compound 19 Inhibits the Phosphorylation of p65-Ser468
So far, no evidence had emerged as to why NF-κB was released from the IκB complex but still retained in the cytosol. Hence, we analyzed another regulatory site in p65, Ser468, because there was evidence from the literature that this site is phosphorylated only after the release of p65/RelA [45]. In Jurkat T cells, the IKK-related kinase IKKε was identified as the Ser468 kinase, and the expression of a kinase-dead IKKε mutant entailed a strong reduction in the NF-κB translocation to the nucleus [45]. Indeed, the treatment of the dTHP−1 cells with 7.5 µM of 19 led to a significantly diminished phosphorylation of p65-Ser468 (Figure 4C,D), suggesting that this inhibitory effect could be related to the retention of the NF-κB dimer in the cytoplasm (see Figure 1 for illustration).
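Quantifications such as the normalized phospho-p65 and IκB levels referred to above are typically obtained by densitometry of Western blot bands, normalized to a loading control and expressed relative to the stimulated control (DMSO + LPS). The following sketch illustrates that kind of normalization only; all band intensities and group labels are hypothetical stand-ins, not the measurements behind Figures 4 and 5.

```python
import numpy as np

# Illustrative band intensities from three replicate blots (rows = replicates).
# Columns: DMSO, DMSO + LPS, compound 19 + LPS (hypothetical numbers).
phospho_p65 = np.array([[120.0, 540.0, 210.0],
                        [ 98.0, 500.0, 190.0],
                        [110.0, 560.0, 230.0]])
loading_ctrl = np.array([[300.0, 310.0, 295.0],
                         [290.0, 305.0, 300.0],
                         [310.0, 320.0, 290.0]])

# 1) Normalize each lane to its loading control.
norm = phospho_p65 / loading_ctrl
# 2) Express each replicate relative to its own DMSO + LPS lane (column index 1).
rel = norm / norm[:, [1]]

mean = rel.mean(axis=0)
sem = rel.std(axis=0, ddof=1) / np.sqrt(rel.shape[0])
for label, m, s in zip(["DMSO", "DMSO + LPS", "19 + LPS"], mean, sem):
    print(f"{label:>10}: {m:.2f} ± {s:.2f} (relative phospho-p65)")
```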
Compound 19 Has No Effect on the Kinases Involved in NF-κB Activation
Since we found that 19 led to a reduction in the Ser468 phosphorylation on p65/RelA, it was straightforward to explore whether 19 was directly inhibiting a kinase involved in NF-κB activation. This was a theoretical possibility, although our original dual EGFR/NF-κB inhibitor had not significantly affected any kinase other than EGFR in a panel of 106 kinases representing all the branches of the kinome [35]. While the EGFR kinase inhibitory activity was largely abolished here, it could not be excluded that our modifications may have generated affinity to another kinase. The most important kinases to be checked were those reported to phosphorylate p65-Ser468 in different cell types, comprising GSK3β, IKKβ, and IKKε [45][46][47]. In addition, all the kinases described in the literature to control the canonical and non-canonical pathways of NF-κB activation were also included in the screening panel. The screening revealed that 19 did not appreciably inhibit any of these kinases (Table S1). Hence, it was rather unlikely that a kinase was targeted by 19 directly, unless it was a kinase whose role in the NF-κB activation pathway is as yet unknown. Instead, the inhibitory effect of 19 on the Ser468 phosphorylation might be indirect, e.g., through interference with protein complexes regulating the Ser468 kinase activity.
Chemistry
Solvents and reagents were obtained from commercial suppliers and used as received. Melting points were determined on a Stuart SMP3 melting point apparatus. All final compounds had a percentage purity of at least 95%, as measured by HPLC. A SpectraSYSTEM (ThermoFisher Scientific, Waltham, MA, USA) or an Ultimate 3000 (ThermoFisher Scientific) LC system was used, each consisting of a pump, an autosampler, and a UV detector. Mass spectrometry was performed on an MSQ plus electrospray mass spectrometer (ThermoFisher, Dreieich, Germany).
An RP C18 column was used as the stationary phase. Three different methods were used, in which the solvent system consisted of water containing 0.1% TFA or FA (A) and 0.1% TFA or FA in acetonitrile (B). HPLC Method 1: flow rate 1 mL/min; the percentage of B started at 5%, was increased to 100% over 15 min, kept at 100% for 5 min, flushed back to 5% in 4 min and maintained for 1 min. Method 2: flow rate 0.9 mL/min; the percentage of B started at 5%, was increased to 100% over 10 min, kept at 100% for 1 min, and flushed back to 5% in 1 min. Method 3: flow rate 0.7 mL/min; the percentage of B started at 5% for 2 min, was increased to 98% over 6 min, kept at 98% for 2 min, and flushed back to 5% in 2 min. Chemical shifts were recorded as δ values in ppm and referenced against the residual solvent peak (DMSO-d6, δ = 2.50). Splitting patterns describe apparent multiplicities and are designated as s (singlet), brs (broad singlet), d (doublet), dd (doublet of doublets), t (triplet), q (quartet), m (multiplet). Coupling constants (J) are given in hertz (Hz).
General Synthetic Procedures and Experimental Details
Procedure A, for the synthesis of compounds b2-c2. 2-Amino-5-nitrobenzonitrile (5 g, 30.6 mmol) was held at reflux in triethyl orthoformate (50 mL) for 16 h in the presence of acetic anhydride (10 drops).
The reaction was then concentrated under vacuum, and the remaining residue was poured onto ice water, at which point a precipitate formed. The precipitate was filtered under vacuum and left to dry to provide compound a. Compound a (1.1 g, 5 mmol) was held at reflux for 1 h with the corresponding aniline derivative in 8 mL glacial acetic acid. A precipitate formed during the reaction, which was filtered while hot and then washed with Et2O to provide the corresponding nitroquinazoline derivatives (b1-c1). Subsequently, the desired nitroquinazoline derivative (b1-c1) (5 mmol) was mixed with stannous chloride (5.625 g, 25 mmol) in MeOH (20 mL), and the mixture was stirred at reflux for 30 min under a nitrogen atmosphere. Excess MeOH was removed under reduced pressure; the remaining residue was dissolved in EtOAc (200 mL) and made alkaline with an aqueous solution of NaHCO3. The resulting mixture was filtered under vacuum, followed by separation of the organic phase from the aqueous phase. The aqueous phase was extracted with EtOAc (2 × 20 mL), and the organic fractions were combined, dried over anhydrous MgSO4, and concentrated under reduced pressure to obtain the corresponding aminoquinazoline derivative (b2-c2). The selected aminoquinazoline derivative b2 (2 mmol) was added to water (20 mL), to which concentrated HCl (1 mL) was then added, and the mixture was stirred at 0 °C. Thiophosgene (0.253 g, 2.2 mmol) was then added dropwise in a well-ventilated hood to the stirred solution; stirring continued for 3 h, after which the formed precipitate was filtered and washed with Et2O to provide compound b3. Afterwards, a mixture of the isothiocyanate derivative (1 mmol) and the corresponding amine derivative (1 mmol) was stirred at room temperature for 5 h in DMF (10 mL). The solution was then poured onto ice water, at which point a precipitate formed that was then filtered. The solid was then purified by column chromatography to provide the final compounds.
Procedure C, for the synthesis of compounds 12-16.
Cell Viability
Cells were incubated with the WST-1 reagent (Thermo Fisher Scientific, Waltham, MA, USA) at 37 °C for 2 h. The cell viability was determined with a spectrophotometer at 450 nm.
ELISA Assay
The dTHP−1 cells were treated with the indicated compounds for 3 h and then activated with or without 100 ng/mL lipopolysaccharide (LPS; E. coli 0111:B4) for another 24 h. The levels of human IL-6 and TNFα in the supernatants were measured with the DuoSet ELISA Development System (R&D Systems, Minneapolis, MN, USA) according to the manufacturer's protocol.
Immunofluorescent Staining
Cells were fixed with 4% formaldehyde and then incubated with 5% goat serum. The intracellular location of NF-κB p65 was determined using primary antibodies against NF-κB p65 (Cell Signaling Technology, Beverly, MA, USA) and a FITC-conjugated secondary antibody. Nuclei were counterstained with Hoechst (1 µg/mL). Images were obtained by fluorescence microscopy (OLYMPUS IX 81; Olympus, Tokyo, Japan).
Conclusions
In this study, we demonstrated that our previous dual EGFR kinase/NF-κB inhibitors could be modified at the 4-aminophenyl and the thiourea function to retain only the NF-κB inhibitory activity. Several of these new inhibitors, including 19 and 20, were active in all the assays relying on NF-κB inhibition while exhibiting low to moderate cytotoxicity in the non-immune cell lines HEK293 and A549.
Our most potent compounds inhibited the production of IL-6 in the submicromolar range and, to a somewhat lower extent, also of TNFα, which are both key factors in often the same inflammatory disorders. IL-6 is a pleiotropic pro-inflammatory cytokine, which is implicated in the pathophysiology of numerous chronic inflammatory and auto-immune diseases, such as multiple sclerosis, rheumatoid arthritis, and inflammatory bowel and pulmonary diseases (see Ref. [48] for a recent review). Thus, IL-6 is a pivotal target for the development of therapeutics against complex inflammatory diseases. Monoclonal antibodies neutralizing IL-6 peptides are successfully used to treat various immunoinflammatory diseases; however, their high cost, invasive route of administration, and high rate of immunogenicity remain major limitations. Small molecules that cause a decrease in IL-6 production have been identified mostly by phenotypic screening; however, the mechanisms of action were not investigated, and many of these compounds may have pleiotropic effects [48]. TNFα, which is primarily produced by macrophages, is another pivotal inflammatory mediator in numerous chronic inflammatory and autoimmune diseases [49][50][51][52][53][54]. Similar to the targeting of IL-6, anti-TNFα mAbs are used to treat inflammatory conditions, such as rheumatoid arthritis, juvenile arthritis, inflammatory bowel diseases, and psoriasis (reviewed in Refs. [55,56]). In light of their joint pro-inflammatory activities, the reduction in both IL-6 and TNFα, along with other cytokines through the inhibition of the central transcription factor NF-κB, as demonstrated for our compounds, might be even more effective in the treatment of the chronic inflammatory diseases for which either IL-6-and TNFα-mAbs are currently used. In inflammatory bowel diseases, for instance, up to 40% of the patients do not respond to anti-TNFα treatments [13]. Moreover, while mAbs do not affect the cytokine-producing cells, several of our inhibitors, particularly 6 and 19, selectively induced cell death in the LPS-activated macrophages, significantly stronger than the reference compound, CAPE. This could be an interesting additional effect of the novel compounds, worthy of being explored in future studies as macrophage infiltrates are especially associated with tissue damage and inflammation in metabolic syndrome [57], inflammatory brain disorders, and autoimmune diseases [57,58]. In addition, macrophages are implicated in the destabilization of atherosclerotic plaques, leading to acute coronary syndromes and sudden death. Elimination of macrophages from plaques through pharmacological intervention may, therefore, represent a promising strategy to stabilize vulnerable, rupture-prone lesions [59]. Thus, in various conditions of chronic inflammation, it could be a therapeutic advantage to selectively induce cell death in activated TNFα-producing macrophages compared to a temporary reduction in or neutralization of their secreted cytokines. With respect to the mechanism of action, we found that 19 inhibited the p65 nuclear translocation while not affecting IκB phosphorylation and degradation. Further investigating the regulatory mechanisms that were targeted by the compounds in dTHP−1 cells, we could demonstrate that 19 caused a reduction in the Ser468 but not the Ser536 phosphorylation levels. 
Our results provided evidence for an independent regulation of these phosphorylation events, although activating stimuli, such as LPS and TNFα, induce both the Ser536 and Ser468 phosphorylation in different immunocompetent cell types [45,[60][61][62]. Since detailed knowledge on the regulation of Ser468 phosphorylation on p65/RelA is lacking, it was not possible in the frame of this study to conduct further research aiming at identifying the direct target protein of our compounds. However, 19 and other congeners from our study might be useful tools to analyze the role and regulation of Ser468 phosphorylation in future studies. In conclusion, our 4-aminophenyl quinazoline thiourea compounds represent a new class of inhibitors that display a combined mode of action on cytokine release and macrophage depletion. Thus, they may have potential for the treatment of various inflammatory diseases that are exacerbated by excess activated macrophages, such as arteriosclerosis and autoimmune diseases.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ph15070778/s1, Table S1: Selectivity profiling of compound 19 against the protein kinases known to regulate the NF-κB pathway. Figure S1: 1H-NMR and 13C-NMR spectra of newly synthesized compounds.
Conflicts of Interest: The authors declare no conflict of interest.
Effectiveness of Sentinel-2 in Multi-Temporal Post-Fire Monitoring When Compared with UAV Imagery Unmanned aerial vehicles (UAVs) have become popular in recent years and are now used in a wide variety of applications. This is the logical result of certain technological developments that occurred over the last two decades, allowing UAVs to be equipped with different types of sensors that can provide high-resolution data at relatively low prices. However, despite the success and extraordinary results achieved by the use of UAVs, traditional remote sensing platforms such as satellites continue to develop as well. Nowadays, satellites use sophisticated sensors providing data with increasingly improving spatial, temporal and radiometric resolutions. This is the case for the Sentinel-2 observation mission from the Copernicus Programme, which systematically acquires optical imagery at high spatial resolutions, with a revisiting period of five days. It therefore makes sense to think that, in some applications, satellite data may be used instead of UAV data, with all the associated benefits (extended coverage without the need to visit the area). In this study, Sentinel-2 time series data performances were evaluated in comparison with high-resolution UAV-based data, in an area affected by a fire, in 2017. Given the 10-m resolution of Sentinel-2 images, different spatial resolutions of the UAV-based data (0.25, 5 and 10 m) were used and compared to determine their similarities. The achieved results demonstrate the effectiveness of satellite data for post-fire monitoring, even at a local scale, as more cost-effective than UAV data. The Sentinel-2 results present a similar behavior to the UAV-based data for assessing burned areas. Introduction In recent years, forest fires (i.e., large and destructive fires that spread over a forest or area of woodland) have received increasing attention due to their effects on climate change and ecosystems. Forest fires occur regularly, vary in scale and impacts and are inherent to terrestrial ecosystems [1]. Weather, topography and fuel are the three major components that define the fire environment and are directly related with the evolution of land use [2]. Portugal is one of the southern European countries most affected by forest fires, but it is also affected by rural fires [3]. In other words, not only do fires over forests affect the country, but the combination of environmental factors and human settlement may also cause harm to people or damage property or the environment [4]. Several factors contribute to the country being so severely affected: the Mediterranean climate, which benefits fuel accumulation and dryness along with the existence of flammable vegetation types; high ignition density; poor fire-suppression capabilities; and institutional instability [5]. Thus, forest fire impacts are attracting more and more attention not only from the scientific community, but also from public entities worldwide [5]. In the Portuguese case, this awareness is increasing, especially in the north and in the center of the country [6]. In this context, remote sensing platforms are being used as a capable tool for mapping burned areas, evaluating the characteristics of active fires and characterizing post-fire ecological effects and regeneration [7]. 
In the past decade, the use of unmanned aerial vehicles (UAVs) has increased for agroforestry applications [8] and are now being used for forest fire prevention [9], canopy fuel estimation [10], fire monitoring [11,12] and to support firefighting operations [13]. Likewise, studies using UAV-based imagery in post-fire monitoring have been concerned with surveying [14], calibrating satellite-based burn severity indices [15], assessing post-fire vegetation recovery [16], mapping fire severity [17,18], studying forest recovery dynamics [19] and sapling identification [20]. Despite being a cost-effective and a very versatile platform for remote sensed data acquisition that is capable of carrying a wide set of sensors, its usage in surveying big areas can be constrained due to legal [21] and technological limitations such as its autonomy and payload capacity [8]. On the other hand, traditional remote sensing platforms such as satellites continue to be widely used to obtain data with increasingly improved spatial, temporal and radiometric resolutions. Satellites still offer a quick way to evaluate forest regeneration in post-fire areas. However, lower spatial resolutions (compared with UAV data) often mean that satellites are used for studies only at regional or national scales [22][23][24][25][26]. The Copernicus Programme, from the European Union's Earth Observation Programme, was created with the goal to achieve a global, continuous, autonomous, high-quality, wide-range Earth observation capacity. The different satellite missions belonging to this program make it possible to obtain accurate, timely and easily accessible information to improve the management of the environment, as well as to understand and mitigate the effects of climate change and ensure civil security. Therefore, access to medium-and high-resolution satellite data with a high temporal resolution are accessible for free [27], namely, the Sentinel-2 Multispectral Instrument (MSI) [28]. A wide range of spectral bands are available from visible to shortwave infrared (SWIR) which allows, in a post-fire monitoring context, severity determination of fire disturbances along with multi-temporal monitoring for burnt areas. This type of data is ideal for monitoring fire disturbances in Mediterranean regions that affect several crops and have extents ranging from some hectares to several square kilometers [29]. In this specific context, Sentinel-2 MSI data were used for exploring spectral indices of burn severity discrimination [30][31][32][33][34], as well as to assess burn severity in combination with Landsat data [35,36]. They were also used to take into account the available multi-temporal data in order to evaluate burned areas at a national level [37] and to assess post-fire vegetation recovery mapping of an island [38]. In this study, we evaluated an area that was severely affected by a fire disturbance in 2017 with an estimated extent greater than 300 ha. The area is located in north-eastern Portugal, and forested areas composed of maritime pine (Pinus pinaster) were significantly affected along with houses, wood storage buildings, agricultural structures and vehicles. This was therefore a fire that could be considered small, its analysis and monitoring could be possible to carry out using aerial high-resolution data acquired by a UAV. Every year, thousands of fires similar to this occur in Portugal, covering the north and center of the country in particular with small patches of burnt areas. 
To assess the effectiveness of satellite data in studying this specific type of area, Sentinel-2 MSI data were used to characterize the area before the fire disturbance, allowing an assessment of the fire's severity and a multi-temporal analysis (2017-2019) to be performed. Moreover, to compare with the spatial information provided by the Sentinel-2 MSI (10-m spatial resolution), a UAV flight campaign was carried out in part of the study area to acquire multispectral data with a very high resolution. This is precisely the central question of this study: what is the potential of new-generation free-access satellite images (Sentinel-2) for monitoring small-scale burnt areas? To the best of our knowledge, this is the first study that uses freely available satellite data to analyze a burnt area of relatively small dimensions and concludes that the results were in line with those obtained from high-resolution data acquired by a UAV. Although more studies are needed that cover different areas with different complexities and different vegetation covers, this study allowed us to conclude that satellite data have great potential, in certain cases, to replace high-resolution aerial data acquired by UAVs. This would allow analyzing post-fire areas (even small ones) at the national level, representing considerable savings in time and money.
Study Area
The study area, highlighted in Figure 1, is located in the north-eastern region of Portugal within the municipality of Sabrosa (41°20'40.4"N, 7°36'04.5"W), near the villages of Parada do Pinhão and Vilarinho de Parada. This area was severely affected by a wildfire that began at 12:59 p.m. on 13 August 2017 and was reported as extinguished at 03:16 a.m. on 14 August 2017 [39]. The area is characterized by a warm and temperate climate, an average annual temperature of 13.1 °C and an annual precipitation averaging 1139 mm. July and August are the months with the highest mean temperatures (21 °C) and the lowest precipitation (28 mm). This area was selected due to its easy accessibility and representativeness, since the species in the area are common for the region. It is mostly populated by maritime pine, deciduous species such as Quercus pyrenaica and Castanea sativa Mill. and some riparian species, shrubland communities and parcels used for agriculture and silviculture purposes.
Moreover, the burned area was greater than 100 ha, which fits the majority of the fire events (93%) that occurred in Portugal during 2017 [6].
Remote Sensing Dataset
The satellite imagery data used in this study were acquired by the Sentinel-2 MSI. Spectral data products provided by the MSI ranged from the visible to the shortwave infrared (SWIR) parts of the electromagnetic spectrum. In total, there were 13 available spectral bands (B) at different spatial resolutions: (1) at 10 m-B2 (490 nm), B3 (560 nm), B4 (665 nm) and B8 (842 nm); (2) at 20 m-B5 (705 nm), B6 (740 nm), B7 (783 nm), B8a (865 nm), B11 (1610 nm) and B12 (2190 nm); and (3) at 60 m-B1 (443 nm), B9 (940 nm) and B10 (1375 nm) [28]. Data were obtained from the Copernicus Open Access Hub, selecting acquisitions with an absence of clouds over the study area from June 2017 to October 2019. These epochs were selected because they relate to the last available period before the fire disturbance (June, July and August 2017) and include the first cloud- and smoke-free post-fire data (September 2017). The imagery was atmospherically corrected using Sen2Cor [40].
Regarding the UAV data, the senseFly eBee (senseFly SA, Lausanne, Switzerland) was used to acquire both RGB and multispectral imagery. A Canon IXUS 127 HS sensor with 16.1 MP resolution was used for RGB imagery acquisition, and the Parrot SEQUOIA sensor was used for multispectral imagery acquisition. The multispectral sensor comprised a four-camera array with 1.2 MP resolution acquiring green (530-570 nm), red (640-680 nm), red edge (730-740 nm) and near infrared (NIR) (770-810 nm) imagery. Its radiometric calibration was performed using a calibration target prior to the flight. Two flights with the same mission plan (one per sensor) were performed on 11 July 2019. The RGB flight was performed at a 425-m height, covering an area of 230 ha, with a spatial resolution of 0.12 m. The imagery overlap was 80% front and 60% side, for a total acquisition of 91 georeferenced images (related to a ground system of geographic coordinates) distributed through eight strips (approximately 11 images per strip). As for the multispectral flight, it was carried out at a 215-m height, covering approximately 150 ha, with a spatial resolution of approximately 0.25 m; it had an 80% front overlap and 60% side overlap, for a total acquisition of 260 images per spectral band (12 strips with approximately 22 images per strip). A pre-processing of the UAV-based imagery is required before it is ready for use.
Thus, Pix4Dmapper Pro version 4.4.12 (Pix4D SA, Lausanne, Switzerland) was used for the photogrammetric processing of the UAV imagery, and common tie points were identified in the provided imagery according to their geolocation and internal and external camera parameters. This enabled the computation of dense 3D point clouds that were further interpolated using inverse distance weighting (IDW) to obtain the following orthorectified outcomes: an orthophoto mosaic from the RGB imagery, digital elevation models (DEMs) and four radiometric bands from the multispectral imagery that could then be used for the computation of vegetation indices. DEMs were not used in the scope of this study, and the orthophoto mosaic was used for visual inspection only.
Data Processing and Analysis
Both satellite and UAV multispectral datasets were used to compute vegetation indices. Sentinel-2-based vegetation indices were used to assess the fire severity and to perform the post-fire multi-temporal analysis of the study area. Similar vegetation indices were computed using UAV data for a single epoch, allowing a comparison of both sets of results.
Computation of Spectral Indices
The satellite data were used to compute the normalized burn ratio (NBR) [41], defined as NBR = (NIR − SWIR)/(NIR + SWIR) (Equation (1)). This index relates to vegetation moisture content by combining the NIR (B8) and SWIR (B12) parts of the electromagnetic spectrum [42], and is generally accepted as a standard spectral index to assess burn severity [41,43]. Moreover, the normalized difference vegetation index (NDVI) [44], NDVI = (NIR − RED)/(NIR + RED) (Equation (2)), was calculated using the NIR band (B8) and the RED band (B4) from the Sentinel-2 MSI data. The NIR and RED bands from the UAV-based multispectral data were also used to compute the equivalent index. NDVI is widely used to analyze the vegetation condition in different contexts [8].
Post-Fire Multi-Temporal Analysis
The multi-temporal analysis performed in this study relied on the time series data provided by the Sentinel-2 MSI. From the available data, a set of four epochs was selected for each year (2017 to 2019), each corresponding to the months of June, July, August and September, with the dates of the selected data presented in Table 1. This period was selected in order to include data prior to the fire disturbance (June, July and August 2017), along with the same months in the following available years (2018 and 2019). Some data outside these periods were affected by clouds and had to be discarded. Moreover, it was decided not to consider any data from October to May in order to avoid false assumptions arising from the natural seasonal behavior of the species in the study area (e.g., the absence of leaves in deciduous tree species in the winter time, and the potential interference of undergrowth vegetation in winter and spring time). The selected months assured that the trees were fully developed and that undergrowth vegetation interference was minimal [45].
Table 1. Days corresponding to the Sentinel-2 data selected for multi-temporal analysis. June, July and August 2017 correspond to data before the fire disturbance.
The difference normalized burn ratio (dNBR), calculated by subtracting the post-fire NBR from the pre-fire NBR (dNBR = NBR pre-fire − NBR post-fire, Equation (3)), was used to perform the burn severity level classification as proposed by the United States Geological Survey (USGS) [46,47], enabling an understanding not only of the severity of the burned areas, but also of the unburned areas within the study region.
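A minimal sketch of how the spectral indices defined above (Equations (1)-(3)) can be computed from Sentinel-2 reflectance rasters is given below, using rasterio and NumPy. The file names are placeholders, and the 20-m SWIR band (B12) is assumed to have been resampled to 10 m beforehand so that it aligns with B8; this is an assumption for illustration, not a description of the exact workflow used in the study.

```python
import numpy as np
import rasterio

def read_band(path):
    """Read a single-band Sentinel-2 reflectance raster as float32."""
    with rasterio.open(path) as src:
        return src.read(1).astype("float32")

def normalized_difference(a, b):
    """Generic (a - b) / (a + b), protected against division by zero."""
    denom = a + b
    out = np.zeros_like(denom)
    np.divide(a - b, denom, out=out, where=denom != 0)
    return out

# Placeholder file names; B12 is assumed already resampled to 10 m.
nir_pre,  swir_pre  = read_band("B08_prefire.tif"),  read_band("B12_prefire_10m.tif")
nir_post, swir_post = read_band("B08_postfire.tif"), read_band("B12_postfire_10m.tif")
red_post = read_band("B04_postfire.tif")

nbr_pre   = normalized_difference(nir_pre,  swir_pre)    # Equation (1), pre-fire
nbr_post  = normalized_difference(nir_post, swir_post)   # Equation (1), post-fire
dnbr      = nbr_pre - nbr_post                            # Equation (3)
ndvi_post = normalized_difference(nir_post, red_post)     # Equation (2)
```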
Pre- and post-fire NBRs were the NBR of a date before and after the fire disturbance, respectively. In burned areas, the NBR showed higher values before the fire and lower values after the fire. The dNBR was the difference between the NBRs of both epochs: positive values represented areas with a higher fire severity, while values close to or lower than zero represented unburned areas and/or vegetation regrowth. For each classified severity level, the mean NDVI value was calculated per analyzed month. The mean NDVI value was also estimated for the whole burned area. To evaluate the post-fire recovery, a similar analysis was performed using the difference NDVI (dNDVI), obtained by subtracting the NDVI of the first post-fire epoch (September 2017) from the NDVI values of each analyzed month in 2018 and 2019. This way, positive values represented an increase in the NDVI and, consequently, a potential recovery zone, while the inverse was true for values close to or less than zero. The data analysis was carried out in the open-source geographical information system (GIS) QGIS (version 3.4.12-Madeira), and functions from the Geographic Resources Analysis Support System (GRASS GIS) [48] and from the System for Automated Geoscientific Analyses (SAGA GIS) [49] were also used.
Sentinel-2 MSI and UAV Comparison
The Sentinel-2 MSI data acquired on 9 July 2019 were compared to the UAV-based multispectral imagery (two days difference). The NDVI maps produced from both datasets were compared. The UAV-based NDVI at its original spatial resolution (0.25 m), its resampling to half that resolution and its resampling to the same resolution as the Sentinel-2 MSI (5 and 10 m, respectively) were used for this comparison. A total of 116 ha (~35%) of the burned area (Figure 1) was evaluated. This is precisely the most complex area, containing a greater variety of tree species, agricultural fields and infrastructure. The resampling of the UAV NDVI was performed using the "r.resamp.stats" function from GRASS GIS in QGIS, by specifying the grid cell sizes (5 × 5 m and 10 × 10 m) and assigning the aggregated mean values to each cell. The correlation among the different NDVIs (UAV-based and satellite-based) was computed using the "r.covar" function from GRASS GIS. Moreover, the geospatial variability of the Sentinel-2 NDVI was compared with the UAV NDVI at the three different spatial resolutions. The mean values of each evaluated NDVI were quantified in a 50 × 50 m grid. The size of this grid, representing five times the Sentinel-2 resolution, was selected to smooth the transition zones of vegetation cover. Then, the local bivariate Moran's index (MI) [50] and the bivariate local indicators of spatial association (BILISA) [51] were applied as in Anselin [52] to assess the spatial relationship between the NDVIs computed from both datasets. The local bivariate MI was used to assess the correlation between a defined variable (satellite NDVI) and a different variable in the nearby areas (UAV NDVI). BILISA was used to measure the local spatial correlation, forming maps of clusters with similar behaviors and enabling an assessment of their spatial variability and dispersion. These cluster maps were divided into four classes based on the correlation of a value with its neighborhood: high-high (HH); low-low (LL); high-low (HL); and low-high (LH). This analysis was made using GeoDa software (version 1.14.0) [53]. The required weights map was defined using an eight-connectivity approach (3 × 3 matrix), and 999 random permutations were used in the BILISA execution.
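The aggregation performed with "r.resamp.stats" (block mean to 5 m and 10 m cells) and the correlation obtained with "r.covar" can be reproduced in plain NumPy, as sketched below. The UAV NDVI tile is simulated and the block factors are derived from the 0.25-m resolution stated above; this is an illustrative sketch, not the processing chain actually used in the study.

```python
import numpy as np

def block_mean(arr, factor):
    """Aggregate a 2-D array by averaging non-overlapping factor x factor blocks
    (similar in spirit to GRASS r.resamp.stats with the mean aggregate)."""
    h, w = arr.shape
    h2, w2 = h - h % factor, w - w % factor          # trim edges that do not fill a block
    trimmed = arr[:h2, :w2]
    return trimmed.reshape(h2 // factor, factor, w2 // factor, factor).mean(axis=(1, 3))

# Hypothetical 0.25 m UAV NDVI tile (values for illustration only).
rng = np.random.default_rng(0)
ndvi_uav_025 = rng.uniform(0.05, 0.6, size=(400, 400)).astype("float32")

ndvi_uav_5m  = block_mean(ndvi_uav_025, 20)   # 0.25 m * 20 = 5 m cells
ndvi_uav_10m = block_mean(ndvi_uav_025, 40)   # 0.25 m * 40 = 10 m cells

# Correlation between the 10 m UAV NDVI and a co-registered Sentinel-2 NDVI
# (simulated here), comparable to the output of r.covar.
ndvi_s2_10m = ndvi_uav_10m + rng.normal(0.0, 0.05, ndvi_uav_10m.shape)
corr = np.corrcoef(ndvi_uav_10m.ravel(), ndvi_s2_10m.ravel())[0, 1]
print(f"Pearson correlation (UAV 10 m vs. Sentinel-2): {corr:.3f}")
```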
Sentinel-2 Post-Fire Monitoring
The fire severity map, calculated using the dNBR from the pre-fire NBR (August 2017) and the first post-fire NBR (September 2017), is presented in Figure 2. From the 361 ha representing the study area, 42% (151 ha) presented a high severity, 44% (160 ha) showed a moderate severity and 38 ha (11%) presented a low severity. Only 3% of the area (12 ha) was estimated not to have been affected by the fire disturbance. A visual inspection of these areas allowed us to conclude that unburned and low-severity areas represented infrastructures, or corresponded to bare soil or fields used for agriculture along with some tree stands. Moderate-severity areas included shrubland communities, agricultural terrains and trees, while high-severity areas mostly included high-density forest stands.
The Sentinel-2 multi-temporal data enabled us to characterize the study area throughout the analyzed period. Figure 3 presents the pre- and post-fire NDVI (August and September 2017, Figure 3a,b) and the NDVI from September of the two subsequent years (2018 and 2019, Figure 3c,d). The fire disturbance is clearly observable from the NDVI data and some forestry recovery is noticeable in the north, north-eastern and south-western parts of the study area. This is especially distinguishable in 2019 (Figure 3g).
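The severity map in Figure 2 is obtained by thresholding the dNBR raster into classes. The sketch below uses the commonly cited USGS dNBR break points, with the two moderate sub-classes merged into a single "moderate" class to match the four classes reported here; the exact thresholds applied by the authors are not stated in the text, so these values are assumptions.

```python
import numpy as np

# Commonly cited USGS dNBR break points (unscaled dNBR); assumed here, the
# paper's exact class boundaries may differ.
CLASS_NAMES = ["unburned", "low", "moderate", "high"]
BREAKS = [0.10, 0.27, 0.66]   # unburned < 0.10 <= low < 0.27 <= moderate < 0.66 <= high

def classify_dnbr(dnbr):
    """Return an integer class raster: 0 unburned, 1 low, 2 moderate, 3 high severity."""
    return np.digitize(dnbr, BREAKS)

def class_areas_ha(classes, pixel_area_m2=100.0):
    """Area per class in hectares (Sentinel-2 10 m pixels cover 100 m^2 each)."""
    counts = np.bincount(classes.ravel(), minlength=len(CLASS_NAMES))
    return {name: c * pixel_area_m2 / 10_000.0 for name, c in zip(CLASS_NAMES, counts)}

# Example with the dnbr raster from the earlier sketch:
# areas = class_areas_ha(classify_dnbr(dnbr))
# for name, ha in areas.items():
#     print(f"{name:>9}: {ha:8.1f} ha")
```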
The mean NDVI value was extracted for each severity level and unburned area for the months of June, July, August and September during 2017-2019, as well as for the whole area affected by the fire. Figure 4 presents these results. When analyzing the values obtained from the whole area (Figure 4a), the decline of NDVI values (−56%) after the fire disturbance (August to September 2017) is clearly noticeable. From September 2017 to June 2018, a growth of 52% in the mean NDVI value was verified, while in the homologous period in 2019 the growth was 33%. When separately analyzing each year, the values declined each month, with less noticeable results from August to September.
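The per-class statistics behind Figure 4 amount to a zonal mean of the NDVI over the pixels of each severity class, repeated for every acquisition date. A minimal sketch of that computation is shown below; the dictionary of NDVI rasters and the class raster refer back to the arrays of the previous sketches and are assumptions for illustration.

```python
import numpy as np

def mean_ndvi_per_class(ndvi_by_date, severity_classes, n_classes=4):
    """Mean NDVI per burn-severity class for each acquisition date.

    ndvi_by_date: dict mapping a date label to a 2-D NDVI array aligned with
    severity_classes (the integer class raster from the dNBR sketch above)."""
    table = {}
    for date, ndvi in ndvi_by_date.items():
        table[date] = [float(ndvi[severity_classes == c].mean())
                       for c in range(n_classes)]
    return table

# Example usage (arrays assumed to exist from the previous sketches):
# series = mean_ndvi_per_class({"2017-09": ndvi_post}, classify_dnbr(dnbr))
# for date, means in series.items():
#     print(date, [f"{m:.2f}" for m in means])
```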
Comparing June 2018 to June 2019, the mean NDVI values for the high-, moderate- and low-severity areas and unburned areas presented variations of 32%, 16%, 1% and −2%, respectively. When analyzing the post-fire dNDVIs (Figure 5), relating the differences to the first post-fire data (September 2017), a similar trend was observed. By analyzing the mean differences per year, an overall mean difference of 0.12 was verified in 2018, while in 2019 this difference was 0.20. In both years the same trend was verified, with the higher differences observed in areas with high severity, followed by moderate-severity areas. Both unburned and low-severity areas presented lower differences, with a mean difference of 0.06 for the two classes in 2018, an increase to 0.08 in 2019 for the low-severity areas and the same value maintained for the unburned area. The values declined from June to August and remained similar in September. When comparing July 2018 to July 2019, an overall increase of 0.09 was verified in the mean dNDVI values, representing increases of 0.14, 0.07, 0 and −0.01 for the high-, moderate- and low-severity and unburned areas, respectively. A visual representation of the pre- and post-fire dNDVIs for the two subsequent years is presented in Figure 3e-g.
Comparison of UAV-Based and Sentinel-2 MSI Data
As mentioned in Section 2.3.3, the UAV-based multispectral data covered 116 ha of the study area. This was used to perform a comparison between the Sentinel-2 NDVI and the UAV-based NDVI at different spatial resolutions (Figure 6). The statistics of the different spatial resolutions of the UAV NDVI (Table 2) were similar in their mean values, while the minimum, maximum and standard deviation values tended to be greater for higher spatial resolutions. In regard to the NDVI computed from the Sentinel-2 dataset, a small difference was verified for the mean value, while the minimum, maximum and standard deviation values were similar to the UAV NDVI at a 10-m spatial resolution (Figure 6c).
The confusion matrix presented in Table 3 shows the correlation between all the NDVIs. All resolutions of the UAV-based NDVI showed a good correlation with the Sentinel-2 NDVI, which increased as the spatial resolution became closer to the satellite resolution.
Table 3. Correlation matrix between the normalized difference vegetation index of the different UAV-based spatial resolutions and the Sentinel-2.
Geospatial correlation was conducted using a 50 × 50 m grid, resulting in a total of 479 cells. The mean value of the satellite NDVI was compared with each UAV resolution, and the results are presented in Figure 7. The MI value for all approaches was 0.634. Generally, all approaches presented a similar behavior in the BILISA relationships, where 59% of the cells presented a p-value lower than 0.05: 81% of the cells presented an HH or LL correlation (39.3% and 41.4%, respectively), 11% presented an LH correlation and only 8% presented an HL correlation.
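The bivariate Moran's I and BILISA analysis described above was run in GeoDa; an equivalent open-source route is PySAL (libpysal for the lattice weights, esda for the bivariate Moran statistics). The sketch below mirrors the settings reported in the text (queen/8-neighbour weights on the 50-m grid, 999 permutations, p < 0.05), but the grid values are simulated and the exact GeoDa configuration may differ.

```python
import numpy as np
from libpysal.weights import lat2W
from esda.moran import Moran_BV, Moran_Local_BV

# Illustrative 50 m grid of mean NDVI values (rows x cols), one layer per source.
rng = np.random.default_rng(1)
nrows, ncols = 20, 24
s2_grid  = rng.uniform(0.1, 0.6, size=(nrows, ncols))
uav_grid = s2_grid + rng.normal(0.0, 0.05, size=(nrows, ncols))

# Queen (8-neighbour) contiguity on the regular grid, matching the 3 x 3 weights
# described in the text.
w = lat2W(nrows, ncols, rook=False)

x = s2_grid.ravel()    # Sentinel-2 NDVI (focal variable)
y = uav_grid.ravel()   # UAV NDVI (neighbouring variable)

global_bv = Moran_BV(x, y, w, permutations=999)
local_bv = Moran_Local_BV(x, y, w, permutations=999)

print(f"Bivariate Moran's I: {global_bv.I:.3f} (pseudo p = {global_bv.p_sim:.3f})")
# Quadrant codes follow PySAL's convention: 1 = HH, 2 = LH, 3 = LL, 4 = HL.
significant = local_bv.p_sim < 0.05
for code, label in zip((1, 2, 3, 4), ("HH", "LH", "LL", "HL")):
    share = np.mean(local_bv.q[significant] == code) * 100.0
    print(f"{label}: {share:.1f}% of significant cells")
```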
Discussion

This study evaluates the usage of free-access multi-temporal Sentinel-2 data to perform post-fire monitoring over an area of 361 ha in north-eastern Portugal. The dNBR (Figure 2) was used to assess fire severity, which enabled estimation and delineation of the area affected per severity level. Both the high- and moderate-severity classes represented the majority of the burned area (a total of 86%, corresponding to 311 ha), demonstrating a high incidence of fire disturbance in the forest stands present in the area. Moreover, both classes also presented the lowest post-fire NDVI values (Figure 4, September 2017). The same trend has been verified by other studies, noting that values decrease as fire severity rises [32]. On the other hand, unburned and low-severity areas were mostly located on the perimeter of the fire disturbance. These areas had easier access, due to the existence of roads, and received priority protection from the authorities due to their proximity to settlements and infrastructures. These results are corroborated by the mean NDVI values of the multi-temporal analysis (Figure 4), which show values similar to the pre-fire data in the low-severity and unburned areas, along with lower NDVI differences after the fire event (Figure 5). An example of a riparian stand that resisted the fire disturbance is shown in Figure 8. On the other hand, areas classified with a high or moderate fire severity presented higher differences in the NDVI values during the analyzed period. This can be justified by the resprouting of some species and by the regeneration of others, as is the case with maritime pine, which has physical characteristics that allow its survival (thick bark and reproduction strategies) [54]. Moreover, the trend of NDVI values declining over the months can be justified by the presence of some undergrowth cover that dries out due to the absence of precipitation and the increase in air temperature [55].

The UAV-based multispectral imagery acquired in the 116 ha of the study area showed similar results when compared to the Sentinel-2 data. These findings have already been verified for WorldView-2 1-m spatial resolution data [14], but never for Sentinel-2. In fact, for this type of application, Sentinel-2 proved to be a more cost-effective approach that was able to cover wider areas, providing a short revisit time (five days) and delivering a wider spectral range. UAV-based multispectral data acquisition, on the other hand, can provide similar or higher temporal resolutions, but in a more time-consuming and expensive way, with costs increasing significantly for bigger areas [56]. This is an issue, since at least two human resources are needed, who will make multiple trips and spend several days of work in order to meet a similar revisit time [57]. Furthermore, multiple batteries are needed to cover a considerable area. Fernández-Guisuraga et al. [14] used the Parrot SEQUOIA for UAV-based data acquisition during the post-fire monitoring of a 3000-ha area and faced several issues in the process. The overall procedure was time-consuming and computationally demanding, with data acquisition taking two months to conduct (resulting in a total of 100 h) and further data processing taking approximately 320 h.
Some of these data then had to be discarded due to sensor malfunctions during the flights, in addition to radiometric anomalies found in the acquired images and further data storage problems. The experiment carried out by Fernández-Guisuraga et al. [14] demonstrated the suitability of UAV-based multispectral imagery when more information on the spatial variability of heterogeneous burned areas is needed. Other authors have explored fire severity measurement using UAV-based RGB imagery [17], but some limitations that directly impact its accuracy have been found, such as the influence of canopy shadows, photogrammetric errors in canopy modelling and inconsistent illumination across the imagery. However, all remaining applications in terms of fire monitoring can be accomplished using satellite imagery, including that provided by Sentinel-2 MSI. Despite the great effectiveness of satellite data for post-fire monitoring at a local/regional scale, some applications may require a significantly higher spatial resolution, making UAVs necessary, as is the case in individual tree monitoring [58], which cannot be conducted with satellite data at a decameter resolution, or in real-time fire monitoring applications [12]. Thus, the complementarity of the two types of data is proven.
Conclusions

In this article, the potential of the use of satellite optical time-series images from the ESA Copernicus Programme was addressed for monitoring relatively small areas affected by forest fires. In areas with sizes up to the one presented in this study (~400 ha), the use of small and very flexible UAVs for the analysis of post-fire vegetation recovery would be perfectly possible. However, the use of UAVs would result in more laborious and expensive tasks, requiring several visits to the field. Thus, in this study, Sentinel-2 MSI data were used to compute NBRs before and after the fire disturbance in order to measure its extent and severity using the difference NBR (dNBR). Subsequently, the NDVI was also calculated to assess forestry recovery in the study region from 2017 to 2019. The NDVI from the Sentinel-2 MSI data was compared with UAV-based high-resolution data at different spatial resolutions (0.25, 5 and 10 m) to assess their similarities. The results demonstrated the effectiveness of satellite data for post-fire monitoring, even at a local scale. The Sentinel-2 MSI data presented a similar behavior to the UAV-based data in assessing burned areas. The correlation matrix, calculated for Sentinel-2 and UAV, showed high correlations between all NDVIs (i.e., 0.83, 0.90 and 0.93 for the 0.25, 5 and 10 m spatial resolutions, respectively). Furthermore, the median and extreme values were very similar, differing by no more than 0.02 for the mean, 0.04 for the minimum and 0.01 for the maximum. Thus, the availability of multi-temporal Sentinel-2 MSI data with frequent revisit times enables the severity of fire disturbances to be identified and, in a post-fire context, the recovery of forests to be monitored and their evolution observed relative to the pre-fire vegetation status. In this way, Sentinel-2 data can be used to automatically monitor burned areas.
However, this approach should be evaluated in other areas with different fire extensions and vegetative covers, as well as in broader post-fire periods.
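As a closing illustration of the workflow summarized above, the sketch below computes NBR from near-infrared and shortwave-infrared reflectance, differences the pre- and post-fire results into a dNBR, and classifies severity into four levels. The threshold values and arrays are illustrative placeholders only (commonly cited dNBR breakpoints vary by study) and are not taken from this paper.

```python
import numpy as np

def nbr(nir: np.ndarray, swir: np.ndarray) -> np.ndarray:
    """Normalized Burn Ratio: NBR = (NIR - SWIR) / (NIR + SWIR), divide-by-zero guarded."""
    denom = nir + swir
    return np.where(denom != 0, (nir - swir) / denom, np.nan)

def classify_dnbr(dnbr: np.ndarray) -> np.ndarray:
    """Map dNBR = NBR_pre - NBR_post to 0=unburned, 1=low, 2=moderate, 3=high severity.

    The breakpoints (0.10, 0.27, 0.44) are illustrative examples, not the study's values.
    """
    classes = np.zeros(dnbr.shape, dtype=np.uint8)
    classes[dnbr >= 0.10] = 1
    classes[dnbr >= 0.27] = 2
    classes[dnbr >= 0.44] = 3
    return classes

# Synthetic pre- and post-fire reflectance grids standing in for Sentinel-2 bands.
rng = np.random.default_rng(3)
nir_pre, swir_pre = rng.uniform(0.3, 0.5, (200, 200)), rng.uniform(0.1, 0.2, (200, 200))
nir_post, swir_post = rng.uniform(0.1, 0.4, (200, 200)), rng.uniform(0.15, 0.35, (200, 200))

dnbr = nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)
severity = classify_dnbr(dnbr)
area_ha = {c: float((severity == c).sum()) * 0.01 for c in range(4)}  # 10 m pixels -> 0.01 ha each
print(area_ha)
```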
BETWEEN JUGGLING, STUNTS, AND ANTICS: THE MEANING OF WORK FOR CIRCUS ARTISTS

ABSTRACT

Purpose: This study aimed to investigate the meanings of the work of circus artists in three dimensions, individual, organizational, and social, following the adaptation to the model of Morin (2001) proposed by Oliveira, Piccinini, Fontoura, and Schweig (2004) and Morin, Tonelli, and Pliopas (2007).

Originality/value: The article stands out because it deals with a category of artists that is losing space within society, being increasingly marginalized. Given this, it becomes relevant to understand the meaning of an activity that is in decline.

Design/methodology/approach: This is a descriptive and exploratory study of a qualitative nature, in which primary data were collected through semi-structured interviews with ten circus artists. For data analysis, the content analysis technique was applied using the Atlas.ti software.

Findings: The results point out that there is meaning in work for the circus artists across all dimensions: 1. in the individual dimension, pleasure was the predominant factor, since in more than one moment all the interviewees expressed their satisfaction in belonging to the circus; 2. in the organizational dimension, utility prevailed, since everyone considered that they contribute to and meet the needs of the circus; 3. in the social dimension, interpersonal relations stood out, demonstrating that even with evidence of prejudice, external relationships can be formed in the circus. The results contribute to the literature that involves studies with circus artists, considering their meanings of work.

INTRODUCTION

The contemporary circus has a rich plurality of possibilities of production, socialization, and work organization, with unique and dynamic characteristics (Miranda & Bortoleto, 2018). The art of circus is propagated through the body, revealing emotions, dialogues, and tensions, and presenting a ritual, repetitive, and sometimes dangerous process, in which there is the possibility of failure, with a performance being considered a failure when it is not completed satisfactorily (Duprat & Bortoleto, 2016).

Considered a declining cultural activity, the presence of circus presentations in large Brazilian cities is increasingly rare, given the lack of investment and support for the circus; in this context, many artists are motivated and are resisting this problem. Also, there are several social projects that are trying to reverse the current picture, such as circus schools. In an attempt to fill the financial gap in cultural activities, in 1991 the Rouanet Law was promulgated (Brasil, 1991), which created the National Support Program for Culture (PRONAC), with the objective of supporting and directing resources for investments in cultural projects. However, although the circus is related to cultural and artistic activity, there are restrictions on funding in this area, which is destined for few.

This research is justified because there has been little work on circus activities, especially in Brazil, just as the meaning of work is also little explored. There have been recent advances with the research of Morin (2001), whose database was made public by the Meaning of Work International Research Team (MOW) of the 1980s, and subsequently by Oliveira et al.
(2004), Brun and Monteiro (2010), Bendassolli, Borges-Andrade, Alves, and Torres (2015), Boas and Morin (2016), and Rodrigues, Barrichello, and Morin (2016). Bendassolli and Borges-Andrade (2015) have confirmed the paradoxical effect of the dynamics of artistic work, in which uncertainty seems to have a motivational effect on artists, which highlights the importance of investigating artists' senses of work, for example, in circuses.

It is pertinent to understand what drives individuals to work, their experiences, and the environment, in order to understand the values and ethics of the group to which they belong and what forms the meanings of their work (Sharabi, 2017). This is why understanding the meanings of work, which go beyond the economic dimension, becomes so relevant (Borchardt & Bianco, 2016). Work has gained a new meaning, since individuals seek social recognition for their personal fulfillment (Silva & Cappelle, 2015), mainly through the recognition of the spectator (Bendassolli & Borges-Andrade, 2015).

Given the above, after searching the major portals of periodicals, such as Spell, Scielo, and Scopus, it was evident that there is a gap in research on the meanings of work of circus artists; in Brazil, these artists are often marginalized and forgotten. Based on this context, the question that arises is: "What are the meanings of work for circus performers?" The general objective of this research is to investigate the meanings of work of circus artists. To achieve this, the following specific objectives were outlined: 1. to identify whether circus work makes sense or not from the perspective of the individual dimension, based on the perspective of circus artists; 2. to evaluate the role of the organizational dimension in the work of circus artists; 3. to understand the importance of the social dimension in the work of circus artists.

This research broadens the understanding of the work developed by circus artists, as well as helping in the development of public policies that can facilitate the breaking down of prejudices, improvements in working conditions, and higher valuation of these professionals. As Girelli, Dal Magro, and Werner (2017) emphasize, circus activity is, above all, the way of life of individuals. This study seeks to overcome the gap in the literature on research related to the senses of the work of circus artists, understanding how artists relate to the work environment.

The remainder of this paper is structured as follows. Section 2 deals with work and its origins. Section 3 then contextualizes circus activity and the circus, showing the difficulties faced by the artists. Section 4 describes the methodological procedures for carrying out the research. In section 5, the results obtained from the data processing are presented. Finally, in section 6, the final considerations are discussed, with contributions and reflections for future research.

WORK AND THE MEANING OF WORK

Work is seen as a way of transforming nature, building something of value for the individual and the organization (Oliveira, Piccinini, Fontoura, & Schweig, 2004). To be useful, work must be used for some purpose, bringing identification and meaning to those who carry it out (Boas & Morin, 2016). When one tries to understand work in its innumerable consequences for cultures and different historical and social moments, it is common to use the expression "meaning of work" (Bendassolli & Tateo, 2018; Brun & Monteiro, 2010).
Several studies have been carried out to understand the meaning of work (Antal, Debucquet, & Frémeaux, 2017; Araújo & Sachuk, 2007; Bendassolli & Tateo, 2018; Boas & Morin, 2016; Tolfo & Piccinini, 2007), which show that, unlike other types of resources, individuals have management and control over work, and require participation, satisfaction, and rewards. For this reason, human resource departments perform basic functions, among them improving the meaning of work with commitment (May, Gilson, & Harter, 2004), enhancing workers' quality of life (Bernal, 2010; Elangovan, Pinder, & Mclean, 2010), and improving the motivational and organizational climate (Rodrigues, Barrichello, Irigaray, Soares, & Morin, 2017). In this regard, it is important to understand how individuals feel when they perform their tasks, even if at first these do not seem to make sense or motivate the workers.

According to Tolfo and Piccinini (2007), in the view of the MOW group, at the individual level the meaning of work can be defined as the social representation of the task performed by the individual, at the collective level by the feeling of belonging to a united class that carries out common work, and at the social level by the feeling of accomplishing work that contributes to society (Spinelli-De-Sá, Lemos, & Cavazotte, 2017). The adaptation made by Oliveira et al. (2004) to the classification used by Morin (2001) highlights the meaning of work in the individual, organizational, and social dimensions.

In the individual dimension, work is identified with the person's ethical values, is pleasurable, allows personal appreciation and satisfaction, and enables development and growth (Antal et al., 2017; Morin, Tonelli, & Pliopas, 2007; Oliveira et al., 2004). In this dimension, work that has no meaning goes against personal values, not allowing growth or recognition for the person (Morin et al., 2007; Oliveira et al., 2004).

The organizational dimension concerns aspects related to usefulness, the organization itself, and interpersonal relations at work (Antal et al., 2017; Oliveira et al., 2004; Tolfo & Piccinini, 2007). Rosso, Dekas, and Wrzesniewski (2010) and Duffy, Autin, and Bott (2015) argue that as employees identify themselves with their work groups, these provide a positive meaning, and that satisfaction is related to adjustments in the work environment that lead to a meaning perceived by the worker. Work that has no meaning is seen as unproductive, which leads to loss of time and to personal and professional dissatisfaction (Antal et al., 2017; Oliveira et al., 2004; Tolfo & Piccinini, 2007).

In the social dimension, the person must be able to contribute and to be useful to others and society. Thus, work adds value to the person and to society (Antal et al., 2018; Oliveira et al., 2004; Silva & Cappelle, 2017). The absence of meaning is seen when work does not offer benefits to society (Oliveira et al., 2004; Silva & Cappelle, 2017).

Given this contextualization of work and the meaning of work, the following section deals with the circus and the circus artist, a profession that remains symbolic in current labor relations (Concolatto, Rodrigues, & Oltramari, 2017), presenting the circus artist in the Brazilian context.
THE CIRCUS AND CIRCUS ACTIVITY

The traditional circus is an institution with a set of rules and social norms of its own; over time and space, these reaffirm its communal unity, but there are also emerging trends incorporating circus activity into other artistic contexts, such as theater, dance, and music, related to the contemporary urban cultural mode (Bezerra & Barros, 2016). Studying the culture they produce to continue building their territory is the same as understanding the actions and notions that constitute their daily life (Aguiar & Carrieri, 2016; Bezerra & Barros, 2016; Ilkiu, 2011).

Given the importance of itinerant activity, in 1978 the National Congress, expressing concern for this activity, promulgated Law 6.533, of May 24, which regulates the professions of artists and technicians in entertainment shows, ensuring in Article 29 the right to a place and to transfer in public primary and secondary schools for the children of these professionals (Brasil, 1978). The circus artist is a cultural worker, as related to culture, according to Law nº 6.533/78 (Brasil, 1978). Law No. 6.533 of 1978, which regulates the work developed by the employed artist, considers as such "the professional who interprets or performs works of any cultural nature, for the purpose of exposure or dissemination through mass media or in places where public entertainment programs are held" (Brasil, 1978), including circus establishments. From Oliveira's (2012) perspective, circus performers can perform their activities independently or as employees. The latter is the case when the performer is subject to the directive power of an employer, who is able to determine the function, time, and place of the provision of services. Oliveira (2012) also affirms that it is possible to establish the existence of an employment relationship if the artist receives weekly payment and housing, with the obligation to undertake exhibitions in facilities and at hours previously determined by the contractor. As long as the artist does not assume the risk of the business and the artist's activity constitutes an attraction, the employment bond is confirmed; however, the circus is made up of families working for their own benefit, often representing the sole livelihood of the family.

In light of this discussion, considering the meaning of work and circus artists, the following section sets out the methodological procedures that led to the results of this research.

METHODOLOGICAL PROCEDURES

This is a qualitative, descriptive, and exploratory study (Sampieri, Collado, & Lucio, 2006), based on the model proposed by Oliveira et al. (2004) and Morin et al. (2007), which verifies the meaning of work for the individual, considering also the employer organization (the circus) and society.

Data collection was performed through the application of semi-structured interviews. First, questions were asked that sought to verify whether or not the artists understood the meanings of the activities performed; then, the interviews explored their perceptions for each of the analyzed dimensions. All the interviews were recorded, with the participation of the researchers, who were placed as non-participant observers (Godoi, Bandeira-de-Mello, & Silva, 2006). Data collection was carried out from March 1 to 19, 2017.
The universe of this research was made up of artists from the Puerto Rico Circus (Circo Porto Rico), a medium-sized venture from Rio Grande do Sul, founded on May 11, 2001, which performs more than 350 shows a year and about 60 shows per season. This circus was awarded the Picadeiro trophy of the Ministry of Culture, and the Piolim and Arte Circense trophies granted by the Government of the State of São Paulo. The interviews were undertaken during the circus's stay in the city of Fortaleza, Ceará (Brazil). This circus was chosen at random because of the willingness of the artists to participate in this research. The sample comprised ten circus artists, as saturation was reached with this number of interviewees.

For the analytic instrument, based on Oliveira et al. (2004) and Morin et al. (2007), the categories of analysis were the individual, organizational, and social dimensions of the meanings of work. In the model used, one record unit excludes the other, i.e., if the respondent stated that an aspect was meaningful, it was not possible simultaneously for there to be no meaning in that context unit (Figure 4.1). The interviews generated about 120 minutes of audio requiring transcription for later analysis. Before conducting the interviews, we talked to the person in charge of the circus, explaining the objectives of the work and scheduling the day and hour of the interviews so as not to obstruct the daily activities of the artists. For the transcription of the interviews, the software Express Dictate v. 5.95 was used.

In the treatment of the data, the content analysis method (Bardin, 2011) was used, understood as a set of research techniques whose objective is to seek the meaning or meanings of a document, allowing tabulation, coding, and interpretation in a structured and sequential manner. For the data analysis, the software Atlas.ti (version 8) was adopted.

ANALYSIS AND DISCUSSION OF RESULTS

For the analysis and discussion of the results, the profile of the circus artists is first presented, and then the dimensions of the meanings of work are analyzed, according to the segmentation of the specific objectives of this research.

Profile of respondents

The sample comprised ten people, eight male (M) and two female (F). Most had school-age children accompanying the circus (Figure 5.1.1). The respondents have been given identifiers from I1 to I10 to maintain their anonymity. The average length of time spent in circus activities is 20 years and six months. However, there is great breadth here, from the shortest duration of 2 years to the longest of 43 years. The average age is 29.8 years, the youngest being 15 years old (circus performers start early in the profession) and the oldest 43 years old.

Dimensions of the meaning of work

The analysis and discussion of the results are presented for each dimension (individual, organizational, and social) in turn, drawing on the meaning of work from the perspective of the circus artists.
The individual dimension of the meaning of work

In the individual dimension, the results are evaluated for the following context units: coherence, alienation, valorization, pleasure, development, and survival and independence. This sub-section seeks to address the first specific objective, which is to identify whether the work makes sense or not from the perspective of the individual dimension, based on the views of the circus artists. The results are represented for each of the context units (Figure 5.2.1.1).

In the "coherence" context unit, 47 quotations were found. The interviewees reported identifying with their work and considered it important, as we can observe from I2: "Certainly (I identify myself). [...] I adore (what I do) because my life has always been this. I was born in this world and grew up watching everyone working, and we enjoy and learn." Although it is an activity that passes from generation to generation, and thus the circus artist may not have a real notion of the coherence of the work performed, the result for this component is in harmony with Morin (2001), who states that work and its process help individuals to discover and form their identities. Tolfo and Piccinini (2007) also emphasize that the meaning of work allows the construction of the personal and social identity of the individual through the tasks performed in the course of work.

In the "alienation" context unit, there were 31 quotations. These showed that the interviewees had knowledge and perceived the clarity of the objectives of the work. However, there were two mentions of a lack of clarity or lack of knowledge in this regard. For instance, I4 stated "(I present it) because I have learned ... I grasped it, and I go to the end," but when asked if he knew why he presents his number and about the usefulness of his activity, it was possible to perceive that there was no greater reflection on the activity that he performs. In contrast, all other participants had clarity and knowledge of the objectives of the work, totaling 33 occurrences related to alienation. Morin (2001) and Boas and Morin (2016) consider it necessary for the worker to understand the purpose of the work so that it makes sense, as do most of the circus artists investigated.

In the view of the participants, the "valorization" unit represented 28 quotations, with 100% corresponding to the recognition and valorization of the work, indicating performance satisfaction. In the case of artistic activity, it could be perceived in relation to this unit that the activities have a sense oriented toward values, favoring pleasure in accomplishing such work. As pointed out by I8: "I feel valued in this circus, and in all where this happens. The team likes my work. [...] Through the social networks and the team when the show ends, they come to hug me, hit the photo, and say that they liked the show, that they liked the clown, and that's where people are happier." Therefore, it can be concluded that the recognition of artists values what they know how to do, which according to Antal et al. (2017) relates to "valorization."
There were consistent and frequent references to the "pleasure" unit. None of the interviewees showed that they did not like the work, and their passion for the activity was verified in several interviews, totaling 43 positive mentions. The enthusiasm felt during the interviews reflected the pleasure that the interviewees demonstrated in their work, confirming Morin's (2001) and Boas and Morin's (2016) point that satisfaction can be felt when performing tasks that make sense in the work, and that pleasure in work is necessary to maintain health (Tolfo & Piccinini, 2007).

In the unit "survival and independence," Morin (2001) and Oliveira et al. (2004) emphasize that the financial element is paramount in ensuring survival, and they conclude that if workers have sufficient money to live comfortably, they will continue to work, as it is a source of livelihood and a means of relating to others, integrating them into a group or society. There were 36 references to attention to basic needs and financial returns, none of which were contradictory, as highlighted by I8: "I left (the circus), but I was in need, and in the circus people earn well. In the city, the minimum wage is only received at the end of the month, and here people receive their wage weekly [...] With the circus activity, I paid for a truck, paid for my car, I support my family, I paid my son's pension."

In the "development" unit, 50 passages were identified that confirm growth in both personal and professional settings, as well as the acquisition of new knowledge and skills. The statements verify that the fields of professional growth and personal development are connected, in line with Tolfo and Piccinini (2007) and Antal et al. (2017), who consider that work has meaning when it enables learning, development, recognition, and valorization of the activities that the individual performs.

Only the "alienation" context unit showed negative perceptions, totaling less than 1%. Thus, circus activity has meaning for the interviewees in the individual dimension: their work makes sense and can be identified with the moral values of the individual, and this can enable the development and growth of the circus artist.

The organizational dimension of the meaning of work

This dimension seeks to meet the second specific objective, which is to evaluate the role of the organizational dimension in the work of circus artists. The results are evaluated for the following context units: usefulness, work organization, and interpersonal relations (Figure 5.2.2.1).

In the "usefulness" unit, of the 46 positive mentions, 25 concerned the interviewees' consideration of their work as contributing to the circus, and 21 the interviewees' belief that their work met the expectations of the circus. For example, highlighting the meaningfulness of the work, I8 remarked "Besides (the owner of the circus considers my work useful), he values me a lot, thank God [...]. I can do all the tasks and without him asking me to do it." Andrade, Tolfo, and Dellagnelo (2012) confirm that the meaning of work is determined by utility, self-fulfillment, satisfaction, a sense of development, and personal and professional evolution, as well as freedom and autonomy in performing tasks.
The identification of the meaning of work in the "work organization" context unit can be verified through the 39 affirmations among the interviewees regarding autonomy, diversified work, new ideas or practices, and the challenging nature of the work. Work that makes sense allows workers to have autonomy and to be able to exercise their creativity and think, showing that work in the circus makes sense. Meaningless work, by contrast, is defined as work whose tasks do not motivate the worker. It is important to note that there were nine negative mentions about the artists' work organization.

Regarding diversified work, those who considered their work routine (7.5% of quotations) confirmed that they learn and improve their performance every day because of repetition. Those who considered their work diversified (17.5%) affirm that each day is a new presentation. Moreover, 25% of the quotations reveal that there is autonomy in the presentation, and there were no instances in the "no autonomy" unit. Regarding the suggestion of ideas or new practices, 17.5% of the quotations affirm that they can make proposals, while only 7.5% reported not having this opportunity/freedom of suggestion. According to I6: "It is great (the routine), because I am learning more and more, and I am getting better and better. [...] I have (the freedom to suggest some new idea or practice when I am doing my number). I say I'm going to make such a position: go, do, please, fine. [...] (My job) is challenging, there are few that do it."

Regarding "interpersonal relations," there were four passages in which this aspect was considered unfavorable in the view of the participants, corresponding to 11% of the quotations. In two quotations, this was related to a lack of acceptance in the work environment, corresponding to approximately 5% of the mentions. The main complaint about the circus concerned the interactions between the artists themselves and the difficulty of making friends outside the work environment, indicating that circus performers are close to each other and that interpersonal relations are strengthened only within the work environment.

In line with Oliveira et al. (2004) and Rosso et al. (2010), there were several positive quotations regarding meaningful work, namely the "favorable work environment" and "acceptance at work." This is exemplified by I1, who stated "It is quiet (the work environment is good), thank God. There are some who are bored, but people swallow it up." In addition, I2 noted "It is (the work environment is good). Sometimes, there are some discussions that are ordinary, but most are okay."

The organizational dimension of work includes aspects related to utility, work organization, and interpersonal relations in the work environment. For Tolfo and Piccinini (2007), for a job to have meaning in the organizational dimension, it must achieve results, have value for the company or the group, and be useful. Otherwise, the work will be unproductive, generating wastage of time and failing to have any sense. Circus activity has meaning for the interviewees in the organizational dimension; although there were some negative mentions, these were far fewer than the positive statements. Thus, it is possible to infer that circus artists perceive meaning in their work in the organizational dimension.
The social dimension of the meaning of work

In this dimension, we seek to meet the third specific objective, which is to understand the importance of the social dimension for the work of circus artists. The results are evaluated for the following context units: usefulness, valorization, and interpersonal relations (Figure 5.2.3.1). The social dimension of work presents 65% positive references and 35% negative, showing that the work makes sense for circus artists.

The "usefulness" context unit seeks to discover whether the work contributes to society. Only one of the interviewees (I4) did not perceive the work as contributing to the development of society, stating "[...] we do shows in school too, I understand if I say, let's go, we'll do it, we'll have a birthday." In contrast, the other interviewees believed that their work contributes to society. As mentioned by I8: "... It is (my work contributes to society), it is through clowning that we help to explain more the heads of the children, of society. There are people who have prejudice about race, sexual choice, and we explain that each person has a choice and each person makes life what he/she wants." Based on the above, it is possible to corroborate Morin et al.'s (2007) findings, since some interviewees reported that work makes sense when it contributes to society, transcending, in this case, individual and organizational issues.

Although sharing the same nomenclature as in the individual dimension, the "valorization" context unit within the scope of the social dimension takes a different approach, seeking to identify value and recognition by society. In this regard, there were 33 quotations, with 54% indicating value and recognition by society. Regarding the positive quotations, I1 stands out: "Some, yes, not all (of society values us). [We perceive this] because there are places and people that say 'these people of the circus are equal to gypsies, not quiet,' and some people, not all, discriminate a lot against the circus." Despite some positive quotations expressing a lack of prejudice and discrimination, there were also emphatically negative quotations. Among these, I6 stands out: "They could give more value, more recognition to the circus, they think that we, with permission for the word, are marginal, but we are citizens... we pay our taxes, and they do not see it, Hail Mary." As Morin (2002) comments, work should allow the union between activities and social sanctions, contributing to the construction of social identity and protecting personal dignity, rather than contributing to prejudice in society.

Concerning "interpersonal relations," there are references to some kind of discrimination within society. This is highlighted by the testimony of I5: "Often the people we meet in the city ask if we are from the circus and think that we have nowhere to sleep, nowhere to bathe, as if the circus played it, but they never come to me to say something positive." Despite remarks such as those of I5 and I6, the study reaffirms Silva and Cappelle's (2017) point that, even with the prejudice experienced by artists, they continue to identify meaning in the activities they perform. The context unit "interpersonal relations" in the social dimension places greater emphasis on discrimination, describing the continued prevalence of prejudice against the circus and circus artists.

Summary of key results

The main results found in this study are presented in Figure 5.3.1, with the number of positive and negative references, linked to the main theoretical bases.
Figure 5.3.1 summarizes the main results per analysis category and context unit, with the number of positive and negative references and the main theoretical bases. In the individual dimension, for example, coherence registered 47 positive and no negative references (the interviewees unanimously identify with their work and consider it important; Morin, 2001; Tolfo & Piccinini, 2007), and alienation registered 31 positive and 2 negative references (the interviewees have knowledge of and perceive the clarity of the objectives of the work, and the two negative mentions indicate a low level of alienation). The development unit, in turn, was marked by the possibility of growth in both the personal and professional fields, as well as by the acquisition of new knowledge and the development of new skills.

These results provide an overview of the meaning of work for circus performers, showing that in the individual dimension their work offers an opportunity to develop their physical abilities (Antal et al., 2017; Miranda & Bortoleto, 2018), provides pleasure, and makes the individuals feel valued (Antal et al., 2017; Tolfo & Piccinini, 2007); however, in the organizational dimension, the diversity of tasks and interpersonal relations were perceived as negative, especially concerning the difficulty of maintaining relationships in view of the itinerant aspect of the circus (Bezerra & Barros, 2016); in the social dimension, there were contrasting responses in the valorization aspect, and the prejudice experienced by circus artists became more evident.

This research is relevant in investigating the meaning of work for circus artists, a category of artists losing space within society. This is especially significant as circus activities represent the way of life for these people. In general, the findings show that these circus artists perceive the meaning of their work and that this is also related to how they perceive the world and insert themselves in society, thus bringing a sense of identity and social inclusion.

FINAL CONSIDERATIONS

This research had as its general objective investigating the meanings of the work of circus artists. Using the model proposed by Oliveira et al. (2004), based on Morin et al. (2007), three specific objectives were outlined, considering the individual, organizational, and social dimensions of the meaning of work.

From the perspective of the individual dimension, the findings verify that the work has meaning: according to the testimony of the artists, their work in the circus makes them feel fulfilled and satisfied because it promotes development, improvement, survival, and independence, which allows them to develop their personal and social identities, as well as being pleasurable. In the organizational dimension there is also meaning in work. The interviewees believe that they meet the interests and needs of the circus, considering their work mainly challenging, diversified, and autonomous. Moreover, in the social dimension, there is also meaning in work, in that respondents believe they contribute to society and are valued, in addition to maintaining good interpersonal relationships.
Although there are instances in the data that show perceptions of low appreciation for circus artists, as well as prejudice, discrimination, and even marginalization, there is a discernible positive balance in valorization. In this regard, the interpersonal relations factor presents the highest positive balance. These findings point to the reasons why circus professionals continue to feel motivated to develop their activities, despite the difficulty that circuses face in maintaining presentations in the country. Notwithstanding the artists' understanding of the meaning of their work, with its challenges and potentialities, it is important to focus on the prejudice identified in the social dimension, namely that they are marginalized by society; this meant that there was little difference in the results of the valorization context unit.

This research contributes in practical terms by affording professionals in the circus reality the ability to avoid problems such as increased child prostitution and chemical dependency. In the academic field, the research seeks to broaden the perceptions of artists in a field of activity that is in decline. Specifically, this research provides a greater understanding of the meaning of work in the circus environment, seeking to contribute to a change in the stigmatized view and thus mitigate the devaluation that the circus and its artists have been suffering.

The results presented here cannot be generalized, as the sample in this research may not be representative of artists more widely. In particular, the interviews were performed only with artists from one circus, of medium size, with a good structure and reputation in the circus environment. Thus, as a suggestion for future study, this research could be extended to artists from other circuses (located in other cities and countries), and to other artists, such as musicians, dancers, and actors, to provide a broader picture of the meaning of the work of artists. Moreover, quantitative research on the meaning of work for circus artists could be carried out.

Figure 4.1. Categories, context, and record units: meaningful and not meaningful work.
Figure 5.1.1. Profile of respondents.
Figure 5.3.1. Summary of main results.
Alternative Use of Extracts of Chipilín Leaves (Crotalaria longirostrata Hook. & Arn) as Antimicrobial

The genus Crotalaria comprises about 600 species that are distributed throughout the tropics and subtropical regions of the world; they are antagonistic to nematodes in sustainable crop production systems, and have also shown antimicrobial capacity. Chipilín (C. longirostrata), which belongs to this genus, is a wild plant that grows in the state of Chiapas (Mexico) and is traditionally used as food. Its leaves also have medicinal properties and are used as hypnotics and narcotics; however, the plant has received little research attention to date. In the experimental part of this study, dried leaves were macerated in ethanol. The extract obtained was fractionated with ethyl ether, dichloromethane, ethyl acetate, 2-propanone, and water. The extracts were evaluated against three bacteria—namely, Escherichia coli (Ec), Citrobacter freundii (Cf), and Staphylococcus epidermidis (Se)—and three fungi—Fusarium oxysporum A. comiteca (FoC), Fusarium oxysporum A. tequilana (FoT), and Fusarium solani A. comiteca (FsC). During this preliminary study, a statistical analysis of the data showed that there is a significant difference between the control ciprofloxacin (antibacterial), the antifungal activity experiments (water was used as a negative control), and the fractions used. The aqueous fraction (WF) was the most active against FoC, FsC, and FoT (30.65, 20.61, and 27.36% at 96 h, respectively), and the ethyl ether fraction (EEF) was the most active against Se (26.62% at 48 h).

Introduction

The number of plant diseases caused by pests attacking crops has increased the need for new antimicrobials to eliminate the pathogens. This need has led to a renewed focus on natural extracts from plants, fungi, bacteria, algae, etc. [1]. Every year, plant diseases cause an estimated 40 billion dollars in losses worldwide [2]. Chemical fungicides are not readily biodegradable and tend to persist for years in the environment. As a result, the use of natural products for the management of fungal diseases in plants is considered a reasonable substitute for synthetic fungicides [3]. The genus Crotalaria includes around 600 species distributed throughout the tropics and subtropical regions of the world, which have been used as antagonists to nematodes in sustainable crop production systems [4,5].

There are also previous studies showing the anti-inflammatory [6], anthelmintic [7], and antitumoral capacity [8], as well as the antimicrobial activity of C. madurensis [9] and C. burhia [10,11], which showed activity against Bacillus subtilis and Staphylococcus aureus, while C. pallida demonstrated an effect on Escherichia coli and Pseudomonas sp. [12][13][14]. The species of this genus contain alkaloids, saponins, and flavonoids, to which biological activity is attributed [4]. Chipilín (Crotalaria longirostrata) belongs to this genus; it is a wild plant that grows in the state of Chiapas, Mexico, that is traditionally used as food [15], and it also has ethnobotanical uses as a hypnotic and narcotic [16]. Since there are few reports on the biological activity of the species C. longirostrata, this study fractionates the crude extract from Chipilín (C. longirostrata) leaves, obtaining ethyl ether (FEE), dichloromethane (FDM), ethyl acetate (FEA), 2-propanone (FAO), and aqueous (FW) fractions, as a preliminary measure in order to evaluate its potential as an antimicrobial.
Extraction

The plant material was shade-dried for seven days. The dried leaves were ground to a fine texture and then soaked (0.15 g of dry matter/mL of solvent) in EtOH (96%) (Meyer, CDMex, Mexico) for 15 days. After filtration, the extract was evaporated to obtain the crude extract. About half of the crude extract was suspended in distilled water (H2O) (Sigma-Aldrich-Merck, Darmstadt, Germany) and partitioned sequentially with ethyl ether (Et2O) (Meyer, CDMex, Mexico), followed by dichloromethane (CH2Cl2) (Meyer, CDMex, Mexico), ethyl acetate (AcOEt) (Meyer, CDMex, Mexico), and 2-propanone (C3H6O) (Meyer, CDMex, Mexico), respectively. The organic layer from each solvent was concentrated to dryness under reduced pressure and dried over anhydrous sodium sulfate to afford the Et2O, CH2Cl2, AcOEt, C3H6O, and H2O fractions. These fractions were stored at 4 °C until use. Each fraction was dissolved in dimethyl sulfoxide (DMSO) (Sigma-Aldrich-Merck, Darmstadt, Germany) and prepared at a concentration of 200 mg/mL in all bioassays [17].

Evaluation of Antifungal Activity

In order to evaluate the effect of the fractions on the microorganisms by direct contact, Whatman No. 1 paper discs were impregnated with 10 µL of the corresponding fraction. Later, discs with the microorganism were placed on the discs with the fraction. In the second bioassay, the effect of the fractions that showed antimicrobial activity in the first bioassay was evaluated: a 5 mm paper disc with the fraction was placed in the Petri dishes colonized by the microorganism. Microbial growth was measured every 24 h up to 96 h; as a positive control, sterile distilled water (H2O) was employed. A solvent test was also performed using a filter paper disc treated with sterile DMSO [19]. Negative control tests with DMSO were performed (data provided in the Supplementary Materials). The diameters of the inhibition zones were measured in millimeters. The percentage inhibition of radial growth (PIRG) was calculated using the Abbott formula: PIRG (%) = [(RC − RT)/RC] × 100, where RC is the radius of the control and RT the radius of the treatment [20].

Evaluation of Antibacterial Activity

A volume of 0.1 mL of inoculated cell suspension broth was placed on each Petri dish (Ec 2.95 × 10³ CFU/mL; Cf 8.47 × 10³ CFU/mL; and 2.86 × 10⁶ CFU/mL for Se) [12]. Then, four Whatman No. 1 paper discs were impregnated with 10 µL of the corresponding fraction. The diameter of the growth inhibition zone was measured at 15 h, 24 h, 40 h, and 48 h. Ciprofloxacin at 125 mg/mL for Ec and Cf, and chloramphenicol at 5 mg/mL for Se [20], were used as positive controls. A solvent test was also performed using a filter paper disc treated with sterile DMSO [19]; the DMSO tests showed growth over the whole plate, identical to the control (water). Percentage inhibition (PI) was calculated using a modified expression of the Abbott formula: PI (%) = [DT/DC] × 100, where DC is the diameter of the inhibition halo of the control and DT is the diameter of the inhibition halo of the treatment [20].

Experimental Design

A completely randomized experimental design with three replicates was used for each microorganism, taking PIRG or PI as the response variable. A simple ANOVA was performed, with a comparison of means using the Tukey test at 95% confidence.
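For illustration, the snippet below shows how the PIRG calculation and the subsequent one-way ANOVA with a Tukey comparison could be scripted. It is a hedged sketch using made-up radius measurements, and it assumes SciPy and statsmodels are available; the paper does not state which software was used for its statistics.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def pirg(rc: float, rt: float) -> float:
    """Percentage inhibition of radial growth (Abbott): PIRG = (RC - RT) / RC * 100."""
    return (rc - rt) / rc * 100.0

# Hypothetical radial growth measurements (mm) at 96 h, three replicates per fraction.
control = [40.0, 41.5, 39.0]                 # sterile distilled water
treatments = {
    "FW":  [28.0, 27.5, 29.0],               # aqueous fraction
    "FEE": [35.0, 36.0, 34.5],               # ethyl ether fraction
    "FDM": [33.0, 32.5, 34.0],               # dichloromethane fraction
}

rc = np.mean(control)
pirg_values = {name: [pirg(rc, rt) for rt in reps] for name, reps in treatments.items()}

# One-way ANOVA across fractions, followed by Tukey's HSD at 95% confidence.
groups = list(pirg_values.values())
f_stat, p_value = f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

labels = np.repeat(list(pirg_values.keys()), [len(v) for v in pirg_values.values()])
values = np.concatenate(groups)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```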
Evaluation of Antifungal Activity

The bioassays were carried out to determine the possible antimicrobial activity of the different fractions and to direct subsequent, more specific bioassays toward the most promising fraction.

In the first bioassay, the fractions showed a fungistatic effect on the three fungi. For each fungus, the most effective fraction at 24 h was different. In the case of FoC, it was the aqueous fraction, with a PIRG of 50.00%. For FoT, the highest PIRG value was obtained with the dichloromethane fraction (FDM, 61.76%), and for FsC it was the 2-propanone fraction that obtained the highest inhibition, with 35.00% (Table 1). However, for the three fungal species, the aqueous fraction (FW) was the one with the highest percentage of inhibition (PIRG) at 48 h, 72 h, and 96 h (Table 1).

For the second bioassay, the aqueous fraction was employed. For FoC and FsC, a mycelial growth-promoting effect was observed at the end of the test time. For FoT, the aqueous fraction showed a value of 27.94% inhibition in the first 24 h; however, this effect did not last after 72 h (Table 2).

Discussion

In the last two decades, there has been growing interest in research on extracts of medicinal plants as sources of new antimicrobial agents [12][13][14]. Recent findings about species of the genus Crotalaria describe their biological activity; for example, the ethanolic fractions of C. retusa, the chloroform fraction of C. prostrata, the ethanolic extract of C. medicaginea, the ethanolic extract of C. pallida, the methanolic extract of C. burhia, and the fractions of C. bernieri and C. madurensis showed inhibitory capacity against E. coli [4,9,10,13,21]. The species C. longirostrata has been reported to have ethnobotanical activity, but an antimicrobial evaluation had not yet been done, and the chemical compounds responsible remained unknown. We evaluated its antimicrobial potential, finding that the dichloromethane fraction is more effective in inhibiting the growth of FoT (61.7%) than the aqueous extract of C. medicaginea (33%) and the methanolic extract of C. filipes (55%). However, it was less efficient than the isolated peptide of C. pallida (70%) [1,12,22]. Further, the aqueous fraction showed low inhibition values (2.5%) against F. solani, while the 2-propanone fraction showed activity (35%), as has been reported for the aqueous extract of C. juncea [23]. Subsequently, the antibacterial evaluation of the fractions of C. longirostrata found statistically significant differences between the fractions and the control for C. freundii and S. epidermidis; however, the percentage of inhibition was lower than that of the antibiotic (control).
The results obtained in this preliminary study show the fungistatic, but not fungicidal, capacity of the fractions obtained from Chipilín (C. longirostrata). Some phenolic compounds, alkaloids, essential oils, and glycosides have been shown to be responsible for antifungal activity [1]. This suggests the presence of these compounds in the fractions that were analyzed, which proved to be more effective against fungi than against bacteria. The effect of substances of plant origin is due to mechanisms of direct fungitoxic action [21], while the bactericidal potential is associated with anthraquinones and flavonoids of a catechinic nature [24]. There is great diversity in the modes of action of the secondary metabolites that have been reported as antifungal [25][26][27]. However, each extract showed a specific activity spectrum, which could be due to differences in the chemical nature and the concentration of bioactive compounds in the extracts [21]. For example, the EEF fraction against FsC showed inhibition that was low or equal to the growth of the control, so its PIRG value was diminished. Therefore, it is necessary to carry out further phytochemical studies to identify the secondary metabolites of Chipilín (C. longirostrata) leaves that are responsible for its antimicrobial activity.

Table 1. Percent inhibition by direct contact of Chipilín (C. longirostrata) active fractions on phytopathogenic fungal species.

Table 2. Volatility test of the aqueous fraction of Chipilín (C. longirostrata) on phytopathogenic fungal species.
v3-fos-license
2014-10-01T00:00:00.000Z
2011-07-28T00:00:00.000
3854076
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CC0", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0022869&type=printable", "pdf_hash": "7918e9e60454886e10cd1c2804e91544a51840c4", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42304", "s2fieldsofstudy": [ "Biology" ], "sha1": "2b0a7ba98679329491852e6a1fb3cf2592203e5f", "year": 2011 }
pes2o/s2orc
Inter-Specific Coral Chimerism: Genetically Distinct Multicellular Structures Associated with Tissue Loss in Montipora capitata Montipora white syndrome (MWS) results in tissue-loss that is often lethal to Montipora capitata, a major reef building coral that is abundant and dominant in the Hawai'ian Archipelago. Within some MWS-affected colonies in Kane'ohe Bay, Oahu, Hawai'i, we saw unusual motile multicellular structures within gastrovascular canals (hereafter referred to as invasive gastrovascular multicellular structure-IGMS) that were associated with thinning and fragmentation of the basal body wall. IGMS were in significantly greater densities in coral fragments manifesting tissue-loss compared to paired normal fragments. Mesenterial filaments from these colonies yielded typical M. capitata mitochondrial haplotypes (CO1, CR), while IGMS from the same colony consistently yielded distinct haplotypes previously only found in a different Montipora species (Montipora flabellata). Protein profiles showed consistent differences between paired mesenterial filaments and IGMS from the same colonies as did seven microsatellite loci that also exhibited an excess of alleles per locus inconsistent with a single diploid organism. We hypothesize that IGMS are a parasitic cellular lineage resulting from the chimeric fusion between M. capitata and M. flabellata larvae followed by morphological reabsorption of M. flabellata and subsequent formation of cell-lineage parasites. We term this disease Montiporaiasis. Although intra-specific chimerism is common in colonial animals, this is the first suspected inter-specific example and the first associated with tissue loss. Introduction Diseases inducing tissue loss have led to declines of dominant corals in Florida and the Caribbean [1,2] and are generally thought to be caused by infectious agents. For example, convincing evidence exists experimentally and morphologically that black band in the Caribbean is caused by a consortium of bacteria and other organisms [3]. Several studies have implicated bacteria as causes of tissue loss in Acropora from the Caribbean [4,5] and Acropora and Pachyseris the Pacific [6] based on the ability of these agents to replicate gross lesions of tissue loss experimentally. However, tissue loss is a non-specific gross lesion that can be associated with a wide variety of extrinsic organisms such as bacteria, ciliates, algae, fungi, crown of thorns starfish, snails, nudibranchs and flat worms [7,8,9,10,11]. This complicates determination of causation of lesions in corals based on gross examination alone [12,13]. In contrast to extrinsic factors, much less is known about intrinsic factors associated with tissue-loss in corals. One example is the coral Seriatopora hystrix that undergoes a physiological process called polyp bail-out in response to predation or environmental stress resulting in rapid tissue loss over the entire colony [14]. Another possible example is ''shut-down-reaction'' in Acropora [15]. Intrinsic genetic factors associated with disease in corals have thus far not been documented. As part of an investigation of tissue-loss in a dominant coral, Montipora capitata, in Kane'ohe Bay, Oahu, Hawai'i, we documented pathology associated with a putative cellular parasite (referred to as montiporaiasis) that did not fit standard Cnidarian morphology [16]. We present here molecular and protein data that reveal unexpected differences between these putative parasites and mesenterial tissues from the same colony. 
Based on the combined data, we hypothesize IGMS to be somatic or germ cell lineage parasites formed by inter-species chimerism between two reef corals, M capitata and Montipora flabellata. Results Coral fragments with high densities of IGMS manifested focal to multifocal indistinct acute tissue-loss revealing bare white skeleton (Fig. 1A). On microscopy, IGMS were located within gastrovascular canals and when present in high numbers resulted in fragmentation or effacement of basal body walls; however, cell necrosis was not present (Fig. 1B-C). IGMS were round to amorphous, contained occasional pigment cells, and ranged from 85-350 mm at their widest point. Occasionally, central cavities lined by cuboidal cells were seen ( Fig. 1D-F). Masson's trichrome failed to reveal collagen within the IGMS, and no zooxanthellae, cnidae (nematocysts), or gonads were seen on light microscopy. On electron microscopy, IGMS consisted of amorphous masses of cells with no mesoglea, gut, or other evident organized internal structures (Fig. 1G); occasional cilia with characteristic basal body were observed (Fig. 1H) as were what appeared to be pigment cells (Fig. 1I). Characteristics of mesenterial filaments and IGMS are summarized in Table 1. Of 46 colonies with Montipora white syndrome (MWS) examined at multiple time points over one year, prevalence of IGMS ranged from 0 to 34%. Significantly higher densities of IGMS were present in tissues with lesions (0.0760.08 IGMS/mm2) compared to normal tissues from the same colony (0.0160.03 IGMS/mm2) (Mann-Whitney U = 214, p = 0.004), and IGMS were present in consistently higher densities and deeper within tissues (Fig. 2). Attempts to infect colonies with IGMS either through direct contact or water in aquaria failed, and experimental fragments with IGMS invariably lost all their tissues within 6-8 weeks. However; in two separate experimental open water table studies to monitor healing of experimentally induced lesions in lesion-free M. capitata, low densities (1-3 IGMS/tissue section) were observed on histology (100% of 80 fragments in the first experiment and 10% of 80 fragments in the second experiment); all fragments subsequently healed completely. When coral fragments were incubated overnight with salt agar, IGMS migrated 0.1-4 cm from within the fragments into surrounding sterile seawater. Compared to mesenterial filaments, IGMS had a different protein profile; bands unique to IGMS were seen between 20-45 KD whereas bands unique to mesenterial filaments were seen at .45KD (Fig. 3). The initial survey (DNA extraction from IGMS on 4/21/2009) with 'universal' primers yielded sequences for 18S [17], CO1 [18], and 16S [19], with strong similarity to close relatives of Montipora (93% to Montipora peltiformis AY722777.1, and 100% to Anacropora matthai AY903295.1, respectively) according to the National Center Biological Information (NCBI) basic local alignment search tool [20,21]. We observed that the CO1 sequence from IGMS tissue unexpectedly differed from Hawaiian M. capitata from a previous study [22] and was 100% identical to haplotypes that have previously only been isolated from M. flabellata or from the very rare species M. dilatata, or M. cf. turgescens (Fig. 4). The second independent experiment extracted DNA from freshly isolated IGMS (1/11/2010) and confirmed this result; all mesenterial filaments shared identical CO1 haplotypes with other M. capitata samples, while all IGMS haplotypes from the same colonies were identical to M. flabellata. 
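The lesion-versus-normal density comparison above (Mann-Whitney U = 214, p = 0.004) can be reproduced in outline with scipy; the sketch below uses randomly generated placeholder densities rather than the study's measurements.

# Minimal sketch of the IGMS density comparison described above (Mann-Whitney U test).
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Hypothetical IGMS densities (IGMS per mm^2) for 46 fragments with and without lesions.
lesion_density = rng.gamma(shape=1.0, scale=0.07, size=46)
normal_density = rng.gamma(shape=1.0, scale=0.01, size=46)

u_stat, p_value = mannwhitneyu(lesion_density, normal_density, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.0f}, p = {p_value:.4f}")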
The same pattern was observed for the mitochondrial control region for all four colonies (NCBI #HQ246454-HQ246712) (Fig. 5). Microsatellite results further confirmed that mesenterial filaments and IGMS tissue types from each colony were genetically distinct. Most loci differed by several alleles between tissue types; on average there were fewer shared microsatellite alleles than different ones (overall average bands shared = 1.760.5 SD, overall average bands different = 1.960.3 SD); (Table S1, S2). All loci except Mc0947 showed multiple alleles per locus (overall average = 3.860.3 SD) per colony, a pattern inconsistent with a single diploid organism. Each tissue type also contained an excess of alleles per locus; mesenterial filaments average = 2.260.7 SD; IGMS average = 3.360.4 SD that has not been previously observed for other M. capitata samples using these markers [23]. This pattern is consistent either with polyploidy or a mixed sample containing several individuals. These microsatellite results were highly consistent over repeat runs and multiple independent isolations and extractions of IGMS and mesenterial filaments from affected colonies, whereas no peaks were observed in any of the negative controls. Discussion Within affected M. capitata colonies, we saw unusual multicellular structures (IGMS) that contained mitochondrial CO1 and control region sequences that differed from that of mesenterial filaments from the same colony in terms of morphologic, protein, and genetic profiles. IGMS were not transmissible experimentally, wound repair experiments in healthy M. capitata failed to reveal IGMS as a simple host response to trauma, and IGMS were expelled or capable of movement out of semi-submerged coral fragments placed on salt agar such that they could be collected in isolation from the host coral tissue. Finally, IGMS appeared to be harmful to the coral host as they were consistently associated with thinning and fragmentation of the basal body wall. The IGMS mitochondrial haplotypes differed from M. capitata (CO1 n = 6, CR n = 6) and were identical to the reef coral, M. flabellata (or to the very rare M. dilatata or M.cf. turgescens) with the exception of a single CO1 sequence (IGMS6.16.10) that was also divergent from other M. capitata haplotypes ( Figure 5). Microsatellite markers and protein profiles indicated consistent differences across most loci between paired mesenterial filaments and IGMS samples from the same colony, and the microsatellite profiles for both tissue types were inconsistent with a single diploid organism. This result was surprising given that no such issue was detected in a sample of 560 colonies genotyped from 13 locations across the length of the Hawai'ian Archipelago [23]. Given these combined observations, we propose that IGMS are cell-lineage parasites that are the result of chimeric fusion between M. capitata and M. flabellata larvae followed by morphological resorption of M. flabellata. Chimerism (the fusion of two or more post-zygotic individuals) has been documented in nine phyla; most commonly in colonial marine animals such as corals, bryozoans, and ascidians [24,25]. Newly settled colonial animals typically have high mortality and gregarious settling, and larval chimerism is thought to be a mechanism to rapidly increase size, survivorship, and genetic variation in a heterogeneous environment [26,27,28,29,30,31]. 
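A minimal sketch of the "excess alleles per locus" check described above: a single diploid individual carries at most two alleles at a microsatellite locus, so more than two scored alleles flags a mixed or polyploid sample. Locus names (other than Mc0947) and allele sizes below are hypothetical.

# Count distinct alleles per locus and flag loci inconsistent with a single diploid genotype.
genotype = {
    "Mc0947": [212, 216],
    "LocusA": [148, 152, 160, 166],   # hypothetical
    "LocusB": [97, 101, 109],          # hypothetical
}

def alleles_per_locus(calls):
    return {locus: len(set(sizes)) for locus, sizes in calls.items()}

counts = alleles_per_locus(genotype)
flagged = [locus for locus, n in counts.items() if n > 2]
mean_alleles = sum(counts.values()) / len(counts)
print(f"Mean alleles per locus: {mean_alleles:.1f}")
print("Loci inconsistent with a single diploid genotype:", flagged)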
Somatic or germ lineage parasitism is a potential negative consequence of chimerism, whereby a dominant genotype reabsorbs the tissue of a subordinate that can then persist in the form of totipotent stem cells that can later invade and parasitize either the somatic or germ tissue [25,32,33,34,35]. Allorecognition systems are thought to have emerged as a defense against cell lineage parasitism; however, in colonial animals this mechanism is less active and potentially error prone during early stages of development [36,37]. For example in the coral Stylophora pistillata, non-related larvae fused and formed stable chimeras, while colonies older than 2 months were incompatible [38]. Intra-specific chimerism is common in soft coral [39] and has been well documented in stony corals, most notably in S. pistillata [38,40], Pocillopora damicornis [29], and Acropora millepora [41], a sister genus to Montipora. Up to 47% of A. millepora larvae fused under lab conditions and microsatellite markers revealed that as many as 5% of the colonies in nature were composed of two or more closely related genotypes [41]. Inter-specific chimerism has been experimentally induced in a variety of organisms including vertebrates [42] and plants [43]. This is the first instance to our knowledge of suspected inter-specific animal chimerism; however; coral species boundaries are difficult to delineate and in some cases may be semi-permeable due to hybridization. Confirming this hypothesis will require more targeted studies, but if confirmed, corals affected by this disorder are likely to serve as a useful model. If the chimera hypothesis is true, it is not yet clear what triggers the onset of tissue loss in M. capitata. Temperature may be a factor. For example, clonal urochordates (Botryllus schlosseri) have shifts in chimeric constituents in accordance with changes in seawater temperature [40]. The exact role that IGMS play in tissue loss in M. capitata is unclear, but IGMS were more abundant in coral fragments manifesting tissue-loss. Parasites do not always kill their hosts, and animals that appear healthy can harbor parasitic infections [44] which would be consistent with the presence of There are alternative hypotheses for IGMS; however, they do not reconcile with all of the data. Hybridization between M. capitata and M. flabellata (or the rare M. dilatata or M. cf. turgescens) could potentially result in multiple tissue types with differing mitochondrial genotypes. This is similar to mitochondrial heteroplasmy and doubly uniparental inheritance (DUI) observed in mussels (Mytilus sp.), where male gonads contain a distinct mitochondrial genotype [45,46]. The microsatellite results (SI text), however, are inconsistent with hybridization because most loci differ between mesenterial filaments and IGMS within individual colonies. Furthermore, the regions where M. capitata were sampled were in South Kane'ohe Bay whereas M. flabellata are concentrated in the north bay, only a handful of M. dilatata colonies exist in that area, and M. cf. turgescens has only been documented in the Northwest Hawai'ian Islands [47]. We have not thus far documented M. flabellata in the vicinity of affected colonies, but montiporaiasis affects ca. 20% of M. capitata colonies manifesting tissue loss within Kane'ohe Bay. We have no data on the overall prevalence of IGMS in M. capitata that do not manifest tissue loss. 
Allopolyploidy may result in more than two alleles per locus per individual; however, the genotype should remain consistent across tissues, and the maximum number of alleles should be relatively consistent among individuals. The maximum number of alleles per locus is highly variable among tissue types with a colony and between individuals which is more consistent with a mixture of several diploid individuals. Furthermore, a population genetic survey of 560 colonies sampled across the length of the Hawai'ian Archipelago found that virtually all had only 2 alleles per locus, and all but 2 loci with null alleles conformed to Hardy-Weinberg expectations [23]. Contamination, mislabeling or other procedural artifacts are highly unlikely because the results were highly concordant across mitochondrial, microsatellite and protein assays in spite of the fact that multiple collections, purifications, DNA extractions and amplifications were done by separate workers. For the microsatellite assay, all samples were run twice and were highly consistent, and there were no signs of amplification in the negative controls done on the ABI 3100 Genetic Analyzer which is highly sensitive to minute amounts of DNA. Known PCR artifacts such as nonspecific amplification primer binding, or slip-strand mispairing are inconsistent with the genetic results presented here. More broadly, these findings may be the first example of chimerism between two coral species and the first implication of chimerism resulting in coral disease. In addition to elucidating their role in causing tissue-loss in M. capitata, this phenomenon may be a valuable system for research on the evolution of multicellularity and allorecognition systems in corals in general. Finally, the presence of multiple genotypes from different species within a single coral colony has the potential to confound a wide variety of studies, particularly phylogenetic, population genetic, and proteomic studies. It is also likely to add to the difficulty in distinguishing between coral species and in understanding the role of hybridization in the evolution of reef-building corals. Accordingly, it would be wise for such studies to incorporate microscopic examination of tissues to at least rule out the presence of cellular lineage parasites that could affect genomic investigations. Materials and Methods Paired fragments (normal and lesion tissue) were collected from M. capitata colonies manifesting tissue loss [12] and fixed in zinc formalin (Z-Fix, Anatech) diluted in seawater according to manufacturer instructions. Coral fragments were subsequently decalcified (Cal Ex II, Fisher Scientific), embedded in paraffin, trimmed at 5 mm, and stained with hematoxylin and eosin. As appropriate, Massons trichrome stain was used to highlight collagen in the sections. For each of 46 colonies, sampled over a one-year period, IGMS were quantified in paired tissue sections with and without lesions by counting numbers of IGMS/mm 2 along a gradient from the lesion inwards in 100 mm intervals. Mean IGMS densities were compared between normal and lesion fragments using the Mann-Whitney U test because the data did not meet assumptions of normality and equal variance. For electron microscopy, IGMS were collected from live coral by partially immersing a coral fragment in artificial seawater in a petri dish, placing a block of agar made with saturated NaCl on top of the fragment, and incubating at room temperature overnight. 
This provided a high salt/low moisture gradient prompting the IGMS to migrate or be expelled from the fragment into the surrounding artificial seawater, from which they were harvested the next morning (salt method). IGMS were fixed in Trump's fixative [48], post-fixed in 2% osmium tetroxide, embedded in epoxy, and cut into 1 µm thick toluidine blue-stained sections. Ultrathin sections were stained first with uranyl acetate, then lead citrate, and examined using a Zeiss LEO 912 transmission electron microscope. To assess whether IGMS were transmissible, a fragment from IGMS-infected colonies was incubated with two IGMS-free fragments from a healthy colony (infection/non-infection was confirmed by histology), either in direct contact or in water contact, in closed 11 L aquaria filled with artificial seawater maintained at 27 °C on a 12:12 LD cycle; a duplicate tank with 3 healthy, histologically confirmed uninfected fragments served as a control. Experiments were ended if tissue-loss developed in healthy contact or non-contact fragments or when the IGMS-laden fragment died completely. At termination of the experiments, all tissues were examined by histology for the presence of IGMS. Transmission experiments were replicated three times with separate colonies (27 fragments total examined). To determine if IGMS were a host response to trauma, 80 fragments from a healthy colony were experimentally traumatized by abrading a 1 cm² area of tissue and skeleton and monitoring the wound repair process using histology every 2-4 days for 32 days. For protein and molecular studies, paired IGMS and extruded mesenterial filaments were collected from a single affected coral fragment from each of three different colonies using the salt method, harvested, washed extensively with sterile artificial seawater, pelleted by centrifugation, and the seawater decanted prior to freezing (−70 °C). For proteins, IGMS and mesenterial filaments were homogenized separately in 10 volumes of phosphate buffered saline and centrifuged, the supernatants were resolved on 12% denaturing polyacrylamide, and bands were visualized with silver [49]. DNA was extracted on three separate occasions, by three separate workers, from IGMS and mesenterial filaments isolated from a single fragment as previously described. In total there were 6 different colonies (2 colonies/worker) with paired IGMS and mesenterial filaments extracted using the Qiagen DNeasy Blood and Tissue kit (Tissue protocol) following the manufacturer's recommendations. PCR primers were based on previously published primers: mt16S, 1 TCGACTGTTTAGMAAAAACATA, 2 ACGGAATGAACTCAAATCATGTAAG [19]; mtCO1, HCO2198 TAAACTTCAGGGTGAGMAAAAAATC, LCO1490 GGTCAACAAATCATAAAGATATTGG [18]; mtCR, Ms FP2 TAGACAGGGGMAAGGAGAAG, MON RP2 GATAGGGGCTTTTCATTTGTTTG [17]. PCR reactions were performed on a MyCycler thermal cycler (BioRad). Each PCR contained 1 µL of DNA template, 2.5 µL of 10× ImmoBuffer, 0.1 µL IMMOLASE DNA polymerase (Bioline), 3 mM MgCl2, 10 mM total dNTPs, 13 pmol of each primer, and molecular biology grade water to a 25 µL final volume. Hotstart PCR amplification conditions varied slightly depending on the primer set used and were generally: 95 °C for 10 min (1 cycle); 95 °C for 30 s, annealing temperature (2 degrees below the primer melting temperature, ranging between 50 and 60 °C) for 30 s, and 72 °C for 60 s (35 cycles); followed by a final extension at 72 °C for 10 min (1 cycle). PCR products were visualized using 1.0% agarose gels (1× TAE) stained with GelStar.
PCR products for direct sequencing were treated with 2 U of exonuclease I and 2 U of shrimp alkaline phosphatase (Exo:SAP) using the following thermocycler profile: 37 °C for 60 min, 80 °C for 10 min. Treated PCR products were then cycle-sequenced using BigDye Terminators (Perkin Elmer) and run on an ABI 3130XL automated sequencer at the NSF-EPSCoR core genetics facility at the Hawai'i Institute of Marine Biology (HIMB). Resulting sequences were inspected and aligned using Geneious Pro 4.8.5 [50] implementing either ClustalW [51] or Muscle [52]. The nucleotide substitution model was selected in Modeltest V.3.7 [53] by the Akaike information criterion (COI = K81uf+I). All phylogenetic analyses were performed with Bayesian Inference (BI) and Maximum Likelihood (ML). Bayesian Inference trees were generated with MrBayes 3.1.2 [54], with 1,100,000 generations and a burn-in of 110,000 generations, and ML trees were generated with RAxML [55]. Microsatellite markers, PCR conditions, the primer tailing method, and pooling methods were as described previously [56]. The amplified fragments were analyzed on the ABI 3100XL Genetic Analyzer at the EPSCoR core genetics facility at HIMB and sized using GENEMAPPER v4.0 and GS500LZ size standards (Applied Biosystems). Bands were scored as present only in cases where a clearly visible peak was at least one quarter the height of the size standard. Reactions that failed to produce any clear peaks were scored as 'na' (Appendix 1).
Table S1. Sample genotype table. Absence of bands is indicated by an X; f indicates a failed reaction. Each colony is represented by a number indicating the month and day of SCP isolation. Each sample was run twice (a, b). Gray columns indicate confirmed differences between MF and SCP for a given colony. (XLS)
Acknowledgments: Mention of trade names does not imply endorsement by the US Government. Evelyn Cox, Caroline Rogers, and Robert Kinzie provided constructive comments.
v3-fos-license
2019-02-27T23:57:05.475Z
2009-01-01T00:00:00.000
128586501
{ "extfieldsofstudy": [ "Geology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://downloads.hindawi.com/journals/ijge/2009/621528.pdf", "pdf_hash": "10fab463a768e6ad63bf968e3094a77c4c609b5b", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42306", "s2fieldsofstudy": [ "Geology", "Physics" ], "sha1": "10fab463a768e6ad63bf968e3094a77c4c609b5b", "year": 2009 }
pes2o/s2orc
The Edgerton Structure : A Possible Meteorite Impact Feature in Eastern Kansas Recognized meteorite impact features are relatively rare in the U.S. Midcontinent region, but recently with increased interest and research, the number has increased dramatically. We add another possibility to the growing list, the Edgerton structure in northwestern Miami County, Kansas. The feature is elliptical (∼5.5 × 6.5 km, slightly elongated east-west) with radial surface drainage. The feature was first observed on hillshade maps of digitized topography of 7.5 minute quadrangles. Subsequent magnetic profiles show a higher magnetic value in the center of the ellipse with higher values around the edges; this shape is characteristic of an impact feature. Depth to the anomalous body is estimated to be about 1 km, which puts it in the Precambrian crystalline basement under a cover of Paleozoic sediments. There are no deep boreholes in the vicinity and no seismic profiles are available. If it is an impact structure, it will be the second such feature documented in Kansas, the first being the Brenham meteorite crater at Haviland in Kiowa County in southwestern Kansas. It would be older than the other impact structures identified in the Midcontinent—Manson in Iowa, Ames in Oklahoma, Haswell Hole in Colorado, and possibly Belton in Missouri and Merna in Nebraska. There are at least two other prospective impact features in Kansas: the Goddard ring west of Wichita and Garden City ellipse north-west of Garden City. Introduction Circular, oval, or elliptical features, observed on airphotos, remote-sensed images, topographic quadrangle maps, hillshade maps, or on the ground in the field, may be the result of several independent or combination of factors.The most obvious and spectacular features are sinkholes that form relatively quickly and "buffalo wallows," shallow depressions present in large numbers on the High Plains.More recently, some circular features have been attributed to meteorite impact. There are at least five causes of physiographic circular features: (1) erosion widens a valley and then closes the open end giving an illusion of a circular feature, for example, Cheyenne Bottoms in Barton County, (2) solution of material and collapse of overlying material to form a sinkhole, for example, Old Maids Pool in Wallace County, (3) meteorite impact feature, for example, Haviland crater (the term crater is used here in the sense of a meteorite impact feature and does not necessarily imply a surface depression) in Kiowa County, (4) buffalo wallows-formed by animals and enlarged by wind, with many examples on the High Plains [1], (5) dish-shaped structure with compaction of overlying sediments. We are interested here only in the meteorite impact features. There have been speculations for years of possible meteorite impact features in Kansas.In fact, Big Basin and International Journal of Geophysics Little Basin in Clark County in the southwestern part of the state have been cited as examples (e.g., [2]).Several impact features have been identified and verified in the U.S. 
Midcontinent, for example, Manson in Iowa (Cretaceous; [3,4]), Ames in Oklahoma (Cambrian; [5]), Haswell Hole in Colorado (Precambrian; [6,7]), and possibly Belton Ring-Fault Complex in Missouri (post-Pennsylvanian; [8]) and Merna in Nebraska (Recent; [9]).So why not in Kansas?Although many meteorites have been found and catalogued in the state, only one meteorite feature has been identified and recorded, the Brenham crater near Haviland in Kiowa County in southwestern Kansas [10]. It is known that some drainage patterns reflect features at depths to the Precambrian basement.Long, straight river valleys follow fractures/faults; for example, segments of the Neosho River Valley and the zigzag course of the Arkansas River are good examples [11].Therefore, circular and radial drainage patterns on the surface may reflect circular features in the subsurface in Kansas [12].We suggest that there are several intriguing such features in Kansas recognizable on topographic, hillshaded, and other maps that with detailed study may prove to be impact features (Figure 1; [13]). We have named the feature described here Edgerton (for the small town nearby) in northwestern Miami County in eastcentral Kansas (Figure 1).Edgerton was recognized by its circular nature and surface drainage pattern.A followup magnetic profile is highly suggestive of a circular feature with a rim and central high, which could indicate that it is an impact feature.Unfortunately, no subsurface data or seismic profiles are located close enough to confirm our preliminary identification. Local Geology Surface rocks are the lower part of the clastic Douglas Group and alternating limestone and shale units of the Lansing/Kansas City groups (upper Pennsylvanian) thinly covered by Recent and Pleistocene deposits.Surface geology of Miami County was mapped and described by Newell in 1935 [14] and for Franklin County by Ball et al. in 1963 [15].Neither study reports anything out of the ordinary in the area of the Edgerton feature.Overlying sedimentary rocks are relatively flat and have relatively low magnetic susceptibility [16]. Two wells drilled in the area in 1944 and 1965, neither reaching the Precambrian surface, reported a normal stratigraphic section for this part of the state.The Pennsylvanian section is underlain by Mississippian limestones and Kinderhook Shale, Viola Limestone and Simpson Group (Ordovician), and Cambro-Ordovician Arbuckle Group carbonates.The total section to the Arbuckle is about 480 m and the thickness from top of Arbuckle to Precambrian should be about 240 m [11]; total sedimentary section, then, is on the order of about 700 m to 800 m. The interpretation of the Precambrian basement by [17] is that the basement in this part of the state is composed mainly of granite (1.6 Ga) with younger intrusive granite bodies (1.35 Ga). Edgerton Ring Surface Expression It was the topographic ring or elliptical shape on the hillshade map near Edgerton that attracted attention.The slightly elongated east-west feature is approximately 5.5 × 6.5 km.The features are emphasized by radial drainage with Rock Creek and Bull Creek forming the southern and eastern boundaries (Figure 2). 
Geophysical Magnetic Survey In December of 2004, surface east-west and north-south magnetic profiles, located according to the accessibility of county roads, were made across the feature (Figure 2). The total magnetic field was recorded with a Geometrics G858 cesium magnetometer. Data were acquired along a 12.8-km-long east-west line (longitude −95.0281 degrees) at latitude 38.7229 degrees, from longitude −95.0914 to −94.9441 degrees, and along a 12.8-km south-north line near the center of the east-west line, from latitude 38.65 to 38.7821 degrees. A second instrument, a Geometrics G856 proton magnetometer, was used to measure the diurnal changes of the Earth's field every 10 minutes at a fixed station at the center of the two lines. The maximum diurnal change during the survey period was about 20 nT. The normal geomagnetic field in Miami County is 53,505 nT. The field measurements were corrected for diurnal variation. Magnetic spikes in the field measurements resulting from a highway overpass, a railroad, and utility lines were removed and replaced by the normal Earth field value (53,505 nT). Small spikes resulting from cultural noise were removed by wavelet analysis [18]. A regional magnetic field was removed as a linear trend, and the resulting residual anomalies are shown in Figure 3. It is interesting to note that there are almost identical anomaly highs (40 nT) at positions −94.9649 and −95.0593 degrees and a weak high at the center of the line (about 6 km along the profile). If we use the horizontal cylinder formula to estimate the depth to an anomalous body [19], a maximum depth of about 1 km is computed for the locations −94.9649 and −95.0593 degrees. These locations suggest the edges of the feature. To verify the ground survey results, an east-west line of an aeromagnetic survey (750 m above sea level, about 450 m above the ground surface) along a latitude of about N 38.7382 degrees was processed (from [20]). The location of this line is approximately 1.6 km north of the ground east-west line. Figure 4(a) shows the aeromagnetic data with a straight line representing the regional magnetic field. After removing this regional magnetic field, a residual magnetic anomaly was obtained (Figure 4(b)). The shape of this residual anomaly mimics the ground survey results (Figure 3). The magnetic field surrounding Edgerton is fairly consistent, with no other anomalous pattern (e.g., oscillations) evident from available data [21]. The same data-processing procedure was applied to data acquired along the south-north line. The main shape of the residual field (Figure 5) is similar to the residual anomalies of the west-east line (Figure 3). There are two anomaly highs of 12 nT at 38.6911 and 38.7446 degrees and a weak high at about the center of the line (38.7138 degrees). With the horizontal cylinder formula [19], the estimated maximum depths to the anomalous bodies are about 1 km at latitude 38.7138 degrees and 1.6 km at about 9.2 km along the line, respectively. Geologic Interpretation The geological and geophysical evidence suggests the feature is at or near the buried Precambrian surface. From the size of the surface topographic expression and the amount of sediment overburden, it is suggested that the relief on the Precambrian surface is on the order of 90 m [12]. A diagrammatic geologic interpretation based on the magnetic survey is given in Figure 6.
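A rough sketch of the profile processing described above: remove a linear regional trend from a total-field profile, then estimate source depth from the residual anomaly. The exact "horizontal cylinder formula" of Ref. [19] is not given in the text, so a generic half-width-at-half-maximum rule with an adjustable proportionality factor is assumed here; the synthetic profile is a placeholder, not the survey data.

# Detrend a magnetic profile and estimate source depth from the residual's half-width.
import numpy as np

def residual_anomaly(distance_km, total_field_nT):
    """Subtract a least-squares linear regional trend from the measured profile."""
    coeffs = np.polyfit(distance_km, total_field_nT, deg=1)
    return total_field_nT - np.polyval(coeffs, distance_km)

def halfwidth_depth(distance_km, residual_nT, depth_factor=1.0):
    """Depth estimate proportional to the anomaly's half-width at half its maximum.
    depth_factor is an assumption standing in for the body-specific constant."""
    peak = residual_nT.max()
    above = distance_km[residual_nT >= peak / 2.0]
    half_width = (above.max() - above.min()) / 2.0
    return depth_factor * half_width  # in km

# Synthetic example profile (placeholder numbers, not the Edgerton data)
x = np.linspace(0.0, 12.8, 257)
field = 53505 + 0.5 * x + 40.0 * np.exp(-((x - 6.4) / 1.2) ** 2)
res = residual_anomaly(x, field)
print(f"Estimated depth ~ {halfwidth_depth(x, res):.2f} km")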
The geophysical signature of Edgerton is similar to those of Haswell Hole and Ames, a presumed and a known meteorite impact feature, respectively (Table 1). The size and shape of Merna and Big Basin, which are depressions, suggest that they are solution features. Manson, Ames, and Haviland are in a class by themselves as known meteorite impact features. Edgerton and Belton, both unproven impact features, more closely resemble impact features than solution features in their topographic expression. Haswell Hole and Edgerton are both domed and have radial drainage, but many features in eastern Kansas have this shape and are not thought to be impact features. For example, the Big Springs anomaly in northwestern Douglas County is attributed to a Precambrian granite intrusive, as is the Beagle anomaly in southwestern Miami County [12]. Both of these anomalies have magnetic highs. There are a few known intrusive igneous plugs located in Riley and Woodson counties. These small features have been investigated in detail, and their surface expression, size, and age are not similar to Edgerton [11]. Summary There are two possible scenarios that can be proposed on the limited amount of data: (1) an impact on the Precambrian surface could have created the topographic relief on the surface, or (2) the impact feature could have been beveled and an intrusive body could have been emplaced in the crust weakened by the impact, causing the relief. At this time there is no way to tell which scenario is correct, although we favor the second one (2). Unfortunately, no boreholes have been drilled in the Edgerton feature and no seismic profiles are available across it. So, an interpretation of whether it is an impact feature will have to await further data.
Figure 1: Suggested or suspected impact features in Kansas.
Figure 2: Surface topographic expression (circle with dashed line) and location of magnetic profiles (two solid lines).
Figure 4: (a) Aeromagnetic data along a latitude of about N 38.7382 degrees with linear regional trend; (b) aeromagnetic anomaly with the linear regional field removed.
Figure 5: Ground magnetic anomalies along the south-north line after diurnal correction, spike removal, and linear trend removal.
Figure 6: Diagrammatic reconstructed east-west geologic cross section of the Edgerton feature; "ma" is the unit of million years.
Table 1: Comparative data on features discussed here.
v3-fos-license
2019-04-23T13:21:35.621Z
2018-12-20T00:00:00.000
128024172
{ "extfieldsofstudy": [ "Computer Science" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://aip.scitation.org/doi/pdf/10.1063/1.5054849", "pdf_hash": "ce7671777f53685ebef1e510f53d35537b117b23", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42307", "s2fieldsofstudy": [ "Mathematics" ], "sha1": "dc11ca340fa1d71bc97f793cd7b981a57fbe3796", "year": 2018 }
pes2o/s2orc
Direct access to the moments of scattering distributions in x-ray imaging The scattering signal obtained by phase-sensitive x-ray imaging methods provides complementary information about the sample on a scale smaller than the utilised pixels, which offers the potential for dose reduction by increasing pixel sizes. Deconvolution-based data analysis provides multiple scattering contrasts but suffers from time consuming data processing. Here, we propose a moment-based analysis that provides equivalent scattering contrasts while speeding up data analysis by almost three orders of magnitude. The availability of rapid data processing will be essential for applications that require instantaneous results such as medical diagnostics, production monitoring, and security screening. Further, we experimentally demonstrate that the additional scattering information provided by the moments with an order higher than two can be retrieved without increasing exposure time or dose. https://doi.org/10.1063/1.5054849 In the context of phase-sensitive x-ray imaging techniques, scattering refers to the contrast channel arising from sample inhomogeneities that are smaller than the utilised pixels. The utilisation of such sub-pixel signals allows for increasing the pixel size while maintaining the signal and simultaneously decreasing dose and/or scan times significantly. The sensitivity towards sub-pixel information has been established for different x-ray imaging methods, such as analyser-based imaging (ABI), 1,2 grating interferometry (GI), 3-5 speckle-based imaging, 6-8 and edge-illumination (EI). 9,10 The potential of x-ray scattering is investigated for mammography, 10-12 bone structure determination, 13 and the diagnosis of several pulmonary diseases in both small [14][15][16] and large animals. 17 Commonly used data analysis procedures provide a single contrast related to sub-pixel information, which is called the dark-field 3 or the scattering signal. 10 An alternative deconvolution-based approach that provides multiple and complementary scattering contrasts was originally developed for GI 18 and extended to tomography 19 and recently translated to EI. 20 In some applications, it was shown that deconvolution can provide a higher contrast to noise ratio and improved dose efficiency. 21,22
It was also demonstrated that the complementary contrasts can be exploited for quantitative imaging 23 without the need for additional scans required by other approaches. 5,24,25 While the deconvolution-based analysis is suitable for ABI, GI, and EI, the approach proposed below is not directly applicable to GI due to the sinusoidal nature of the provided signal. Thus, we will introduce the approach for EI and note that all results are directly applicable to ABI. EI is a non-interferometric, phase-sensitive x-ray imaging technique that uses a pair of apertured masks (Fig. 1). The pre-sample mask confines the incident x-rays into smaller beamlets, which are broadened by the sample due to scattering. The broadening is transformed into a detectable intensity variation by the detector mask, which features apertures covering most of the detector pixels. The comparably large structure sizes of the optical elements (typically tens of microns) allow for simple mask fabrication 23 and render EI robust against vibrations and thermal variations. EI is readily compatible with laboratory-based x-ray tubes due to the achromaticity of the optical elements, and the entire x-ray spectrum contributes to the signal. 26,27 Accessing multiple scattering contrasts by deconvolution is based on the following approach. Scanning the pre-sample mask laterally by a fraction of its period provides a Gaussian-like intensity curve in each detector pixel. Repeating the scan with and without the sample yields the signals s(α) and f(α), respectively. Here, the scattering angle α is defined in a plane perpendicular to the line apertures of the utilised mask (Fig. 1). Scattering in the orthogonal direction does not change the detectable signal and, thus, can be omitted for the rest of the discussion. The angularly resolved scattering distribution g(α), which represents the sample's scattering signal within one pixel, is then implicitly defined by 9,10,20,28,29

s(α) = f(α) ⊛ g(α),   (1)

where ⊛ denotes the convolution operator. The scattering distribution g(α) can be accessed from experimental data by deconvolving s(α) with f(α), and iterative Lucy-Richardson deconvolution 30,31 has been established as a reliable method. 20,21 The kth iteration step of the deconvolution is performed by computing

g_{k+1}(α) = g_k(α) · [ f̃(α) ⊛ ( s(α) / ( f(α) ⊛ g_k(α) ) ) ],   (2)

where f̃ denotes f mirrored at the origin. Usually, the sample signal is chosen as the starting value: g_0 = s. The iteration implicitly enforces a non-negativity constraint and is guaranteed to converge to the maximum likelihood solution if the experimental noise is given by Poisson statistics, which is commonly the case in x-ray imaging. 32,33 In order to retrieve multiple contrasts relating to the shape of g, a moment analysis can be applied to the scattering distributions. Depending on normalisation and centralisation, different definitions of the moments need to be distinguished. The un-normalised, un-centralised moments of an arbitrary function h(α) are given by

M_n(h) = ∫ α^n h(α) dα,   (3)

where n is an integer denoting the order of the moment. Dividing by M_0 yields the normalised, un-centralised moments

M̄_n(h) = M_n(h) / M_0(h)   for n ≥ 1,   (4)

and shifting by M̄_1 leads to the normalised, centralised moments

M̃_n(h) = ∫ (α − M̄_1(h))^n h(α) dα / M_0(h)   for n ≥ 2.   (5)

It has been experimentally demonstrated that M_0(g) corresponds to absorption, M̄_1(g) to the differential phase signal, and M̃_2(g) to the scattering strength. 20 The relation of these moments to sample properties is provided in Ref. 23.
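A minimal numpy sketch of the Lucy-Richardson iteration in Eq. (2), assuming s and f are discretised on the same angular grid; this is an illustration of the scheme, not the authors' implementation.

# Lucy-Richardson deconvolution for 1D scattering curves, following Eq. (2).
import numpy as np

def lucy_richardson(s, f, iterations=1000, eps=1e-12):
    """Iteratively estimate the scattering distribution g from s = f (*) g."""
    f = f / f.sum()                      # normalise the flat-field curve
    f_mirrored = f[::-1]                 # f mirrored at the origin
    g = s.astype(float)                  # g_0 = s is the usual starting value
    for _ in range(iterations):
        blurred = np.convolve(g, f, mode="same")
        ratio = s / np.maximum(blurred, eps)
        g = g * np.convolve(ratio, f_mirrored, mode="same")
    return g

# Synthetic example: a narrow Gaussian g broadening a flat-field curve f.
alpha = np.linspace(-5, 5, 101)
f = np.exp(-alpha**2 / (2 * 1.0**2))
g_true = np.exp(-alpha**2 / (2 * 0.3**2))
s = np.convolve(g_true, f / f.sum(), mode="same")
g_est = lucy_richardson(s, f, iterations=200)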
Given typical noise levels in experiments, about 1000 iteration steps are required to ensure convergence of the Lucy-Richardson deconvolution, which may lead to cumbersome data processing times. For example, data processing of the dragon fly in Ref. 20 took around 1 h for a 400 × 300 pixel field of view on a standard desktop PC. This renders iterative deconvolution unsuitable for time sensitive applications. Therefore, we propose an alternative data analysis approach that uses the known moments of convolutions, 34 the derivation of which is briefly sketched in the following. The moments defined in Eq. (3) can be expressed through derivatives of the Fourier transform,

M_n(h) = i^n [ d^n ĥ(q) / dq^n ]_{q=0},   (6)

where the symbol ^ denotes the Fourier transform, q the variable in Fourier space, and the convention ĥ(q) = ∫ h(α) e^{−iqα} dα is used. Since s is given by a convolution, its Fourier transform corresponds to a product,

ŝ(q) = f̂(q) ĝ(q).   (7)

Inserting Eq. (7) into Eq. (6) and dividing by M_0 leads to

M̄_n(s) = Σ_{k=0}^{n} C(n,k) M̄_k(f) M̄_{n−k}(g),   (8)

with the binomial coefficient C(n,k). Similar equations hold true for the normalised, centralised moments M̃_n, 34 which can be solved for the moments of g. The result for the first five moments is

M_0(g) = M_0(s) / M_0(f),   (9)
M̄_1(g) = M̄_1(s) − M̄_1(f),   (10)
M̃_2(g) = M̃_2(s) − M̃_2(f),   (11)
M̃_3(g) = M̃_3(s) − M̃_3(f),   (12)
M̃_4(g) = M̃_4(s) − M̃_4(f) − 6 M̃_2(f) M̃_2(g).   (13)

First-moment terms do not appear in the equations with n > 1 because M̃_1 = 0. For the scattering width M̃_2, the above equation is in agreement with published results. 36 Since the moments of s and f can be directly calculated from experimental data, Eqs. (9)-(13) provide direct access to the moments of the scattering distribution g without the need for time consuming iterative deconvolution. In order to experimentally compare the results of deconvolution and direct moment analysis, we used an EI-based imaging system at University College London. A Rigaku MM007 rotating anode with a Mo target was used as an x-ray source and operated at a 25 mA current and a 40 kVp voltage. The pre-sample mask consisted of a series of Au lines on a graphite substrate with a pitch of 79 µm and an opening of 10 µm, while the detector mask had a pitch of 98 µm and an opening of 17 µm. Both masks were manufactured by Creatv Microtech (Potomac, MD). The x-ray detector was a Hamamatsu C9732DK flat panel sensor featuring a binned pixel size of 100 µm. The sample to detector distance was 0.32 m, and the total setup length was 2 m. The sample was a dragon fly, which was known to provide a sufficient signal for the first five moments. The sample mask was scanned over one pitch with 32 steps and an exposure time of 25 s per step. The same dataset was used for the deconvolution [Eq. (2)] and moment analysis [Eqs. (9)-(13)]. The resulting scattering contrasts (Fig. 2) show an excellent visual agreement between the two approaches, while data processing for direct moment analysis was about 600 times faster than for deconvolution. Furthermore, direct moment analysis eliminates the number of iteration steps as a necessary parameter of deconvolution. Table I presents a performance comparison of deconvolution and moment analysis. The high degree of visual agreement between the approaches is confirmed by the correlation factors (≥0.9 for all contrasts). Columns 3 and 4 compare the standard deviation of the signals in a 50 × 50 pixel background area as a measure of the noise level in the two analysis approaches. With the exception of M̃_2 (details discussed below), both approaches deliver similar noise levels.
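The direct moment analysis of Eqs. (3)-(5) and (9)-(13) reduces to a few numerical integrals; the sketch below assumes s and f are measured intensity curves sampled at angles alpha on a common grid, and is illustrative rather than the authors' code.

# Recover the moments of the scattering distribution g directly from s and f.
import numpy as np

def moments(curve, alpha, n_max=4):
    """Return M0, the normalised first moment, and centralised moments up to n_max."""
    m0 = np.trapz(curve, alpha)
    mean = np.trapz(alpha * curve, alpha) / m0            # normalised 1st moment
    central = {n: np.trapz((alpha - mean) ** n * curve, alpha) / m0
               for n in range(2, n_max + 1)}
    return m0, mean, central

def scattering_moments(s, f, alpha):
    """Apply Eqs. (9)-(13) to the sample (s) and flat-field (f) curves."""
    m0_s, mean_s, c_s = moments(s, alpha)
    m0_f, mean_f, c_f = moments(f, alpha)
    g = {
        "M0": m0_s / m0_f,            # Eq. (9): transmission/absorption
        "M1": mean_s - mean_f,        # Eq. (10): differential phase (refraction)
        "M2": c_s[2] - c_f[2],        # Eq. (11): scattering width
        "M3": c_s[3] - c_f[3],        # Eq. (12)
    }
    g["M4"] = c_s[4] - c_f[4] - 6.0 * c_f[2] * g["M2"]    # Eq. (13)
    return g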
For the 2nd moment, the scatter plot of values retrieved by deconvolution and moment analysis (Fig. 3) reveals a discrepancy for scattering strengths that are small compared to the width of the flat-field scan (M̃_2(g) ≲ 0.05 × M̃_2(f)). In this case, the deconvolution approach [Eq. (2)] does not retrieve the correct δ-shaped signal for g due to the presence of noise, 22 but will retrieve a signal with M̃_2(g) > 0. The moment analysis, on the other hand, is not subject to such a restriction. The difference in bias between the two approaches is also reflected in the mean of the background areas, which is M̃_2 = 2.1 × 10^−11 rad² for deconvolution and M̃_2 = 2.8 × 10^−12 rad², and thus an order of magnitude smaller, for direct moment analysis. However, Table I shows that deconvolution provides M̃_2 values with a smaller standard deviation than moment analysis in the background area. Nevertheless, moment analysis would be the preferred option for quantitative data analysis. For large scattering values (M̃_2(g) ≥ 0.1 × M̃_2(f)), the two approaches deliver the same sensitivity (bracketed entries for M̃_2 in Table I). Finally, we investigated the influence of the acquired number of sample points on the functions s(α) and f(α) (i.e., the number of images per scan). Since at least n + 1 scan points are required for the linear independence of the nth moment, increasing the number of scan points increases the amount of accessible and complementary scattering information. To this end, we acquired an additional dataset, in which we varied the number of scan points from 5 to 11 while keeping the total exposure time constant (200 s). We used the standard deviation of the different scattering contrasts in a background area retrieved by direct moment analysis to quantify the dependency. As can be seen in Fig. 4, the noise levels vary within a small 15% interval, which implies that the sensitivity of the different contrasts does not change significantly with the number of scan points. In essence, this means that moment analysis provides the additional scattering contrasts (i.e., moments with order higher than 2) without the need to increase total exposure time or dose. In conclusion, we have established direct moment analysis as an alternative approach for retrieving multiple scattering contrasts for EI. We also suggest that this approach can be readily extended to ABI. Direct moment analysis delivers results equivalent to the previously utilised deconvolution, while speeding up data processing by almost three orders of magnitude and providing unbiased values for small or absent scattering signals. Furthermore, we have experimentally demonstrated that increasing the number of scan points while keeping total exposure time and dose constant provides additional scattering information without losing sensitivity. Fast data processing that provides reliable scattering contrasts will be crucial for applications demanding rapid feedback, such as medical diagnostics, production monitoring, and security screening.
v3-fos-license
2018-04-24T23:29:16.889Z
2018-04-24T00:00:00.000
5074571
{ "extfieldsofstudy": [ "Computer Science", "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://bmcsystbiol.biomedcentral.com/track/pdf/10.1186/s12918-018-0566-x", "pdf_hash": "2dd3d28383c62cb23860e3f45cabf09d458833aa", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42308", "s2fieldsofstudy": [ "Biology", "Medicine" ], "sha1": "2dd3d28383c62cb23860e3f45cabf09d458833aa", "year": 2018 }
pes2o/s2orc
KF-finder: identification of key factors from host-microbial networks in cervical cancer Background The human body is colonized by a vast number of microbes. Microbiota can benefit many normal life processes, but can also cause many diseases by interfering with normal metabolism and the immune system. Recent studies have demonstrated that the microbial community is closely associated with various types of cell carcinoma. The search for key factors, also referred to as cancer-causing agents, can provide an important clue for understanding the regulatory mechanism of microbiota in uterine cervix cancer. Results In this paper, we investigated microbiota composition and gene expression data for 58 squamous and adenosquamous cell carcinoma samples. A host-microbial covariance network was constructed based on the 16s rRNA and gene expression data of the samples, which consists of 259 abundant microbes and 738 differentially expressed genes (DEGs). To search for risk factors in host-microbial networks, the method of bi-partite betweenness centrality (BpBC) was used to measure the risk a given node poses to a certain biological process in the host. A web-based tool, KF-finder, was developed, which can efficiently query and visualize the knowledge of microbiota and differentially expressed genes (DEGs) in the network. Conclusions Our results suggest that prevotellaceae, tissierellaceae and fusobacteriaceae are the most abundant microbes in cervical carcinoma, and that the microbial community in cervical cancer is less diverse than that of any other body site in health. A set of key risk factors (anaerococcus, hydrogenophilaceae, eubacterium, PSMB10, KCNIP1 and KRT13) has been identified, which are thought to be involved in the regulation of viral response, cell cycle and epithelial cell differentiation in cervical cancer. It can be concluded that permanent changes in microbiota composition could be a major force for chromosomal instability, which subsequently enables the effect of key risk factors in cancer. All the results described in this paper can be freely accessed from our website at http://www.nwpu-bioinformatics.com/KF-finder/. Background Cervical cancer is the second most common cancer in women [1]. Over 500,000 women worldwide die of cervical cancer each year [2]. It is known that persistent human papillomavirus (HPV) infection appears to be one of the major causes of cervical carcinoma, and most cases are associated with the high-risk types HPV-16 or HPV-18 [8]. Genome-wide association studies and subsequent meta-analyses showed that differentially expressed genes (DEGs) in cervical cancer are more likely to be located in regions of frequent chromosomal aberration [9][10][11][12]. This indicates that cancer may be strongly associated with chromosomal instability [13]. A recent study suggests that microbiota might play important roles in the development of cervical cancer [14]. There is a significant difference in microbiota diversity between non-cervical lesion (NCL) HPV-negative women and those with cervical cancer. Further, compared to the microbial communities in NCL-HPV negative women, those in cervical cancer samples have higher variation within groups. All these findings indicate that the cervical microbiota is an important clue in research on cervical cancer pathology. In order to understand how the microbial community interplays with host genes and causes cell carcinoma at the molecular level, more and more research groups are making efforts to identify key factors, also known as cancer-causing agents, which can drive the progress of cervical carcinogenesis.
Microbiota is a possible suspect causing the frequent gains and losses in chromosome. It is abundantly distributed in women cervices. They are involved in many of the host's normal life processes, but also can destroy the host's normal gene regulatory network by gene transfer, which may activate oncogene expression and lead to cancer [15]. Therefore, many researchers take efforts to study how the human microbiota cause structural variation of human genomes and alter the immune system and metabolic system to support the development of cervical pathogenesis [16]. Permanent changes of microbiota may be a major cause of chromosomal instability, subsequently discharge the tumor suppressor gene retinoblastoma (RB) and tumor protein TP53. Some association measures can be used to build a covariance network for microbes and host genes [17]. Host-microbial networks provide a systematic way to study the regulation system between microbiota and host genes [18]. However, the role of host response to the change of microbiome in cervical cancer is still unknown. And there are only a few public tools specifically designed for analyzing hostmicrobial networks [19][20][21]. Therefore, there is a pressing demand to develop fast and efficient computational tools to examine how microbiota regulate the gene expression, chromosomal instability and cell carcinoma. As a remedy for these limitations, we proposed a new computational framework to identify the key risk factors using 16s rRNA and gene expression data of 58 squamous and adenosquamous cell carcinoma in uterine cervix. A series of meta-analyses was performed, which include error correction, spearman rank correlation, differential expression analysis, and bi-partite betweenness centrality. A web-based tool KF-finder was developed, which can provide users a fast-and-easy way to query and visualize the knowledge of microbiota and genes in cervical cancer. Further, a set of novel risk factors were identified that may give helpful suggestions for these researchers focusing on drug design and pharmacology. Methods In order to investigate gene expression and microbiome composition in cervical cancer, we collected 133 squamous and adenosquamous cell carcinoma samples, 58 out of which were used for microbial DNA library preparation. The 16s rRNA sequencing was performed using Illumina MiSeq. Human gene expression was quantified using WG-6 BeadArray. OTU assignment Each 16s sequence was assigned to an operational taxonomic unit (OTU). To count the reads number for each OTU (microbe), 16s sequences obtained from MiSeq were aligned to the reference Greengene OTU builds. The Qiime script assigne_taxonomy.py (see more at http://qiime.org/scripts/assign_taxonomy.html) was performed in the data processing. Reference sequences are pre-assigned with OTU described in the id_to_taxonomy file. Any sequence alignment tools, such as uclust, Sort-MeRNA, blast, RDP, Mothur etc, can be called by the assign_taxonomy script for the sequence alignment between the 16s sequences and reference sequences. For example, the script will assign taxonomy with the uclust consensus taxonomy assigner by default using the following command, assign_taxonomy.py -i repr_set_seqs.fasta -r ref_seq_set.fna -t id_to_taxonomy.txt. OTU redundancy matrix was normalized from the sequence number of each sample. Since these less abundant microbes are unlikely to be a destroying force for host immune system, we selected the top-259 most abundant OTUs for further studying. 
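A minimal sketch of the OTU-table preparation described above: counts are normalised by the per-sample sequencing depth and the most abundant OTUs are retained (N = 259 in the study). The small count table here is a placeholder.

# Normalise an OTU count table per sample and keep the top-N most abundant OTUs.
import pandas as pd

counts = pd.DataFrame(
    {"S1": [120, 30, 5, 0], "S2": [200, 10, 15, 2], "S3": [90, 60, 1, 4]},
    index=["prevotellaceae_OTU1", "tissierellaceae_OTU2",
           "fusobacteriaceae_OTU3", "rare_OTU4"],
)

def top_abundant_otus(count_table: pd.DataFrame, n_top: int) -> pd.DataFrame:
    rel = count_table.div(count_table.sum(axis=0), axis=1)   # relative abundance per sample
    ranked = rel.mean(axis=1).sort_values(ascending=False)   # rank OTUs by mean abundance
    return rel.loc[ranked.index[:n_top]]

abundant = top_abundant_otus(counts, n_top=3)   # n_top=259 in the study
print(abundant.round(3))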
Comparison with the controls To study the remarkable differences of microbiota between cancer cases and the controls, we compared our 16s raw data to those from 300 healthy human subjects released by the Human Microbiome Project (HMP) [22] (http://www.hmpdacc.org). To find a map between OTUs from our data and OTUs from the healthy data, the commonly used alignment tool blastn was used to compare their representative sequences. Pairs with e-value < 1e-5 and pident > 80% were used for establishing the map. OTUs matched with the same OTU in HMP were collapsed into one OTU. The Qiime scripts were used to analyze the 16s raw data [23]. Calculation of correlation Abundant microbes and DEGs were selected for reconstructing host-microbial networks. DEGs in cervical cancer were collected from published data [9], which were verified in five cohorts of tumor and normal samples. Hence, the DEGs are more reliable than those obtained from only one cohort. The spearman rank correlation method was employed to calculate the correlation between each pair of nodes. Note that the gene expression data and 16s rRNA were measured on the same samples; therefore, the spearman correlation in the network is meaningful. In contrast to the pearson correlation, the spearman correlation coefficient can efficiently avoid the environmental noise and experimental errors caused by non-uniform samples. Error correction To improve the confidence of the host-microbial network calculated by spearman correlation, we removed edges that are unlikely to be true (false positive errors) and added new edges whose endpoints are very likely to correlate with each other (false negative errors). The false positive edges include four cases: 1) negatively correlated edges that connect two interactors with the same type of regulation (i.e. both of them are up-regulated or both down-regulated); 2) positively correlated edges that connect two interactors with different types of regulation (i.e. one is up-regulated, the other down-regulated); 3) self-loops; 4) multiple (duplicate) edges. All these false positive edges are removed from our network. The false negative edges are those pairs of an OTU and a DEG which satisfy two conditions: 1) the OTU was collapsed from a set of sub-nodes; 2) all these sub-nodes strongly correlate with the DEG. All these false negative edges were added to the host-microbial network. False positive and false negative edges were detected and corrected according to the coherence of regulation and correlation relationships. A workflow of the reconstruction of the host-microbial network is illustrated in Fig. 1. Bi-partite betweenness centrality To search for risk factors from the host-microbial network, bipartite betweenness centrality (BpBC) [24], adapted from betweenness centrality, was used to quantify the risk of a given node, written as g(v). The definition can be formulated as g(v) = Σ_{s∈S, t∈T} δ_st(v) / δ_st, where s and t are two nodes from two separate sub-networks S and T, δ_st represents the number of shortest paths from s to t, and δ_st(v) the number of shortest paths going through node v from s to t. Given a node v, g(v) reflects the probability that a shortest path from one sub-network to the other goes through v. Composition of the microbiota To study the microbial community in cervical cancer, we examined the 16s raw data of the cancer cases and assigned taxonomy to each sequence. The definition of operational taxonomic unit (OTU) was used to classify groups of closely related microbes based on sequence similarity.
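A minimal sketch of the correlation and error-filtering steps described in the Calculation of correlation and Error correction subsections above is given below. The thresholds |ρ| > 0.4 and p < 0.05 are those quoted later for the network reconstruction; the DataFrame layout (features as rows, the same samples as columns) and all variable names are illustrative assumptions.

```python
# Sketch: candidate OTU-gene edges from Spearman rank correlation.
# otu_abund and gene_expr are assumed to share the same sample columns.
import pandas as pd
from scipy.stats import spearmanr

def correlation_edges(otu_abund: pd.DataFrame, gene_expr: pd.DataFrame,
                      rho_cut: float = 0.4, p_cut: float = 0.05):
    edges = []
    for otu, otu_profile in otu_abund.iterrows():
        for gene, expr_profile in gene_expr.iterrows():
            rho, p = spearmanr(otu_profile.values, expr_profile.values)
            if abs(rho) > rho_cut and p < p_cut:
                edges.append((otu, gene, rho))
    return edges
```

A second pass over these candidate edges would then apply the error-correction rules above, for example discarding edges whose correlation sign contradicts the regulation direction of the two endpoints.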
Reference data sets and id-to-OTU maps for 16s rRNA sequences were downloaded from the Greengenes reference OTU builds [25]. All these sequences were grouped into different categories based on their family-level OTU labels. As shown in Fig. 2, prevotellaceae followed by tissierellaceae appear to be the most abundant microbes, accounting for 13.7% of the microbiota community. (Fig. 1 caption: A workflow of the reconstruction of the host-microbial network. Through the comparison between 16s rRNA and HMP data, each sequence was mapped to an operational taxonomic unit (OTU). Error correction was performed for the false positive and false negative nodes, which were detected according to the coherence of regulation and correlation. Fig. 2 caption: The microbial community in cervical carcinoma. Each 16s rRNA sequence was assigned to an operational taxonomic unit (OTU), and all these sequences were grouped into different categories based on their family-level OTU labels.) There are four other groups accounting for more than 5% of the microbiota, which are fusobacteriaceae, porphyromonadaceae, planococcaceae and bacteroidaceae. In total, twenty-six family-level OTU groups make up more than 87% of the whole community. To examine the diversity of the cervical microbiota, a PCoA analysis was performed to analyze the microbial community in cervical carcinoma, skin, mouth and vagina. As shown in Fig. 3, microbiota in cervical carcinoma (red dots) is less diverse than microbiota in any other body site. Hence, we indeed found remarkable changes of microbial composition in the cancer cases. Reconstruction of host-microbial network A host-microbial network was reconstructed from the 16s raw data and gene expression data. Nodes in the network refer to microbes or DEGs, and edges to the regulation relationships between pairs of nodes. Two nodes were connected if and only if they are strongly correlated (i.e. |γ| > 0.4 and p-value < 0.05). As shown in Fig. 4, a network with 997 nodes was connected by 4262 edges. Nodes in the network consist of 259 microbes and 738 DEGs. We grouped all the DEGs into four categories, named cell cycle, antiviral response, epithelial cell differentiation and other DEGs, according to their function in the development of cervical cancer. The three functional DEG groups (excluding the other DEGs) form three major densely connected sub-networks in the host-microbial network. They are functionally enriched by the GO terms cell cycle, response to virus, and epithelial cell differentiation, respectively. They do not have any overlap between each pair of groups. In the whole network, 403 edges are negatively correlated, 3859 positively correlated. Negative correlation indicates inhibition between two biological subjects. In a negative correlation, one variable increases as the other decreases. Positive correlation indicates activation or co-existence between two subjects of interest. In a positive correlation, one variable increases as the other increases, or one decreases while the other decreases. This network integrates all the regulation relationships between host genes and microbiota. Risk factors in cervical cancer The risk factors in cancer may activate oncogene expression and cause a series of functional disorders in the metabolic and immune systems. In the development of cancer, the most remarkable differences between tumor and normal samples are: 1) the up-regulation of viral responses; 2) the speed-up in the progression of the cell cycle; 3) the inhibition of epithelial cell differentiation.
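A minimal sketch of the bi-partite betweenness centrality defined in the Methods is given below, using the networkx function that restricts betweenness to source-target pairs drawn from two different groups. The node attribute name "kind" and the group labels are assumptions for illustration, not the paper's actual data structures.

```python
# Sketch: BpBC over the host-microbial network with networkx.
# G is an undirected graph whose nodes carry a 'kind' attribute, e.g.
# 'microbe', 'antiviral_response', 'cell_cycle', 'epithelial_diff'.
import networkx as nx

def bpbc(G: nx.Graph, group_a: str, group_b: str) -> dict:
    sources = [n for n, d in G.nodes(data=True) if d.get("kind") == group_a]
    targets = [n for n, d in G.nodes(data=True) if d.get("kind") == group_b]
    # betweenness counted only over shortest paths running from group_a to group_b
    return nx.betweenness_centrality_subset(G, sources, targets, normalized=False)

# Example: rank candidate key factors between microbes and antiviral-response genes.
# scores = bpbc(G, "microbe", "antiviral_response")
# top_candidates = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:10]
```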
To study how microbiota regulates the viral response, cell cycle and epithelial cell differentiation, we searched for key risk factors using BpBC. These key factors are thought to be cancer-causing agents that can drive the progress of cervical carcinogenesis. Nodes that organizing communication between two cancer-related groups are more likely to be key factors. Since BpBC is such a measure to evaluate the importance of a node in the network topology, we choose these nodes in the top list of BpBC as candidates of key factors. These key factors with high BpBC value may play crucial roles in the communication between two different sub-networks. The results show that Anaerococcus (labeled as OTU_97.18428) and proteasome subunit beta 10 (PSMB10) are significantly higher than the others (see in Fig. 5 left) between the sub-networks of microbe and antiviral response genes. PSMB10 was an up-regulated gene in cervical cancer. Between the sub-networks of microbe and cell cycle, KCNIP1 and Hydrogenophilaceae (labeled as OTU_97.2777) are the most important regulators (see in Fig. 5 middle). Eubacterium (labeled as OTU_97.10051) and KRT13 are the most important regulators between the sub-networks of microbes and epithelial cell differentiation (see in Fig. 5 right). It proves that the interplay between microbiota and differentially expressed genes might be the driving force that regulates the progress of cell cycle, epithelial cell differentiation and viral response. Query and visualization In order to fast and easily query and visualize the hostmicrobial networks, we developed a web-based tool KFfinder. Multiple web programming languages were used in the development, which includes PHP, mysql and javascripts. Each node and its neighborhood in the network can be searched by a query term in the panel of Search. And the induced sub-network will be visualized in the panel of View. For example, one can input a gene symbol CYP2A7 as a query term in the Search panel. A list of nodes associated with CYP2A7 will show out in a userfriendly panel, as well as a graphic view of the induced subnetwork (see in Fig. 6). Except for visualization and query, KF-finder can also sort microbes and DEGs in a decreasing order by the value of BpBC in microbe-antivirus, microbe-cell cycle or microbe-epithelial cell differentiation. Download and advanced search have been enabled on the web server. All our test datasets and results of users' personal jobs can be downloaded. Advanced search allows us search for genes and microbes based on string patterns or value constriction. KF-finder enables us to query and visualize the knowledge of host-microbial network in a fast-and-easy way. It can be accessed at http://www.nwpubioinformatics.com/KF-finder/. A case study of PSMB10 in cervical cancer Most vertebrates express immunoproteasomes (IPs) that possess three IFN-γ -inducible homologues: PSMB8, PSMB9 and PSMB10. Many studies show that expression of IP genes including PSMB10 is up-regulated in most cancer types [26]. IP genes can be expressed by non-immune cell, and that differential cleavage of transcription factors by IPs has pleiotropic effects on cell function. Indeed, IPs modulate the abundance of transcription factors that regulate signaling pathways with prominent roles in cell differentiation, inflammation and neoplastic transformation (e.g., NF-kB, IFNs, STATs and Wnt) [27]. Therefore, PSMB10 is indeed a risk factor involved in the antiviral response of cervical caner. 
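The neighborhood query behind the Search and View panels can be mimicked directly on the underlying graph. The sketch below extracts the induced sub-network around a query node such as CYP2A7 with networkx; it illustrates the idea only and is not the actual PHP/JavaScript implementation of KF-finder.

```python
# Sketch: retrieve the induced sub-network around a query node.
import networkx as nx

def induced_subnetwork(G: nx.Graph, query: str) -> nx.Graph:
    if query not in G:
        raise KeyError(f"{query} is not a node of the network")
    nodes = set(G.neighbors(query)) | {query}
    return G.subgraph(nodes).copy()

# Example usage (assuming the host-microbial network has been loaded into G):
# sub = induced_subnetwork(G, "CYP2A7")
# print(sorted(sub.nodes()), sub.number_of_edges())
```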
A case study of KRT13 in cervical cancer KRT13 encodes keratin 13 in human, also known as K13 and CK13, and is located in the region of chromosome 17q21.2. It is a down-regulated gene in cervical carcinoma, and a risk factor involved in the progress of uncontrolled epithelial cell differentiation. Previous work suggests that the loss of K13 or low K13 mRNA expression is associated with invasive oral squamous cell carcinoma (OSCC) [28,29]. Epigenetic alteration of K13 is one major reason resulting in the inhibition of K13 in OSCC. Besides, K13 was also reported to play a directive role in prostate cancer bone, brain and soft tissue metastases [30]. More than 1000 single nucleotide polymorphisms of K13 were found in the dbSNP database. In total, 51 variations in ClinVar mention K13, seven of which are pathogenic. All this evidence suggests that KRT13 is very likely to be a key risk factor involved in cervical cancer. Conclusions In this paper, we examined the microbiota composition and gene expression in 58 squamous and adenosquamous cell carcinoma samples. A host-microbial network was reconstructed from the 16s rRNA and gene expression data. The main contributions of this paper can be summarized in three aspects: (1) the microbial community in cervical carcinoma is less diverse than that of other body sites; (2) a web-based tool, KF-finder, was developed which enables users to query and visualize host-microbial networks, microbes and differentially expressed genes in a fast-and-easy way; (3) a set of key risk factors has been identified, which have been shown to be associated with cancers in several previous publications. Our results show that six OTU groups are abundantly distributed in cervical cancer samples, including prevotellaceae, tissierellaceae, fusobacteriaceae, porphyromonadaceae, planococcaceae and bacteroidaceae. Besides these six OTU groups, we found that three differentially expressed genes and three microbes may be key risk factors and play crucial roles in the pathology of cervical carcinoma. All of these results suggest that permanent changes of microbiota composition might be the key driving force in the pathology of cervical carcinoma, which results in the abnormality of epithelial cell differentiation, cell cycle and viral response.
v3-fos-license
2018-04-03T00:37:26.329Z
2014-02-18T00:00:00.000
13559281
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1111/jcmm.12251", "pdf_hash": "9624a4476d96b021e5cb077e8eda56d19e953dfb", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42312", "s2fieldsofstudy": [ "Biology" ], "sha1": "9624a4476d96b021e5cb077e8eda56d19e953dfb", "year": 2014 }
pes2o/s2orc
Valsartan ameliorates ageing-induced aorta degeneration via angiotensin II type 1 receptor-mediated ERK activity Angiotensin II (Ang II) plays important roles in ageing-related disorders through its type 1 receptor (AT1R). However, the role and underlying mechanisms of AT1R in ageing-related vascular degeneration are not well understood. In this study, 40 ageing rats were randomly divided into two groups: ageing group which received no treatment (ageing control), and valsartan group which took valsartan (selective AT1R blocker) daily for 6 months. 20 young rats were used as adult control. The aorta structure were analysed by histological staining and electron microscopy. Bcl-2/Bax expression in aorta was analysed by immunohistochemical staining, RT-PCR and Western blotting. The expressions of AT1R, AT2R and mitogen-activated protein kinases (MAPKs) were detected. Significant structural degeneration of aorta in the ageing rats was observed, and the degeneration was remarkably ameliorated by long-term administration of valsartan. With ageing, the expression of AT1R was elevated, the ratio of Bcl-2/Bax was decreased and meanwhile, an important subgroup of MAPKs, extracellular signal-regulated kinase (ERK) activity was elevated. However, these changes in ageing rats could be reversed to some extent by valsartan. In vitro experiments observed consistent results as in vivo study. Furthermore, ERK inhibitor could also acquire partial effects as valsartan without affecting AT1R expression. The results indicated that AT1R involved in the ageing-related degeneration of aorta and AT1R-mediated ERK activity was an important mechanism underlying the process. Introduction Population ageing is now a worldwide problem. In the United States, there are 35 million people of more than 65 years old. It is estimated that the number will be doubled in the year 2030 [1]. In the process of ageing, many physiological functions will be impaired or degenerate, resulting in a number of detrimental consequences for human health [2,3]. Therefore, attenuating ageing-induced damage to human health is of great significance in improving the quality of human lives in the ageing society. As is known, ageing will inevitably lead to the changes in the activity or responsiveness of hormonal systems, one of which is the angiotensin II (Ang II). Experiments in ageing animals demonstrated that selective blocking of Ang II type 1 receptor could significantly decrease the expression of senescence markers and thus retard the progression of ageing [4,5], indicating that Ang II played an important role in ageing-related pathologic processes and its type 1 receptor was an important mediator. Cardiovascular disease is a typical one of diseases that are closely related to the ageing. It has been reported that ageing is the largest risk factor for cardiovascular disease [6,7]. Meanwhile, data have demonstrated that Ang II played an important role in vascular ageing as well as in the initiation and progression of atherosclerosis [8,9]. Therefore, in the past decades, independent groups have focused on Ang II as a mediator of vascular cell dysfunction in various cardiovascular disorders [10,11]. In addition, the mechanisms that involved in the Ang II-mediated vascular injury in several cardiovascular diseases were also partially revealed, e.g. 
Ang II causes arteriolar vasoconstriction, superoxide anion production and endothelin release through its type 1 receptor (AT1R), resulting in increased vascular resistance and promoting atherosclerosis [12]. However, in ageing-related vascular injury, the role and mechanisms of Ang II were yet unclear. Valsartan is a selective Ang II receptor blocker (ARB). Over the past decades, experimental studies and clinical trials have demonstrated that valsartan seceratively blocked Ang II type 1 receptor (AT1R) and thus, it was widely used in treatment of Ang II-involved cardiovascular disorders, such as hypertension, endothelial dysfunction [12] and abdominal aortic aneurysm [13]. However, the potential effect of valsartan on normal ageing-induced vascular injury has not been investigated. Whether valsartan could attenuate the ageinginduced vascular injury and the underlying mechanisms still remains unknown. In this study, we investigated the potential role and mechanisms of Ang II in the structural and functional degeneration occurring in the normal ageing. Then, we tested the hypothesis that long-term administration of valsartan would protect vessels from ageing-induced injury in a rat model of normal ageing and furthermore, we explored the underlying mechanism involving in the process. Experimental animals Twenty young (or adult, 3-month-old) and 40 aged (18-month-old) male Wistar rats were purchased from the Department of Laboratory Animals, China Medical University. Animals were maintained at controlled temperature of 21°C and in a 12-hour day/night cycle. All the experimental procedures were approved by the Institutional Animal Care and Use Committee of China Medical University. Young or adult animals were used as control group. Aged animals were randomly divided into two groups: the ageing group and Valsartan group (n = 20 in each group). The control and the ageing animals had free access to water and standard rat chow. The valsartan group animals continually took valsartan (Novartis Pharma Stein AG; 30 mg/kg/ day) in their drinking water for 6 months. The concentration of valsartan dissolved in the drinking water was determined based on the previously established rats drinking patterns. Isolation, culture and treatment of rat aorta endothelial cells Isolation and characterization of aorta endothelial cells were performed according to the previous report with modifications [14]. Thirty minutes before the isolation of aorta, Wistar rats were intraabdominally injected with 6500 U Heparin Sodium. Then, animals were anaesthetized by injection of pentobarbital sodium. The thoracic aorta was identified and isolated. Connective tissues outside the aorta were removed. After washing in PBS, the minute arterial branch was removed and then vascular cells were isolated and cultured in DMEM (Gibco, Pascagoula, MS, USA) supplemented with 20% foetal bovine serum (Gibco). For treatment of aorta endothelial cells, the culture medium was supplemented with 10 À6 mol/l valsartan or PD 98059, a specific extracellu-lar signal-regulated kinase (ERK) inhibitor. The cells were treated for 48 hrs and then used for RT-PCR or Western blotting analysis as described below. Histological staining At the completion of the given observation periods, the rats were killed by intraperitoneally injected with an overdose of 2% sodium pentobarbital, the thoracic aorta tissues were harvested and fixed in 10% formaldehyde for 0.5 hr, then they were embedded in paraffin for preparation of paraffin-embodied sections. 
Sections were stained with Masson trichrome for the relative content of smooth muscle and collagen fibre. Morphologic and structural changes of the aorta tissue of every group were observed under a standard light microscope in 10 random fields of each section. Photographs were taken by using an automatic image processing system (MetaMorph Imaging System; Universal Imaging Corp., Downington, PA, USA). Ultrastructure of aorta endothelial cell The aortas were cut into small pieces and fixed in 2.5% glutaraldehyde in 0.2 M cacodylate buffer (pH 7.4) at 4°C for 2 hrs, then washed in PBS. The materials were incubated in a 2% OsO4 solution, dehydrated in a series of increasing ethanol concentrations and propylene oxide, and finally were immersed in Spurr resin. Ultrathin sections (50 nm) were cut on a Leica ultracut UCT ultramicrotome (Leica Microsystems Inc, LKB-II, Wetzlar, Germany), mounted on copper grids, and examined under a JEM 1200EX transmission electron microscope (Jeol, Tokyo, Japan). Immunohistochemical staining Expressions of Bcl-2 and Bax were analysed by immunohistochemical staining. Briefly, the aorta tissue samples were fixed with ice-cold 4% formalin solution in PBS for 10 min. at 4°C. After blocking with 0.2% normal goat serum for 20 min., they were incubated with monoclonal rabbit antibodies against rat Bcl-2 and Bax overnight at 4°C. After extensive washing, they were incubated with a goat anti-rabbit IgG (Invitrogen, Carlsbad, CA, USA) at room temperature for 1 hr. Bcl-2 and Bax dilutions were used according to the manufacturer's guideline. RT-PCR Total RNA from the aorta tissues or cells was extracted with the RNAprep pure Cell/Bacteria Kit (TIANGEN, Beijing, China) according to the manufacturer's guideline. The purity and yield of RNA were analysed by UV spectrophotometer (UV 300, Eppendorf, Hamburg, Germany). The value of A260/280 was 1.8-2.0, and 2 µg of total RNA was used for reverse transcription in a 20 µl reaction with Superscript II (reverse transcriptase from Gibco) according to the manufacturer's protocol. 2 µl of the reverse transcripts were used for PCR. The primers used in this study are shown in Table 1. Amplification was performed for 35 cycles at 94°C for 40 sec., 63°C for 40 sec. Western blotting The aorta tissue or cell samples were homogenized in lysis buffer A (20 mM Tris-HCl, pH 8.0, 150 mM NaCl, 1% Triton X-100, 2 mM EDTA, 1 mM phenylmethylsulfonyl fluoride, 20 µg/ml aprotinin, 10 µg/ml leupeptin, 20 mM β-glycerophosphate, and 2 mM NaF) for 30 min. The homogenates were centrifuged and the protein concentration was determined with a BCA protein assay reagent kit (Pierce Biotech Inc., Rockford, IL, USA). An equal amount of protein (20 µg/lane for most proteins, while 100 µg/lane for p-p38 and p-JNK detection) from each sample extract was loaded in a 12.5% SDS-PAGE gel for electrophoresis, and electroblotted onto a PVDF membrane. The membrane was blocked with 5% non-fat dried milk (in TBST) for 2 hrs at room temperature and then incubated with primary antibody overnight at 4°C. Then, the membrane was washed with TBST (10 min. ×3) and incubated with horseradish peroxidase-conjugated secondary antibodies for 1 hr at room temperature (all the antibodies were purchased from Cell Signaling Technology, Boston, MA, USA). After washing with TBST (10 min. ×3), the immunoblots were developed using an ECL Western blotting detection system (Amersham Pharmacia Biotech, Piscataway, NJ, USA) and recorded by exposure of the immunoblots to an X-ray film (Pierce Biotech Inc.).
Statistical analysis All data were expressed as means ± SD. Differences were evaluated by t-test analysis. Statistical significance was defined as P < 0.05. Results Systolic blood pressure, morphological degeneration of aorta and the effect of valsartan Systolic blood pressure (SBP) of Wistar rats increased with age, from 139.1 ± 6.4 mmHg in the control group to 158.6 ± 5.5 mmHg in the ageing group. In the valsartan group, SBP was 149.0 ± 4.3 mmHg (Table 2). To characterize the aorta remodelling, we measured intima-media thickness (T), diameter (D) and T/D, and relative contents of smooth muscle (SM, Aa%) and collagen fibre (CF, Aa%). The results showed that T, D, T/D, SM (Aa%) and CF (Aa%) in the ageing group were significantly higher than those in the control group (P < 0.05). However, the above items were all decreased in the valsartan treatment group compared with the ageing group (data not shown), indicating that valsartan may play a role in preventing the development of aorta ageing. Ultrastructural degeneration of aorta with ageing and the effect of valsartan Ultrastructural analysis of the aorta was focused on vascular endothelial cells, because senescent endothelial cells may critically disturb the integrity of the endothelial monolayer and may thereby contribute to vascular injury and atherosclerosis [15][16][17][18][19]. The endothelial cells in the control group showed a fusiform, smooth shape and even chromatin (Fig. 1A), while the endothelial cells in the ageing group were flattened and enlarged, with swollen mitochondria with distorted or lost internal cristae, a decreased volume of the Golgi complex, and abundant unequal vesicles and marrow-like cytolysosomes (Fig. 1B). By contrast, valsartan treatment significantly improved the ultrastructure of aorta endothelial cells. In the ageing animals receiving valsartan treatment, a fusiform shape and regularly scattered chromatin appeared in the endothelial cells, where the cytoplasm contained numerous mitochondria, a well-developed Golgi complex and abundant ribosomes (Fig. 1C). Expression of Bcl-2 family proteins during ageing and the effect of valsartan The Bcl-2 family has been demonstrated to play an important role in the regulation of apoptosis and senescence [20,21]. The ratio of Bcl-2 and Bax has been found to be an important determinant of apoptosis; a high ratio favours cell survival, while a low ratio promotes cell ageing or death [21,22]. Therefore, we determined the expression of Bcl-2 and Bax in the aorta in the different groups. As shown in Figure 2, the expressions of Bcl-2 and Bax were examined by immunohistochemical staining, RT-PCR and Western blotting respectively. From the immunohistochemically stained sections, we observed that the expression of Bcl-2 was significantly decreased in ageing animals compared with that in adult ones (Fig. 2A). Meanwhile, the expression of Bax was significantly increased with ageing (Fig. 2C). Quantitative analysis with RT-PCR and Western blotting demonstrated similar results: the expression of Bcl-2 was significantly higher in adult animals than in ageing ones (P < 0.01), while the expression of Bax was significantly lower in adult animals than in ageing ones (P < 0.01). Apparently, the ratio of Bcl-2/Bax in the aorta decreased with ageing, indicating that ageing resulted in detrimental effects on vascular cells.
However, the decreased ratio of Bcl/Bax because of ageing could be significantly reversed by long-term administration of valsartan though the value was still significantly lower than that in adult animals (P < 0.01). As shown in Figure 2, in ageing animals receiving valsartan treatment, the Bcl-2/Bax ratio was significantly improved compared with control ageing animals (Bcl-2 expression increased, P < 0.01; while Bax expression decreased, P < 0.01). These results suggested that valsartan produced protective effects on aorta cells from ageing. AT1R mediates p-ERK activity during ageing Mitogen-activated protein kinases (MAPKs) are a family of proteinserine/threonine kinases that include at least three distinctly regulated subgroups in mammals: ERK, Jun amino-terminal kinase (JNK), p38MAPK [23]. These enzymes phosphorylate different intracellular proteins and play important roles in regulating cell ageing, survival and death [24,25]. To determine the underlying mechanisms of ageing-induced aorta injury and valsartan exerted protection, we detected the expression of AT1R (the target of valsartan) and the possible involvement of MAPKs during ageing. As shown in Figure 3, AT1R expression in ageing animals was significantly higher than that in adult ones (P < 0.01), while administration of valsartan significantly reversed ageing-induced increase in AT1R (P < 0.01 compared with control ageing animals). However, valsartan produced no effect on the expression of AT2R, indicating the selective blockage of valsartan on AT1R. Analysis of three subgroups (ERK, JNK and p38MAPK) of MAP-Ks demonstrated that both p-JNK and p-p38MAPK were expressed at very low levels in aorta. Compared with adult control, the expression levels of p-p38 and p-JNK seem to be higher in ageing groups, but the valsartan treatment produced no effects on the expression of p-p38 and p-JNK. The results suggested that the protection of valsartan on aorta against ageing is independent of p-p38 and p-JNK ( Fig. 3C and D). However, ERK activity, that is, p-ERK level significantly increased with ageing (Fig. 3E, P < 0.01). More importantly, selective blocker of AT1R, valsartan significantly reversed ageing-accompanied increase in p-ERK activity (Fig. 3E, P < 0.01 compared with control ageing group). These results indicated that AT1R mediated the activity of p-ERK during ageing, which may involve in the ageing-induced aorta injury. To further determine the effects of AT1R on p-ERK, we performed parallel experiments in vitro. As shown in Figure 4, aorta endothelial cells from adult rats express a low level of AT1R and p-ERK, however, the expression of AT1R and p-ERK were significantly higher in ageing rat-derived aorta endothelial cells (P < 0.01). Valsartan treatment significantly decreased the expression of AT1R (P < 0.01), accompanying with the decrease in p-ERK (P < 0.01). Furthermore, we observed consistent results about the expression of Bcl-2 and Bax with in vitro experiment, that is, ageing resulted in the decrease in Bcl-2 and increase in Bax, while valsartan reversed ageing-induced changes in Bcl-2 and Bax ( Fig. 4C and D). These in vitro results provided additional evidence that AT1R mediated the increase in p-ERK activity during ageing. 
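The group comparisons reported above follow the scheme given under Statistical analysis: values summarized as mean ± SD and compared with a t-test at P < 0.05. A minimal sketch of that comparison is shown below; the numeric values are placeholders for illustration, not the measured data from this study.

```python
# Sketch of the two-group comparison (mean ± SD, unpaired t-test, alpha = 0.05).
# The systolic blood pressure values below are placeholders.
import numpy as np
from scipy.stats import ttest_ind

adult = np.array([138.2, 140.5, 139.0, 137.8, 141.1])    # hypothetical adult group
ageing = np.array([157.9, 159.3, 158.8, 160.1, 156.7])   # hypothetical ageing group

t_stat, p_value = ttest_ind(adult, ageing)
print(f"adult:  {adult.mean():.1f} ± {adult.std(ddof=1):.1f} mmHg")
print(f"ageing: {ageing.mean():.1f} ± {ageing.std(ddof=1):.1f} mmHg")
print(f"t = {t_stat:.2f}, P = {p_value:.4g}, significant: {p_value < 0.05}")
```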
AT1R-mediated p-ERK activity involves in the ageing-induced aorta injury To test whether AT1R-mediated p-ERK activity is correlated with aorta injury during ageing, we investigated the protective effect of ERK inhibitor on ageing rat-derived aorta endothelial cells and compared it with valsartan. As shown in Figure 5A, valsartan significantly decreased the expression of AT1R in ageing aorta endothelial cells, while no effect of ERK inhibitor on AT1R was observed. However, both valsartan and ERK inhibitor significantly decreased the expression of p-ERK in ageing aorta endothelial cells compared with control (P < 0.01). In terms of Bcl-2 and Bax, we observed that valsartan and ERK inhibitor treatment reversed the expression of Bcl-2 and Bax in ageing aorta endothelial cells in a similar manner (Fig. 5C and D). The above results suggested that regulation of p-ERK activity should be an important pathway in the protection of aorta by valsartan during ageing. We could also observe that inhibition of ERK inhibitor on p-ERK activity was more significant than valsartan (Fig. 5, P < 0.05). However, the protection of ERK inhibitor on ageing aorta was less significant than valsartan. One possible explanation was that AT1R-mediated p-ERK activity may be only one of the mechanisms involving in the pathology of ageing-induced aorta injury, i.e. AT1R should also mediate other pathways that contributed to the ageing-induced aorta injury. Therefore, inhibition of AT1R by valsartan was more effective than inhibition of p-ERK by ERK inhibitor in aorta protection. Discussion Cardiovascular disease is the most common cause of death among the elderly, and the ageing is the largest risk factor to the diseases. Accompanying with ageing, lots of adverse changes (or injury) will develop on cardiovascular structure and function, which will directly influence vascular disease threshold and seriousness [2,3,25,26]. Therefore, clarifying the mechanisms underlying the ageing-induced vascular injury is of great significance for prevention and attenuation of such diseases. In this present study, we revealed in the ageing aorta that (i) AT1R expression and p-ERK activity in aorta increase with ageing, which may be closely related with structural and functional degeneration of aorta; (ii) administration of selective AT1R blocker, valsartan significantly reversed the increase in AT1R occurring with ageing. Simultaneously, p-ERK was also depressed and further, ageing-induced aorta degeneration was improved; (iii) ERK inhibitor depressed the level of p-ERK in ageing aorta cells without affecting AT1R, but protective effects on ageing aorta cells were also observed. These data suggested that AT1R-mediated ERK activity was at least one of the important mechanisms in ageing-induced aorta degeneration. Actually, research on vascular degeneration with ageing is attracting great interest from scientists, but the underlying mechanisms are still not well understood. It has been showed that AT1R mediates most of biological effects, such as vascular constriction, cell proliferation, senescence and reactive oxygen production. Therefore, over the past decades, many studies have focused on the roles of angiotensin system in ageing, especially Ang II. Kosugi et al. demonstrated in mouse model that the expression of ageing markers were directly correlated with cardiac Ang II [4]. Yano et al. further revealed that up-regulation of AT1R may be involved in the initiation and progression of atherosclerosis [27]. 
In this study, we demonstrated that AT1R expression in aorta tissue significantly increased, whereas AT2R was significantly decreased compared with that in adult ones (P < 0.01). Selective blockage of AT1R significantly reversed aorta degeneration during ageing, indicating the role of AT1R in the process. It has been demonstrated that the Bcl-2 and Bax genes are critical components for mediating numerous cellular responses, including apoptosis [21,22]. Bcl-2 is an anti-senescence factor, which has been shown to inhibit senescence and apoptosis [28], while Bax was found to promote cell death [29]. Therefore, the ratio of Bcl-2/Bax is closely related to apoptosis. In this study, we found that blocking of Ang II type 1 receptor by valsartan resulted in a significant increase in the ratio of Bcl-2/Bax and a decrease in ERK activity in ageing vascular cells, suggesting that both Ang II type 1 receptor and ERK activity may be related to vascular apoptosis during ageing. More interestingly, the ERK inhibitor, which did not influence the expression of Ang II type 1 receptor, produced a similar effect on the ratio of Bcl-2/Bax as valsartan to some extent. One possible explanation was that Ang II type 1 receptor contributed to the ageing-induced vascular apoptosis and ERK was one of its downstream effectors, i.e. ageing induced vascular apoptosis partially through AT1R-mediated ERK activity. The activation of MAPKs may vary in different cells under various physiological or pathological conditions [30]. Recent studies have revealed that the MAPK signalling transduction pathway is able to mediate senescent signals and regulate the ageing process [31,32]. JNK and p38 are important subgroups of MAPKs. In our experiment, we found that it was difficult to detect the expression of p-JNK and p-p38 by Western blotting until much more protein (about 100 µg) was loaded in electrophoresis. Further, the exposure time of the X-ray film was greatly prolonged (about 30 min. or more). We observed that very low levels of p-p38 and p-JNK were detected in vascular cells, which is consistent with previous reports [33][34][35][36]. Compared with the adult control, the expression levels of p-p38 and p-JNK seemed to be higher in the ageing groups. However, the valsartan treatment produced no obvious effects on the expression of p-p38 and p-JNK. The results suggested that the protection of valsartan on the aorta against ageing is independent of p-p38 and p-JNK. (Fig. 5 caption: Comparison of valsartan and ERK inhibitor in the protection of ageing aorta cells. The cells used in the experiment were derived from ageing rat aorta, and they were treated with valsartan and ERK inhibitor respectively. Control cells received no treatment. Data are expressed as the means ± SD. *P < 0.01 compared with control group; **P < 0.01 compared with ageing group.) ERK is another important subgroup of MAPKs, and it has been demonstrated to play crucial roles in different pathologic processes. In this present study, we observed that the phosphorylation of ERK was associated with ageing and the increase of AT1R. In previous studies, accumulating evidence has identified that the enhanced activity of the ERK signalling transduction pathway contributes to ageing, which is consistent with our findings.
Further, we observed in vivo and in vitro that inhibition of AT1R in ageing aorta also inhibited phosphorylation of ERK, while inhibition of ERK produced no effect on AT1R, but both of them produced beneficial effects on ageing aorta, indicating that protection of ageing aorta by AT1R blocker was at the least partially through the activity of ERK. To further confirm the underlying mechanisms of AT1R-involved aorta degeneration, we further compared the effects of selective AT1R blocker and ERK inhibitor on the protection of aorta during ageing. Though ERK inhibitor was more effective in ERK inhibition than valsartan, it produced significant less effects than valsartan, that is, inhibition of ERK activity could only acquire partial effects as valsartan. A reasonable explantation was that regulation of ERK activity was just one of the mechanisms in AT1R-mediated aorta degeneration during ageing, other mechanisms also existed. These mechanisms are deserved further investigation. However, we could not consider all these mechanisms or mediators in a single study. In conclusion, we demonstrated that AT1R involved in the ageingrelated structural and functional degeneration of aorta, selective blockage of AT1R could significantly attenuate the pathological process, providing a strategy for prevention of ageing-induced vascular diseases and revealing a novel action of valsartan. Further, we demonstrated that AT1R-mediated ERK activity was an important pathway in ageing-induced aorta degeneration, which is of great significance in future developing intervention for preventing or attenuating ageingrelated vascular diseases.
v3-fos-license
2021-09-29T05:25:11.791Z
2021-09-01T00:00:00.000
238200645
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1420-3049/26/18/5619/pdf", "pdf_hash": "6c24afeab07bfa4483065338bd408265ddd09c5a", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42313", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "sha1": "6c24afeab07bfa4483065338bd408265ddd09c5a", "year": 2021 }
pes2o/s2orc
Effect of Continuous and Discontinuous Microwave-Assisted Heating on Starch-Derived Dietary Fiber Production Dietary fiber can be obtained by dextrinization, which occurs while heating starch in the presence of acids. During dextrinization, depolymerization, transglycosylation, and repolymerization occur, leading to structural changes responsible for increasing resistance to starch enzymatic digestion. The conventional dextrinization time can be decreased by using microwave-assisted heating. The main objective of this study was to obtain dietary fiber from acidified potato starch using continuous and discontinuous microwave-assisted heating and to investigate the structure and physicochemical properties of the resulting dextrins. Dextrins were characterized by water solubility, dextrose equivalent, and color parameters (L* a* b*). Total dietary fiber content was measured according to the AOAC 2009.01 method. Structural and morphological changes were determined by means of SEM, XRD, DSC, and GC-MS analyses. Microwave-assisted dextrinization of potato starch led to light yellow to brownish products with increased solubility in water and diminished crystallinity and gelatinization enthalpy. Dextrinization products contained glycosidic linkages and branched residues not present in native starch, indicative of its conversion into dietary fiber. Thus, microwave-assisted heating can induce structural changes in potato starch, originating products with a high level of dietary fiber content. Introduction Nowadays, there is growing interest in the correlation of nutrition with human health. Several studies have been conducted to develop functional foods that provide caloric intake and contain bioactive compounds [1]. Considering its beneficial effects on the human body, dietary fiber (DF) is considered to be part of a group of functional foods. Benefits of adequate intake of DF include prevention of diseases of civilization, such as obesity, cancer, cardiovascular diseases, and diabetes mellitus [2][3][4][5]. Some DF also promotes the growth of beneficial gut microbiota [6]. Despite the many advantages of DF consumption, its consumption is still below the recommended intake. The main sources of DF are vegetables, fruits, and whole grains. However, these products are still seldom consumed Solubility of Dextrins The solubility of dextrins at 20 • C was affected by the microwave-assisted heating conditions, including the microwave power intensity and the processing time ( Figure 1). When continuous heating was carried out, the lowest solubility (31.8%) was observed for the 40 W 75 s sample, while the 50 W 90 s dextrin was the one with the highest solubility (43.4%). When using discontinuous heating, the lowest solubility (48.5%) was determined for the sample obtained at 100 W for 15 s, and the highest (81.0%) for dextrin prepared at 120 W for 30 s. The dextrins obtained by the discontinuous heating showed higher solubility than dextrins prepared by the continuous heating. Given the same processing time, the higher the radiation power was, the higher the solubility of dextrins was. Moreover, the solubility increased with increasing heating time. Modification of starch caused by microwave-assisted heating in the presence of acids resulted in an increase in solubility, similar to dextrins obtained by using conventional heating in the presence of acids [8,16,29]. This might be due to the hydrolysis reaction during pyroconversion [8,13,30]. 
The high solubility of dextrins obtained from starch using microwave heating can ensure good homogenization of dextrins in the aqueous environment of food products [31]. Dextrins Dextrose Equivalents (DE) Although the dextrose equivalents (DE) values (Figure 2) were low for both dextrins obtained by continuous and discontinuous microwave-assisted heating, the ones prepared by the discontinuous heating showed relatively higher DE values (DE = 1.52-2.12) than the dextrins obtained by continuous heating (DE = 0.94-1.47). To have dextrins with such low DE values is important when it is planned to add dextrins into healthy foods without the addition of a high amount of sugars [16]. It can be assumed that, during the heating, a significant depolymerization of starch occurred, as the DE for native potato starch was only 0.2. It is well known that dextrinization processes significantly increase DE values [12]. A higher modification level of dextrins obtained by discontinuous microwave-assisted heating was confirmed also by their higher solubility values, as shown earlier [24]. Color Parameters (L* a* b*) After exposure to microwave irradiation, the white color of native potato starch changed (Table 1). All dextrins were characterized by the easily visible color differences, when compared with standard native potato starch, as evidenced by ∆E values higher than 5 [13]. Depending on the processing conditions, dextrins with coloration ranging from cream to brown-yellowish were obtained (Figure 3). Positive parameters a* and b* confirm the share of red and yellow colors of dextrins. The reduction in whiteness and predominance of beige/brown can be related to the progress of the caramelization reaction [12,32,33].
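The colour differences discussed above are expressed as ∆E relative to native starch. The paper does not spell out which ∆E formula was used, so the sketch below assumes the common CIE76 definition; the L*, a*, b* readings are placeholders rather than the values from Table 1.

```python
# Sketch: CIE76 colour difference between a dextrin and native starch.
import math

def delta_e_cie76(sample_lab, reference_lab):
    dL = sample_lab[0] - reference_lab[0]
    da = sample_lab[1] - reference_lab[1]
    db = sample_lab[2] - reference_lab[2]
    return math.sqrt(dL ** 2 + da ** 2 + db ** 2)

native_starch = (98.0, 0.2, 2.0)   # hypothetical L*, a*, b* of native potato starch
dextrin = (84.5, 3.1, 18.7)        # hypothetical dextrin readings
print(f"dE = {delta_e_cie76(dextrin, native_starch):.1f}")  # values above 5 are easily visible
```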
It can be concluded that increasing both the microwave power intensity and the duration of heating allowed to obtain dextrins with an increasingly darker coloration, which is indicated by even lower values of the L* parameter and higher values of the ∆E parameter. This is in line with previous observations, which showed that harsher conditions (e.g., longer reaction time, higher temperature, or acid concentration) resulted in a darker color of dextrins [8,12,13,34]. Different superscript lowercase letters (a, b, …) in the same column indicate significant differences (p < 0.05) between each parameter for each dextrin. Total Dietary Fiber Content Total dietary fiber content for dextrins obtained by continuous microwave-assisted heating ranged from 14.5% to 21.5% (Figure 4), which was unsatisfactorily low. All dextrins obtained by discontinuous heating were characterized by higher DF content, except dextrin 100 W 15 s (TDF = 15.6%) prepared in the mildest conditions among samples heated 10 times. The TDF content increased both with the increasing of microwave power intensity and by extending the heating time.
That is in line with other previously published results, where increasing content of non-digestible carbohydrates with increasing process intensity was observed [13,35,36]. The highest TDF content amounted to 45%, determined for the dextrins prepared by discontinuous heating of starch at 120 W for 30 s. This may indicate that the use of microwave-assisted discontinuous heating allows to obtain dextrins with even twice the dietary fiber content of dextrins obtained by conventional heating of potato starch acidified with the same amounts of hydrochloric and citric acids [11]. Additionally, the proportion in dextrins of high-molecular weight dietary fiber (HMWDF) and low molecular weight dietary fiber soluble in water and not precipitated in 78% aqueous ethanol (SDFS) varied depending on the conditions of the heating process. In almost all obtained dextrins, the fraction of compounds of low molecular weight constituted the majority. Similar results can be seen in the case of studies conducted by other authors [35,36]. Figure 5 shows the influence of microwave-assisted heating on the shape of potato starch granules. In the native form, the potato starch was a mixture of smooth spherical shape granules with a size lower than 20 µm and oval shape granules with sizes ranging from 30 µm to 60 µm, possessing edges clearly visible and without damage on their surfaces (Figure 5a). All these characteristics are specific to granules of native potato starch [37]. For starch heated once for 75 s, regardless the microwave power intensity used, the starch granular integrity was maintained. Additionally, single microwave-assisted heating for 90 s did not cause any significant change in the granular morphology. When exposed to the discontinuous heating, the granular shape of starch was altered (Figure 5b-g).
Noticeable damage on the surface of the granules was observed, while preserving their granular nature, size, and shape. The greatest damage was observed on the surface of larger starch granules. This might be due to the fact that it is easier to gelatinize the larger starch granules than the smaller ones that have higher gelatinization temperatures, thus requiring higher energy and/or processing time till the granules achieve swelling and rupture [38]. Moreover, the magnitude of observed changes increased with the extension of the heating time and the increase in the microwave power intensity. For dextrin obtained by heating 10 times at 120 W for 30 s, aggregation of starch granules into lumps was observed. This is consistent with the results of other authors concerning changes in starch granules during dry heating treatment [39]. X-ray Diffraction (XRD) The XRD pattern for potato starch contained diffraction peaks (2θ) at 5. (Figure 6), which correspond to the B-type crystalline structure of potato starch [40][41][42]. The potato starch used in this study showed a crystallinity index (Xc) of 0.47. Regardless of the type of microwave-assisted heating used (continuous or discontinuous), a decrease in crystallinity for all the obtained dextrins was observed ( Table 2). For potato starch continuously heated (Figure 6a), a 2-fold (even 3 times for one sample) decrease in crystallinity was observed, compared with the native potato starch. For potato starch heated once for 75 s in the microwave reactor, the degree of crystallinity decreased with microwave power increasing. The same behavior was observed for dextrins obtained after heating for 90 s. Comparing the compounds subjected to the same power and different heating times, a slight decrease in the crystallinity of dextrins obtained in the longer heating was observed. For discontinuous heating conditions (Figure 6b), the crystallinity index decreased approximately twice for dextrins obtained at 100 W and 110 W for 15 s and much more for subsequent dextrins-up to almost 7 times for the last one obtained at 120 W for 30 s. For dextrins heated 10 times for 15 s or 30 s, it was observed that the higher the power, the lower the crystallinity degree. The crystallinity degree was also influenced by the heating time at a given power. For dextrins heated for 30 s, the crystalline form share was approximately twice as small as in dextrins heated for 15 s. The degree of crystallinity for dextrins heated 10 times for shorter periods of time was lower than for dextrins heated once for longer time. The presented results are in line with the observations of other authors who showed unambiguously that modification of native potato starch by microwave heating at a certain power and time decreased the crystallinity index, which was observed as a decrease or absolute disappearance of diffraction reflections typical for B-type X-ray diffraction pattern [43,44]. Consequently, this led to an increase in the amorphous form. The percentage crystallinity decrease verified during dextrinization was also similar to that observed by other authors [30,[45][46][47]. Thermal Properties of Dextrins (DSC) For dextrins obtained under all tested operational conditions, the temperature values of T o , T p , and T c were significantly higher than the ones obtained for the native potato starch (Table 3). These thermal changes indicate that the applied microwave-assisted heating parameters significantly affected the potato starch structure reorganization. 
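Referring back to the XRD results above, the crystallinity index Xc is reported but the integration procedure is not detailed. The sketch below assumes the common definition of Xc as the ratio of crystalline peak area to total scattering area, obtained after fitting an amorphous halo; all array names are placeholders.

```python
# Sketch: crystallinity index from a diffractogram, assuming
# Xc = (total area - amorphous halo area) / total area.
import numpy as np

def crystallinity_index(two_theta, intensity, amorphous_halo):
    """two_theta, intensity: measured diffractogram;
    amorphous_halo: fitted amorphous background at the same 2-theta values."""
    total_area = np.trapz(intensity, two_theta)
    amorphous_area = np.trapz(amorphous_halo, two_theta)
    return (total_area - amorphous_area) / total_area

# Example (with hypothetical arrays two_theta, intensity and halo):
# xc = crystallinity_index(two_theta, intensity, halo)  # native potato starch gave Xc = 0.47
```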
The To, Tp, and Tc temperatures characterize the susceptibility of starch to gelatinize and are known to depend on the strength of intra-granular interactions [48]. High To and Tp values mean that more energy is required to initiate starch gelatinization [49]. The obtained results suggest that the dextrinization process caused significant changes in the starch crystalline region, resulting in a narrower endothermic gelatinization peak (lower ∆T values compared with the native starch) with a lower enthalpy value. Dextrinization, while reducing the crystallinity of the dextrins (Table 2), simultaneously increased their solubility (Figure 1), dextrose equivalent value (Figure 2), and the share of SDFS soluble dietary fiber (Figure 4). It can be assumed that during the modification of starch, less perfect crystallites with short double helices were formed [50]. A good example of clear changes in the crystal structure of starch during its modification are the samples dextrinized under the following conditions: 120 W 15 s × 10, as well as 110 W and 120 W 30 s × 10, which had the highest dissolving power in water (Figure 1) among all tested dextrins. For these trials, no endothermic transformation during heating was observed in the DSC analysis, indicating the loss of starch crystallinity and gelatinization ability. All the depolymerization products obtained showed significantly lower values of ∆H than native potato starch. The values of ∆H are correlated with the degree of starch crystallinity because melting of the crystallites (formed by amylopectin) requires more energy [48]. As we have shown, stronger processing conditions, including high microwave power intensity, long operating times, and the discontinuous heating method, can cause a greater degree of hydrolysis, resulting in a complete loss of the ordered structure of the starch. The greater degree of starch depolymerization resulted, first of all, in a significant increase in the soluble fiber content of the tested dextrins and in an increase in their solubility compared with native starch. Based on the thermal analysis data of the dextrins, compared with native potato starch, there was a clear influence of the applied microwave heating power, the processing time, and the method of microwave operation (continuous or discontinuous heating) on the degradation of the potato starch structure.

Notes to Table 3: To, onset temperature; Tp, peak temperature; Tc, conclusion temperature; ∆Tr, gelatinization temperature range = (Tc − To); ∆H, enthalpy expressed in J g−1 dry starch. Different superscript lowercase letters (a, b, ...) in the same column indicate significant differences (p < 0.05) between each parameter for each dextrin compared with native starch. Different superscript uppercase letters (A, B or C) in the same column indicate significant differences (p < 0.05) between each parameter for dextrins from a given series, depending on the microwave power and operating time. Different superscript uppercase letters with an asterisk (A* or B*) in the same column indicate significant differences (p < 0.05) between each parameter for dextrins with the same microwave power but different operating times.

Glycosidic-Linkage Analysis
According to GC-MS analysis, native potato starch contained more than 90% of (1 → 4)-linked Glcp and small amounts of terminal and (1 → 4,6)-linked Glcp (Table 4). For all dextrin samples, a significant decrease in the percentage of (1 → 4)-Glcp and a marked increase in terminal and (1 → 6)-Glcp were observed.
The results are in line with the observations of Nunes et al. [51], who used dry thermal treatments at 265 °C for amylose and amylopectin. The majority of the samples contained small amounts of (1 → 2)- and (1 → 3)-Glcp. In the context of dextrins' resistance to enzymatic digestion, in addition to the presence of such bonds, the presence of branched molecules, i.e., with more than two -OH groups involved in the formation of glycosidic bonds, is also beneficial. In each sample, low amounts of (1 → 2,4)- and (1 → 3,4)-Glcp were present, and in some dextrin samples it was also possible to quantify small amounts of (1 → 2,6)-Glcp. Increasing the microwave power intensity and heating time seemed to favor branching of the molecules. For dextrins prepared under continuous and discontinuous microwave-assisted heating, the presence of molecules other than (1 → 4)-Glcp, terminal, and (1 → 4,6)-Glcp differed depending on the modification conditions used. The relative percentage of non-starch glycosidic linkages ranged from 4.6% to 7.2% for samples heated once for 75 s; 8.0% to 9.2% for samples heated once for 90 s; 10.6% to 12.3% for samples heated 10 times for 15 s; and 13.4% to 17.3% for samples heated 10 times for 30 s. These results were in line with the 17.8% and 5.8% of linkages other than (α1 → 4) reported by Bai and Shi [52] for pyrodextrin and maltodextrin samples obtained from waxy maize starch.

Materials
Potato starch and analytical grade reagents were purchased from Sigma-Aldrich, Poznan, Poland; enzymatic kits were purchased from Megazyme, Wicklow, Ireland.

Preparation of Dextrins (Continuous and Discontinuous Process) Using Microwave-Assisted Heating
Dextrins were prepared by weighing 80 g of potato starch, spreading it onto a glass tray, and spraying it with 0.5% (v/v) solutions of hydrochloric and citric acids to a final concentration of both acids of 0.1% w/w relative to the dry starch basis. The acidified starch was mixed and distributed on the surface of the tray. The prepared material was dried at 110 °C for 2 h in order to obtain a final water content of less than 5%. Afterwards, it was weighed (5 g) into 35 mL glass vessels and heated in a Discover SP microwave reactor (CEM Corporation, Matthews, NC, USA) at 40 W, 45 W or 50 W for 75 s or 90 s, for the continuous process, and at 100 W, 110 W or 120 W for 15 s or 30 s, for the discontinuous process. During the continuous process, the samples inside the vessels were heated once, while during the discontinuous process each sample inside the vessels was heated 10 times under the selected conditions (the vessels' content was mixed after each heating cycle to increase the uniformity of microwave heating and the level of starch modification). The different microwave irradiation conditions used in the study and their operational advantages and disadvantages are presented in Table 5. The conditions were proposed based on screening tests conducted on a large group of samples. These studies clearly showed that the mildest conditions resulted in white, non-dextrinized samples, while the most extreme conditions favored a caramelization process. Additionally, from our preliminary studies, we found a correlation between lightness and total dietary fiber content (the Pearson coefficient was −0.872). For this reason, samples could be preliminarily selected by color screening. Finally, six dextrins prepared by continuous microwave heating and six by discontinuous heating were subjected to further analyses.
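As an aside, the reported lightness-fiber relationship can be checked with a standard Pearson correlation. The short Python sketch below uses purely illustrative L* and TDF values (not the actual study data) to show the calculation; a strongly negative coefficient would mirror the reported value of −0.872.

```python
# Minimal sketch (hypothetical values): Pearson correlation between
# dextrin lightness (CIE L*) and total dietary fiber (TDF) content.
from scipy.stats import pearsonr

lightness = [92.1, 88.4, 85.0, 81.7, 78.9, 74.2]     # illustrative L* values
tdf_percent = [12.0, 18.5, 24.0, 30.2, 38.1, 45.0]   # illustrative TDF contents (%)

r, p_value = pearsonr(lightness, tdf_percent)
print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")
```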
Solubility of Dextrins
Solubility in water at 20 °C was measured according to Richter's method [53]. Dextrins (0.5 g) were suspended in 40 mL of distilled water and stirred at 20 °C for 30 min. The suspension was subsequently centrifuged at 21,381× g for 10 min, and 10 mL of supernatant was transferred into weighing vessels of known weight. The vessels with supernatants were then dried to constant weight at 130 °C. Afterwards, the obtained residue was weighed and the solubility (S) in water was calculated using Equation (1):

S (%) = (b × (Vt/Va) / a) × 100, (1)

where a is the sample weight, b is the weight of the residue after drying, Va is the volume of the evaporated supernatant (10 mL), and Vt is the total volume of added water (40 mL).

Dextrose Equivalent (DE) of Dextrins
The dextrose equivalent of the dextrins was determined using the Schoorl-Regenbogen method [54]. Dextrins were weighed (0.5 g), suspended in distilled water (10 mL), and then stirred at room temperature for 30 min. Then, Fehling's solution I (10 mL), Fehling's solution II (10 mL), and distilled water (20 mL) were added, and the mixtures were brought to a boil within 3 min and boiled for 2 min. After cooling down, potassium iodide (10 mL), sulfuric acid (10 mL), and colloidal starch solution (5 mL) were added, and the mixtures were titrated with sodium thiosulfate. The blank tests were carried out analogously with distilled water. The dextrose equivalent of the dextrins was calculated according to [54].

Color Parameters (L* a* b*)
The color of the dextrins was measured using a Chroma Meter CR-400 (Konica Minolta Sensing, Osaka, Japan). The L* (luminosity), a* (red/green color), and b* (yellow/blue color) components were determined with the CIELab color profile. The color difference was calculated from Equation (2):

∆E = [(∆L*)^2 + (∆a*)^2 + (∆b*)^2]^(1/2), (2)

where ∆L*, ∆a*, and ∆b* are the differences in the values of L*, a*, and b* between native starch and the dextrins, respectively. Measurements were performed 10 times for each sample.

Total Dietary Fiber Content According to AOAC 2009.01 Method
High-molecular-weight dietary fiber (HMWDF), comprising insoluble dietary fiber (IDF) and dietary fiber soluble in water but precipitated in 78% aqueous ethanol (SDFP), and dietary fiber soluble in water and not precipitated in 78% aqueous ethanol (SDFS) were determined following the AOAC 2009.01 method [55]. Briefly, the samples were suspended in ethanol and digested with a pancreatic α-amylase/amyloglucosidase mixture in maleic buffer (50 mM) for 16 h at 37 °C. The enzymes were inactivated using TRIS buffer (0.75 M) and boiling. In the next step, the proteins were digested with protease for 30 min at 60 °C. The enzymes were inactivated using acetic acid (2 M). Then, ethanol was added to form the HMWDF precipitate. After 1 h, the samples were filtered under vacuum through weighed crucibles containing diatomaceous earth that had been dried to constant weight. The filtrate was recovered for SDFS determination. The HMWDF residues were washed, dried overnight, and then used for determination of protein and ash content. The recovered filtrate was concentrated, deionized, and analyzed with HPLC. The obtained results were used to determine the HMWDF and SDFS content in the dextrins.

SEM
The granular shape and surface morphology of native potato starch and the prepared dextrins were observed using a Tescan VEGA 3SBU scanning electron microscope (Tescan, Brno, Czech Republic). The accelerating voltage was 3 kV. Adhesive tape was attached to circular stubs, and all samples were then sprinkled onto the tape without coating with any conductive material.
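A minimal Python sketch of the two calculations defined by Equations (1) and (2) is given below. The numerical inputs are illustrative only; the volumes default to the 10 mL aliquot and 40 mL total water described above.

```python
import math

def solubility_percent(sample_mass_g, residue_mass_g,
                       aliquot_volume_ml=10.0, total_volume_ml=40.0):
    """Water solubility (%) as in Equation (1): the dry residue found in the
    aliquot is scaled to the total water volume and expressed relative to the
    sample mass."""
    dissolved_total = residue_mass_g * (total_volume_ml / aliquot_volume_ml)
    return 100.0 * dissolved_total / sample_mass_g

def delta_e(lab_native, lab_dextrin):
    """CIELab total colour difference ΔE (Equation (2)) between native starch
    and a dextrin, given (L*, a*, b*) tuples."""
    return math.sqrt(sum((n - d) ** 2 for n, d in zip(lab_native, lab_dextrin)))

# Illustrative values only
print(solubility_percent(0.5, 0.041))                    # ~32.8 %
print(delta_e((95.2, 0.1, 2.3), (82.4, 1.8, 14.6)))      # colour shift of a darker dextrin
```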
All samples were observed, and micrographs were taken at a magnification of ×2000.

X-ray Diffraction (XRD)
Phase analysis of the dextrins was carried out using a powder X-ray diffraction (XRD) Rigaku Miniflex 600 diffractometer (Rigaku, Tokyo, Japan) with a D/teX Ultra silicon strip detector and Cu-Kα radiation. To assess the crystallinity, the method described by Hulleman et al. [56] was used. The values of the crystallinity index (Xc) for all samples were obtained using Equation (3):

Xc = Hc/(Hc + Ha), (3)

where Hc and Ha are the intensities of the crystalline and amorphous profiles, respectively, with the typical (121) diffraction reflection at 2θ between 17° and 18°, as shown in Figure 7. The XRD pattern of native potato starch was used as the control.

Thermal Properties of Dextrins (DSC)
The gelatinization properties of native starch and the dextrins were determined by differential scanning calorimetry (DSC), following a previously described method with some modifications [16,57]. For this purpose, a MICRO DSC III differential scanning calorimeter from Setaram Instrumentation (Caluire, France) was used. Triplicate starch samples (approximately 40 mg) were weighed into a stainless-steel, high-pressure 'batch'-type cell at a water/starch ratio of 70:30 (w/w). Samples were heated from 10 °C to 120 °C at 3 °C min−1. The onset (To), peak (Tp), and conclusion (Tc) temperatures; the gelatinization temperature range ∆Tr = (Tc − To); and the enthalpy change (∆H) expressed in J g−1 dry starch were calculated from the thermograms.

Methylation Analysis
For determination of the glycosidic linkage composition, the dextrins were converted to partially O-methylated alditol acetates [51]. Briefly, dextrins were dissolved in DMSO overnight, then pellets of NaOH were added, and the solutions were mixed for 30 min. Then 80 µL of CH3I was added and allowed to react at room temperature under vigorous stirring. After 20 min, 2 mL of distilled water was added, and the solutions were neutralized with 1 M HCl. Subsequently, 3 mL of dichloromethane was added, and the solutions were manually shaken and further centrifuged at 11,600× g. The water phase was removed, and the precipitate was washed twice with distilled water. The samples were evaporated to dryness. To ensure complete methylation, this step was repeated. Afterwards, the samples were hydrolyzed with trifluoroacetic acid (TFA) at 121 °C for 1 h, with subsequent evaporation of the acid. For the carbonyl reduction, 300 µL of 2 M NH3 and 20 mg of sodium borodeuteride were added. The mixtures were incubated at 30 °C for 1 h, and then the excess borodeuteride was removed by the addition of glacial acetic acid. The partially methylated alditols were acetylated by adding 1-methylimidazole and acetic anhydride.
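For illustration, the crystallinity index of Equation (3) and the DSC-derived gelatinization range ∆Tr can be computed as in the short Python sketch below; the peak heights and temperatures shown are hypothetical.

```python
def crystallinity_index(h_crystalline, h_amorphous):
    """Crystallinity index Xc = Hc / (Hc + Ha), with Hc and Ha the intensities
    of the crystalline and amorphous contributions at the (121) reflection."""
    return h_crystalline / (h_crystalline + h_amorphous)

def gelatinization_range(t_onset, t_conclusion):
    """Gelatinization temperature range ΔTr = Tc - To (°C)."""
    return t_conclusion - t_onset

# Illustrative numbers only
print(crystallinity_index(h_crystalline=470.0, h_amorphous=530.0))  # ~0.47, as for native starch
print(gelatinization_range(t_onset=59.1, t_conclusion=71.4))        # ΔTr of a hypothetical sample
```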
After 30 min at 30 °C, the excess acetic anhydride was removed and the partially O-methylated alditol acetates (PMAA) were extracted with dichloromethane in two steps. Then, the dichloromethane was evaporated, and the samples were dissolved in anhydrous acetone, which was evaporated prior to the GC-MS analysis.

GC-MS Analysis
The PMAA obtained as described in Section 3.9 were analyzed by gas chromatography-mass spectrometry (GC-MS) using an Agilent Technologies 6890N Network gas chromatograph (Santa Clara, CA, USA). The GC was equipped with a DB-1 column (J&W Scientific, Folsom, CA, USA). Samples (0.2 µL) were injected with the injector operating at 220 °C. The helium carrier gas had a flow rate of 0.2 mL/min. The GC was connected to an Agilent 5973 (Santa Clara, CA, USA) quadrupole mass selective detector.

Statistical Analysis
The results were subjected to statistical analysis using Statistica 13.3 software (StatSoft, Tulsa, OK, USA). A completely randomized design was applied for all the experiments. Analysis of variance was performed. Mean comparison was done using Duncan's new multiple range test. All assays, except for the color measurement (where the tests were repeated 10 times), were performed in triplicate, and their results were averaged if the difference was not statistically significant at p < 0.05.

Conclusions
Dextrinization of starch acidified with hydrochloric and citric acids using microwave radiation in a single-mode reactor was successfully carried out. The use of a single-mode microwave reactor allowed high repeatability of the processes to be achieved. The use of the discontinuous process (10-fold heating with mixing between cycles) proved to be more effective than the continuous one (single heating). The dextrins obtained by discontinuous heating showed higher solubility and a higher content of, among others, (1 → 2), (1 → 3), (1 → 4,6), (1 → 2,4), and (1 → 3,4) Glcp linkages, absent in the native starch, thus highlighting the higher total dietary fiber content. Moreover, the discontinuous heating decreased the starch crystallinity, changing the granule surface morphology and yielding samples with higher dextrose equivalent values and darker coloration, thus revealing
Early hypersynchrony in juvenile PINK1−/− motor cortex is rescued by antidromic stimulation

In Parkinson's disease (PD), cortical networks show enhanced synchronized activity, but whether this precedes motor signs is unknown. We investigated this question in PINK1−/− mice, a genetic rodent model of the PARK6 variant of familial PD which shows impaired spontaneous locomotion at 16 months. We used two-photon calcium imaging and whole-cell patch clamp in slices from juvenile (P14-P21) wild-type or PINK1−/− mice. We designed a horizontal tilted cortico-subthalamic slice where the only connection between cortex and subthalamic nucleus (STN) is the hyperdirect cortico-subthalamic pathway. We report excessive correlation and synchronization in PINK1−/− M1 cortical networks 15 months before motor impairment. The percentage of correlated pairs of neurons and their strength of correlation were higher in the PINK1−/− M1 than in the wild-type network, and the synchronized network events involved a higher percentage of neurons. Both features were independent of thalamo-cortical pathways, insensitive to chronic levodopa treatment of pups, but totally reversed by antidromic invasion of M1 pyramidal neurons by axonal spikes evoked by high frequency stimulation (HFS) of the STN. Our study describes an early excess of synchronization in the PINK1−/− cortex and suggests a potential role of antidromic activation of cortical interneurons in network desynchronization. Such a backward effect on interneuron activity may be of importance for HFS-induced network desynchronization.

INTRODUCTION
Exaggerated resting-state synchronization of oscillatory activities in the sensorimotor cortex, notably at β frequency (13-30 Hz), is positively correlated with Parkinson's disease (PD) severity and attenuated together with motor symptoms by deep brain stimulation (DBS) of the subthalamic nucleus (STN-HFS; Brown and Marsden, 2001; Silberstein et al., 2005; Eusebio et al., 2012; Whitmer et al., 2012). These data were reproduced in anesthetized or freely moving rodent or primate models of PD (Goldberg et al., 2002; Sharott et al., 2005), validating the links between hypersynchronization and motor symptoms. Yet it is still not known whether cortical hypersynchronization is first observed when motor signs are already present or precedes them. We used the PINK1−/− mouse model of PD to determine whether juvenile (P14-P21) motor cortex networks are hypersynchronized and to typify the effect of parkinsonian treatments on this signature. The PINK1−/− mouse is a model of autosomal recessive PARK6-linked Parkinsonism, an early-onset variant of familial PD caused by loss-of-function mutations in the mitochondrial protein PINK1 (Bentivoglio et al., 2001). PINK1−/− mice show electrophysiological signs of dopaminergic dysfunction already at the age of 3-6 months and a reduction of locomotor activity 10 months later (Kitada et al., 2007; Dehorter et al., 2012). This model is therefore highly relevant for investigating the time course between hypersynchronization and motor signs. We focused our study on juvenile mice because the shift from immature to mature cortical activities occurs during the second postnatal week (Allene et al., 2008; Dehorter et al., 2011) and thus constitutes the earliest possible stage at which dysfunction of M1 activities can occur.
We used two-photon calcium imaging techniques to record the activity of large neuronal populations simultaneously in M1, in a horizontal tilted slice in which the motor cortex M1 and the STN were connected via the hyperdirect cortico-STN pathway only. We report excessive correlation and synchronization in PINK1−/− M1 cortical networks as early as 15 months before motor impairment. The percentage of correlated pairs of neurons and their strength of correlation were higher in the PINK1−/− M1 than in the wild-type network, and synchronized network events involved a higher percentage of neurons. Both features were independent of thalamo-cortical pathways, insensitive to chronic levodopa treatment of pups, but totally reversed by high frequency stimulation of the STN (STN-HFS). HFS-evoked cortico-subthalamic spikes propagate antidromically and block orthodromic spikes of M1 pyramidal neurons. They also activate, via axon collaterals, cortical GABAergic interneurons. This last observation points to the potential role of antidromically activated cortical interneurons in network desynchronization.

TWO PHOTON CALCIUM IMAGING
We incubated cortico-subthalamic slices for 30 min in 2.5 ml of oxygenated ACSF (35-37 °C) with 25 µl fura-2 AM (1 mM, in DMSO + 0.8% pluronic acid; Molecular Probes). We performed imaging with a multibeam two-photon laser scanning system (Trimscope-LaVision Biotec) coupled to an Olympus microscope with a high numerical aperture objective (20X, NA 0.95, Olympus). We acquired images of the scan field (444 µm × 336 µm) via a CCD camera (4 × 4 binning; La Vision Imager 3QE) with a time resolution of 137 ms (non-ratiometric, 1000 images, laser at 780 nm), as previously described (Crépel et al., 2007; Dehorter et al., 2011). We previously estimated that around 10% of fura2-loaded cells were astrocytes (Dehorter et al., 2011). These fura2-loaded astrocytes are silent and are thus erroneously counted as silent neurons. This introduces a negligible error in the calculation of the percentage of active neurons.

STIMULATION
Antidromic spikes were evoked in M1 cortical neurons using a bipolar stimulating electrode (1 MΩ impedance, reference TS33A10KT, WPI, FL, USA) placed at the antero-lateral border of the STN, close to the internal capsule fiber tract. Rectangular pulses of fixed duration (100 µs) and 20-50 µA amplitude were delivered between the two poles of the electrode at a frequency of 0.1-100 Hz (Grass stimulator). 100 Hz STN stimulations were applied over periods of several minutes.

DATA ANALYSIS
Fluorescence traces were analyzed with custom-developed routines in MATLAB, and calcium events were identified using an asymmetric least squares baseline and a Schmitt trigger threshold. We took into account only fields in which more than 15% of fura2-loaded cells were active, to ensure sufficient numbers of active cells to detect network-wide synchronization if present. We used 5% of baseline noise as the high threshold and 2% as the low threshold. Onset and offset identification allowed the generation of raster plots and the estimation of the extent of cortical network correlation. We applied two main tests: pairwise and group tests. Throughout, we define "co-active" as meaning those onset transients occurring simultaneously within a ±1 frame time window (411 ms). The pairwise correlation test detected repeated co-activation between two cells (continuously correlated cell pairs). We estimated the degree of similarity of their spiking patterns using the normalized Hamming similarity (Humphries, 2011).
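Before turning to the correlation analysis detailed below, the following minimal Python sketch illustrates the Schmitt-trigger step of the event detection just described: an event starts when the trace crosses a high threshold and is only re-armed once the signal drops below a lower threshold. The synthetic trace, the chosen threshold values, and the omission of the asymmetric least squares baseline correction (assumed to have been applied upstream) are all assumptions for the sake of the example.

```python
import numpy as np

def schmitt_onsets(trace, high_thr, low_thr):
    """Schmitt-trigger detection of calcium-transient onsets/offsets on a
    baseline-corrected fluorescence trace."""
    onsets, offsets = [], []
    in_event = False
    for i, value in enumerate(trace):
        if not in_event and value > high_thr:
            in_event = True
            onsets.append(i)
        elif in_event and value < low_thr:
            in_event = False
            offsets.append(i)
    return np.array(onsets), np.array(offsets)

# Illustrative use on a synthetic trace sampled every 137 ms
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 0.01, 1000)
trace[200:215] += 0.08                      # one synthetic transient
onsets, offsets = schmitt_onsets(trace, high_thr=0.05, low_thr=0.02)
print(onsets, offsets)
```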
Each calcium transient train was divided into discrete bins corresponding to the time frame (137 ms) of recording. In each bin the number 1 was given to the presence of any event onset, and the number 0 for their absence. The resulting binary vector thus recorded the pattern of activity and inactivity of the neuron. For each pair of binary vectors the normalized Hamming similarity was the proportion of bins that differed between the two vectors, given the ± 1 frame time window. The parameter approached 1 as the vectors became equal. In the graphic representation the lines connecting correlated cells have colors from red (high similarity of patterns, normalized Hamming similarity close to 1) to dark blue (low similarity of patterns, Hamming similarity close to 0 but never equal to 0, otherwise these cells could not be considered as correlated; see Figure 1A). To estimate the likelihood that the co-activations might be explained by chance we created random datasets for each active cell using inter-event interval reshuffling (Mao et al., 2001;Feldt et al., 2013). We generated 1000 random data sets for each recording session and computed the pairwise Hamming similarity for all pairs in those data-sets, giving a distribution of 1000 randomized similarity scores for each pair of recorded neurons. The threshold for a significantly correlated pair of recorded neurons was set to the 95th percentile of that distribution. Separately we characterized the existence of neural ensembles, i.e., groups of neurons with consistently correlated activity. To do so, each recording was analyzed using the method in Humphries (2011), which we briefly describe here. A similarity matrix was constructed using the computed Hamming similarity between each pair of retained neurons. Note that we do not set a threshold for correlation here, as we wish to retain all information about the correlation structure of the network. This matrix was then partitioned into groups using the community detection algorithm detailed in Humphries (2011), which selfdetermines the number and size of groups within the matrix that maximizes the benefit function Q data = [similarity within groups] − [expected similarity within groups]. The resulting partition thus corresponded to groups of neurons-ensemblesthat were more similar in activity patterns than was expected given the total similarity of each neuron's activity to the whole data-set. We then ran controls for checking if each clustering was significant compared to randomized data. For each recording we generated 20 randomized versions by shuffling each neuron's inter-event intervals. We clustered each randomized version using the algorithm above, giving us 20 values of Q control . These allow us to estimate whether or not the actual recording's Q data value exceeded the 95% confidence level: thus a recording was deemed significant at this level if Q data > max (Q control ). The group event test identified incidences where a significant fraction of the network's cells were active together. The minimal size of these groups for each recording session was defined by the corresponding simulated random sets: for each of the 1000 random datasets we counted the number of co-active cells within each time-window; this defined a distribution of expected counts of co-active cells due to random activity. The threshold for significant group co-activation was set at the 95th percentile of this distribution. Such clusters of correlated events were called network events. 
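A simplified Python sketch of the pairwise test is given below: co-activation similarity within a ±1 frame window is compared against a null distribution built by reshuffling inter-event intervals, with the 95th percentile of the surrogate scores as the significance threshold. The similarity measure here is a simplified stand-in for the normalized Hamming similarity of Humphries (2011), and all onset trains are synthetic.

```python
import numpy as np

def coactivation_similarity(a, b, window=1):
    """Fraction of event onsets of binary train a that coincide with an onset
    of train b within ±window frames (simplified similarity score)."""
    hits, total = 0, 0
    for i in np.flatnonzero(a):
        total += 1
        lo, hi = max(0, i - window), min(len(b), i + window + 1)
        hits += int(b[lo:hi].any())
    return hits / total if total else 0.0

def shuffle_onsets(onsets, rng):
    """Surrogate train obtained by reshuffling inter-event intervals."""
    idx = np.flatnonzero(onsets)
    if idx.size < 2:
        return onsets.copy()
    intervals = np.diff(idx)
    rng.shuffle(intervals)
    new_idx = (idx[0] + np.concatenate(([0], np.cumsum(intervals)))) % len(onsets)
    out = np.zeros_like(onsets)
    out[new_idx] = 1
    return out

rng = np.random.default_rng(1)
cell_a = rng.binomial(1, 0.02, 1000)
cell_b = cell_a.copy()                      # perfectly co-active pair for illustration
observed = coactivation_similarity(cell_a, cell_b)
null = [coactivation_similarity(shuffle_onsets(cell_a, rng),
                                shuffle_onsets(cell_b, rng)) for _ in range(1000)]
threshold = np.percentile(null, 95)
print(observed, threshold, observed > threshold)   # pair counted as correlated if True
```

The same shuffling logic extends to the group test: counting co-active cells per time window in the surrogates yields the 95th-percentile threshold above which a frame is called a network event.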
In our analyses, we did not take into account fields showing a clear rhythmic pattern reminiscent of the giant depolarizing potentials (GDP) pattern (9% of the wild-type (wt) and 11% of all the PINK1 − / − M1 recorded fields). This GDP-like pattern consisted of regularly-spaced network events occurring at a frequency of around 0.1-0.3 Hz in wt and PINK1 − / − M1 networks and was identical in wt and PINK1 − / − M1 networks. We removed it because it represents the last step in the sequence of immature network activities previously described in the cortex, hippocampus and striatum (Crépel et al., 2007;Allene et al., 2008;Dehorter et al., 2011). STATISTICS Average values are presented as means ± SEM and we performed statistical comparisons with Mann-Whitney rank sum test (Prisme TM ). We set the level of significance as (*) for P < 0.05; (**) for P < 0.01; (***) for P < 0.001. THE CORTICOSUBTHALAMIC SLICE AND SPONTANEOUS CALCIUM TRANSIENTS The 32 • tilted horizontal slices contained portions of the M1 cortical region and STN (Figure 1). We checked the presence and functionality of M1 corticofugal axons to the STN with two methods. We visualized M1 corticofugal axons using the axonal tracer DiI (Figures 1A1, 2), or tested their functionality by stimulating the STN area and recording the responses (orthodromic or antidromic) in M1 pyramidal neurons (layers V/VI) (see Figure 5). For technical reasons, DiI and electrophysiological experiments were conducted in different slices cut exactly the same way. Fifteen days after DiI injection in the M1 region, labeled corticofugal axons were seen running in numerous bundles radially through the whole striatum, and in the internal capsule (n = 22/22 slices). In the STN DiI labeling was seen around pitx2positive somas ( Figure 1A3). In PINK1 − / − slices, stimulation of the rostral pole of the STN in control ACSF did not evoke EPSPs or orthodromic spikes in the recorded M1 pyramidal neurons (n = 5) whereas it clearly evoked antidromic spikes. We further assessed the antidromic functionality of the cortico-subthalamic connection by recording antidromic spikes in M1 pyramidal neurons in the continuous presence of blockers of ionotropic glutamate receptors and their effect on the network (see Figure 5A). The above results showed that cortico-subthalamic axons are present and functional in the cortico-subthalamic slice. Single neurons (projection neurons and interneurons) of the deep layers (V/VI) of the M1 cortex in slices from aged-matched wt or PINK1 − / − mice generated spontaneous calcium transients in control ACSF ( Figure 1B). The voltage-sensitive sodium channel blocker tetrodotoxin (TTX, 1 µM) or the calcium channels blocker Cd 2+ (200 µM) blocked all calcium transients (not shown) indicating that they resulted from the opening of voltagegated calcium channels by sodium action potentials. On average, the ratio of active cells to the number of fura2-loaded cells was similar in wt (22 ± 5%, n = 709 out of 3085 neurons in 30 fields) as in PINK1 − / − (22 ± 6%, n = 1034 out of 4763 neurons in 39) M1 networks (P = 0.4). Also, the average number of identified onsets of calcium transients per field (1000 images) was similar in wt (444 ± 222 per 137 s, n = 709 active neurons in 30 fields) as in PINK1 − / − (406 ± 231 per 137 s, n = 1034 active neurons in 39 fields) M1 active neurons ( Figure 1C). 
To disambiguate our terminology, we henceforth refer to the onsets of calcium transients for individual neurons as "calcium transients", reserving the term "network events" for synchronized transient onsets at the network-level. PAIR-WISE CORRELATIONS AND SPONTANEOUS NETWORK EVENTS PINK1 − / − M1 networks contained a significantly higher percentage of correlated pairs of cells (14 ± 11%, n = 4873 neurons in 40 fields) than wt M1 networks (8 ± 8%, n = 3541 neurons in 35 fields) (P = 0.01). The strength of correlation, represented by the similarity of patterns of calcium transients between pairs of cells (1 means that patterns were totally similar and simultaneous), was also significantly higher in PINK1 − / − than in wt M1 networks: 0.44 ± 0.24 (n = 40 fields) vs. 0.26 ± 0.14 (n = 35 fields), respectively (P = 0.0005) (Figure 2A). In the majority of M1 slices from wt (87%) and PINK1 − / − (98%) mice we identified synchronized network events characterized by statistically significant synchronous network-level activity (coactive within a 411 ms time window). Their time distribution had two main patterns: network events at very low frequency (0.007 Hz), which we named single, and irregularly spaced network events synchronized at low frequency (0.03 Hz), which we named random ( Figure 2B). Both types of network events involved a significantly higher percentage of active cells in PINK1 − / − (45 ± 18%, n = 140 events in 39 fields) than in wt (38 ± 15%, n =123 events in 30 fields) M1 networks (P = 0.0004). Also, there was a significantly higher proportion of active cells involved in network events (signal) than in basal activity (noise) in PINK1 − / − (4.4 ± 3.0) than in wt (3.1 ± 2.6) M1 networks (P < 0.0001; Figure 2C). Therefore, the activity is more concentrated in network events in PINK1 − / − than in age matched wt M1 networks. Antagonists of ionotropic glutamate receptors (CNQX-APV, 10-40 µM) decreased by 50% the number of active M1 neurons (from 16 ± 6% to 8 ± 3%, n = 7, P = 0.012) and abolished all network events ( Figure 2D). Subsequent application of the GABA A antagonist gabazine (10 µM) totally abolished spontaneous activity. In contrast, cutting the slices between thalamus and M1 networks to interrupt thalamo-cortical pathways slightly decreased the percentage of spontaneously active cells (22 ± 5% in control and 17 ± 5% in cut slices, P = 0.03), but did not affect the percentage of correlated pairs (8 ± 8% in control and 6 ± 5% in cut slices, P = 0.64) and increased the percentage of cells involved in network events (38 ± 15% in control and 41 ± 6% in cut slices, P = 0.008) (n = 30 M1 fields and 120 events in control slices and n = 8 M1 fields and 29 events in cut slices; Figure 2D). Therefore, pair-wise correlations and network events are driven by local circuit synapses. NEURAL ENSEMBLES The shuffled controls showed that 28% (10/35) of wt and 30% (12/40) of PINK1 − / − networks were organized into ensembles of consistently co-active neurons ( Figure 3A). We found no significant difference between the wt and PINK1 − / − ensembles for average discreteness (Q values), average number of ensembles, or average number of neurons per ensemble (P > 0.05). However, the ensembles were more tightly grouped in PINK1 − / − (107 +/− 3.8 µm, n = 40) than in wt (141 +/− 8.4 µm, n = 38) (P = 0.008) networks ( Figure 3C). 
Thus, despite the absence of detectable difference in their functional structure (in their existence, number, size or discreteness) neural ensembles of wt and PINK1 − / − M1 networks significantly differed in their physical structure. The smaller spatial extent of the ensembles may reflect an anatomical change in pyramidal cell connectivity in the PINK1 − / − motor cortex (for example a higher probability of synaptic contact between neighbors). Given the clear difference in significant pairwise correlation between wt and Pink1 − / − networks (Figure 2A), this lack of differences in ensemble structure was unexpected. To examine this issue, we pooled all calculated Hamming similarity values (significant or not) across recordings to compare the global correlation statistics between wt and Pink1 − / − M1 cortex. The resulting distributions were very similar ( Figure 3B) but translated from each other, suggesting that hypersynchrony in Pink1 − / − M1 cortex was due to a uniformly-random increase in correlation compared to the wt circuit. Consistent with this hypothesis, we found that randomly increasing ∼10% of wt correlation values by a randomly chosen value between 0.05 and 1 perfectly transformed the wt distribution into the Pink1 − / − distribution of correlations (black line in Figure 3B). We thus concluded that the difference of pair-wise correlation between wt and Pink1 − / − M1 cortex is not related to a difference of ensemble structure but results from a uniformly-random increase in correlation. To understand the relationship between the neural ensembles and network events, we then examined the participation of ensembles within those network events. Following Feldt et al. (2013), we counted an ensemble as participating if at least half its members were active during the network event. We found that ensemble participation was very weak in both wt (11 +/− 0.5%, n = 404 events) and PINK1 − / − (11 +/− 0.6%, n = 497 events) networks; median participation was 0% in both wt and PINK1 − / − networks ( Figure 3D). However, there was a clear difference in the distributions of ensemble participation (P = 0.0017, two-sample Kolmogorov-Smirnov test): as Figure 3D shows, PINK1 − / − networks had a higher probability either of no ensembles participating at all or of the majority of ensembles participating in a network event. Thus, only in PINK1 − / − networks could network events ever recruit the majority of neural ensembles. CORTICAL DESYNCHRONIZATION DURING STN-HFS HFS of the antero-lateral pole of the STN (STN-HFS) significantly decreased the percentage of M1 active cells to the number of M1 fura2-loaded cells, from 100% before HFS to 67 ± 26% (P < 0.01 paired t-test) and significantly increased the average number of onsets of calcium transients per M1 active cell to 170 ± 90% (n = 212 active cells in 9 fields, P < 0.05 paired t-test). STN-HFS significantly decreased the percentage of correlated cell pairs from 16 ± 12% to 5 ± 4% in PINK1 − / − M1 fields (n = 9 fields with 212 active cells, P = 0.03, paired t-test), a value not significantly different from the percentage of correlated cell pairs in wt fields (8 ± 8%, n = 35, P = 0.23 unpaired t-test; Figures 4A-C). After the same time of recording (at least 3 min) but without HFS (data not shown), the percent of correlated M1 active cells in non-stimulated PINK1 − / − slices non-significantly increased (to 185 ± 310%, n = 732 active cells out of 3386 imaged cells in 27 fields, P = 0.72 paired t-test). 
STN-HFS also significantly decreased by 25% the amplitude of unique or randomly distributed network events in PINK1 − / − M1 fields (152 active cells out of 768 imaged cells in 7 fields, P = 0.03 paired t-test). Low frequency (0.1-10 Hz) STN-HFS failed to produce these effects (not shown) that were also not due to the recording duration ( Figure 4D). Finally, STN-HFS had no significant effect on the percentage of correlated cell pairs in wt M1 fields (9 ± 6% pre HFS vs. 10 ± 7% during STN-HFS, n = 5 fields with 105 active cells, paired t-test, P = 0.5). Collectively, these observations suggest that STN-HFS produced a de-correlation of PINK1 − / − M1 networks. Neural ensembles were present both before (2 out of 9 recordings) and during (4/9 recordings) STN-HFS. STN-HFS did not significantly change the proportion of neurons comprising an ensemble (29 ± 5% of significant ensembles before HFS, n = 7 and 29 ± 5%, n = 14 during-HFS, P = 0.94), the discreteness of the ensembles or the physical size of ensembles. Interestingly, HFS significantly increased the participation of ensembles in network events from 6.5 ± 0.8% per network event before HFS to 24 ± 1% per network event during HFS (P ∼ 8 * 10 −14 , two-sample Kolmogorov-Smirnov test; see Figures 3E, F). These changes were due to HFS since after the same recording duration (at least 3 min) without HFS, the percent of M1 active cells non-significantly decreased compared to control (to 94 ± 36%), the number of onsets per active cell non-significantly increased (to 130 ± 78%) (n = 732 active cells out of 3386 imaged cells in 27 fields) and the amplitude of network events did not significantly increase with time (to 116% ± 69% of the initial amplitude, n = 18 fields, P = 0.73 paired t-test; Figure 4D). To study the mechanisms of action, we recorded in whole-cell configuration PINK1 − / − M1 pyramidal neurons. 100 Hz STN-HFS evoked antidromic spikes that we studied in the continuous presence of CNQX-APV (10-40 µM, n = 8/12 neurons). These spikes were antidromic as they were not preceded by EPSPs, had a fixed latency, collided with spontaneous spikes and followed a short train of 100 Hz stimuli ( Figure 5A). Therefore, stimulation of the rostral STN at 1-100 Hz evoked antidromic spikes that propagated along cortico-subthalamic axons. When we performed the same experiment in the absence of APV-CNQX we still recorded antidromic spikes but failed to record orthodromic EPSPs in pyramidal neurons in response to STN stimulation (n = 0/15). To get rid of the antidromic spike that could occlude short latency EPSPs, we hyperpolarized the recorded pyramidal neuron to V m = −80 mV. Even in this configuration where other pyramidal cells were still antidromically invaded, orthodromic excitatory responses were absent. This suggests that functional subthalamo-cortical synapses (Degos et al., 2008) are rare or absent and that recurrent collaterals between pyramidal neurons were not activated by antidromic stimulation. We next determined whether HFS-evoked antidromic spikes also activate GABAergic interneurons via axonal collaterals of pyramidal cells. We stimulated the STN (100 µs, 100 Hz) and recorded GABA A -mediated IPSPs in around 10% of pyramidal neurons (n = 1/11, V m = −50 mV; Figure 5B), suggesting that a small fraction of GABAergic interneurons was activated by antidromic spikes. This was confirmed in slices from aged-matched wt mice expressing GFP in GAD neurons. 
The stimulation of the rostral STN (100 µs, 0.1 Hz) in the continuous presence of gabazine (10 µM) to block GABA A receptors, evoked glutamatergic EPSCs from around 10% of GABAergic interneurons (voltage clamp mode, V H = −70 mV, n = 1/12; Figure 5C). Therefore, HFS-evoked antidromic spikes can also engage GABAergic signals in the effects of STN-HFS. DISCUSSION We show here that PINK1 − / − cortico-cortical networks are engaged in hypersynchrony at juvenile stage, well before motor symptoms. This early signature is dopamine-insensitive but STN HFS-sensitive. The mechanism of HFS-induced desynchronization includes invasion of the cortical network (pyramidal cells and interneurons) by HFS-generated antidromic spikes. Since, in our preparation, the only connections between M1 and STN originate from layer V/VI to the STN, it appears that the rescue of cortical hypersynchronization results from distal actions of HFS via antidromic axonal spikes. This study provides a first insight into early cortical hypersynchronization of a rodent model of a familial form of PD. Further experimental investigations will be needed to understand how it evolves in the course of the disease, once dysfunction of dopaminergic transmission has begun and dopamine-sensitive synchronizations develop. The synchronous network-level activity recorded here is reminiscent of cortical up states of the wt mouse primary visual cortex (Mao et al., 2001;Cossart et al., 2003) and of the so-called avalanches, the spatio-temporal clusters of synchronous activity interrupted by periods of low activity described in cultured slices from wt cortex (Beggs and Plenz, 2003;Yang et al., 2012). Pair-wise correlations and network events were synaptically and locally produced since they were blocked by antagonists of ionotropic glutamatergic and GABAergic channels but were still present after the mechanical interruption of the thalamo-cortical pathway. The vast majority of the glutamatergic excitatory synapses originate in the cortex itself, with recurrent excitatory interactions in groups of neurons inducing the slow rhythmic depolarizations (depolarized "up" states). Pyramidal-pyramidal neuron connections play a fundamental role in the generation of synchronized network events (Deuchars et al., 1994;Markram et al., 1997;Thomson and Deuchars, 1997;Morishima and Kawaguchi, 2006;Berger et al., 2010;Sippy and Yuste, 2013). Recurrent excitations occur both locally within a minicolumn and distally through cortico-cortical connections and intracortical horizontal fiber systems (Foehring et al., 2000;Douglas and Martin, 2004;Kalisman et al., 2005;Song et al., 2005;Perin et al., 2011). The larger number of correlated pairs and increased strength of correlations in PINK1 − / − vs. wt M1 cortex may result from (i) a difference of spontaneous activity in the network; or (ii) a change in the number and/or strength of synaptic connections (Mao et al., 2001). The former hypothesis can be ruled out because we did not identify any change of spontaneous activity (number of active cells or mean frequency of onsets). Whether the second hypothesis is valid remains to be determined and this might be a difficult task because of the very low "hit rate" for recording synaptically coupled layer V pyramidal cells in paired recordings (Markram et al., 1997). Indeed in spite of systematic recordings in various experimental conditions to facilitate the occurrence of EPSPs, we failed to evoke them by antidromic invasion of pyramidal collaterals in PINK1 − / − as in wt M1. 
Several lines of evidence led us to postulate that chronic L-dopa treatment of PINK1−/− pups could reverse the excess of synchronization of the juvenile M1 network. The midbrain dopaminergic pathway is present early in development (Specht et al., 1981) and projects to the superficial and deep layers of the M1 motor cortex (Descarries et al., 1987; Lewis et al., 1987). Dopaminergic receptors of the D1 and D2 subtypes are present in rodent M1 (Boyson et al., 1986; Dawson et al., 1986), dopamine decreases the probability of glutamate release in layer V pyramidal neurons via presynaptic D1 receptors (Gao et al., 2001), and dopaminergic signaling in M1 is necessary for synaptic plasticity and motor skill learning (Hosp et al., 2009; Molina-Luna et al., 2009). The lack of effect of L-dopa treatment that we found could result from an inadequate dose of injected L-dopa for an optimal spontaneous release of dopamine in the pup M1 cortical network (Plenz, 2006, 2008), but the most likely hypothesis is that juvenile M1 hypersynchronization is independent of dopaminergic transmission. This is in agreement with our observation that the levodopa-sensitive signature of dopaminergic dysfunction in the striatum, the giant GABAA currents, is not yet present in medium spiny neurons of 2-month-old PINK1−/− mice (Dehorter and Hammond, 2014), suggesting that midbrain dopaminergic neurons are not yet dysfunctional at that stage. Desynchronization by STN-HFS resulted from the backward modulation of M1 cortical network activity by HFS-evoked axonal spikes that antidromically propagate to a subpopulation of cortical neurons via the hyperdirect pathway (Li et al., 2007, 2012). This is in keeping with studies in hemiparkinsonian rats suggesting that HFS of the hyperdirect pathway is essential for the amelioration of PD motor symptoms (Gradinaru et al., 2009; Li et al., 2012). In our study it is unlikely that the effect of STN-HFS was mediated by orthodromic spikes in the subthalamo-cortical pathway, which impinges upon layer III/IV neurons (Degos et al., 2008), because we did not record orthodromic mono- or polysynaptic excitatory responses in layer V/VI pyramidal neurons. Cortico-striatal neurons whose axons do not project to the pyramidal tract (IT-type) should not be antidromically activated from the STN area, and only a subpopulation of layer V/VI neurons that project to the pyramidal tract (PT-type) (Lei et al., 2004) is probably affected. The thin collaterals reaching the STN (Kita and Kita, 2012) did not always reliably transmit consecutive antidromic spikes (Chomiak and Hu, 2007; Li et al., 2012). STN-HFS-evoked antidromic spikes also propagated in some of the recurrent axon collaterals of cortico-subthalamic neurons on their way to the somas of pyramidal neurons and activated synaptic transmission onto local GABAergic interneurons. These in turn decreased pyramidal neuron activity. The overall result is the desynchronization of the M1 network (Li et al., 2012; Sippy and Yuste, 2013) and a decreased influence of M1 cortical neurons on STN activity. The originality of the present result is that it shows that antidromic activation of a network is sufficient to reverse its abnormal pattern of synchronization, and it emphasizes the potential role of cortical interneurons in cortical desynchronization.
Cardiac Magnetic Resonance Imaging (CMRI) Applications in Patients with Chest Pain in the Emergency Department: A Narrative Review

CMRI is the exclusive imaging technique capable of identifying myocardial edema, endomyocardial fibrosis, pericarditis accompanied by pericardial effusions, and apical thrombi within either the left or right ventricle. In this work, we examine the research literature on the use of CMRI in the diagnosis of chest discomfort, employing randomized controlled trials (RCTs) to evaluate its effectiveness. The review outlines the disorders underlying chest pain and the machine learning approaches for detecting them. The study concludes with an examination of a basic example of CMRI analysis. To provide a comprehensive review, the Scopus scientific database was analyzed. The key issue, based on the findings, is to distinguish ischemic from non-ischemic cardiac causes of chest pain in individuals presenting with sudden chest pain or discomfort upon arrival at the emergency department (ED). Due to the failure of conventional methods to accurately diagnose acute cardiac ischemia, individuals are still being inappropriately discharged from the ED, resulting in a heightened death rate.

Introduction
Chest pain is a commonly encountered ailment in the ED. The prompt and precise differentiation between type 1 acute myocardial infarction (AMI) and other causes of myocardial injury is crucial for individuals with abnormal troponin levels [1]. The notable capability of high-sensitivity troponin tests to predict the absence of AMI highlights the significance of diagnostic methods in identifying the root cause of AMI [2]. CMRI is a valuable technique for stratifying patients in the ED who experience chest discomfort. Unlike invasive coronary angiography, CMRI is a non-invasive procedure that is less expensive and results in a shorter hospitalization duration. Additionally, CMRI provides significant information about the structure, function, tissue edema, and the location and nature of tissue damage in the heart, all of which can assist in determining various etiologies of cardiac injury. CMRI may assist in discriminating between chest symptoms caused by type 1 AMI and supply-demand imbalances caused by acute cardiac non-coronary artery disease. After conducting a comprehensive research review, it was determined that CMRI can assist in determining the etiology of cardiac injury. CMRI, when combined with stress monitoring or other techniques like PET, allows for risk stratification and confirmation of diagnoses without the need for invasive testing. It can assess myocardial edema, perfusion, necrosis, inflammation, and other characteristics, playing a crucial role in diagnosing conditions like MINOCA. Therefore, CMRI is an essential adjunct to accurately diagnose and manage patients, particularly in cases where the troponin levels are abnormal but epicardial stenosis is absent. CMRI offers numerous advantages, making it a powerful and versatile imaging technique in the diagnosis and management of cardiac injuries and related conditions. One of its primary strengths lies in its non-invasiveness, which minimizes patient discomfort and reduces the risk of complications associated with invasive procedures. Additionally, compared to other imaging modalities, CMRI tends to be more cost-effective, making it accessible to a broader range of patients.
The detailed information obtained through CMRI about the heart's structure and tissue damage is invaluable for physicians in determining the underlying cause of cardiac injury. It allows for a comprehensive assessment of various cardiac parameters, including myocardial edema, perfusion, necrosis, inflammation, and other key characteristics. This wealth of information aids in the accurate diagnosis of complex conditions like MINOCA, where traditional diagnostic methods may fall short. An essential aspect of CMRI's utility is its versatility in combination with other diagnostic tools. When integrated with stress monitoring or techniques like PET, CMRI enables risk stratification and confirmation of diagnoses without resorting to invasive testing. This capability proves particularly valuable in cases where clinicians need to assess the extent of cardiac damage and the potential risks posed to the patient's health [17]. Application of CMRI in the Diagnosis of Different Types of Chest Pains CMRI provides a wide range of different diagnoses unrelated to acute coronary syndrome (ACS) that can effectively explain the symptoms, and it also includes incidental findings. The identification of these unforeseen findings could have implications for patient management, leading to the establishment of new diagnoses or the need for further investigations. Between 2011 and 2015, adult patients with suspected ACS who visited an academic ED showed no indications of ischemia on initial electrocardiogram (ECG), had a minimum of one negative cardiac biomarker, and then had CMRI as a component of their diagnostic assessment were prospectively recruited. This finding suggests that CMRI can be used to diagnose symptomatic coronary artery disease (CAD) and potentially non-CAD severe cardiac abnormalities. These considerations may influence its usage in ACS workups in the emergency department [18]. In the investigation of the prognostic and diagnostic utility of CMRI in the diagnosis of ischemic heart disease (IHD), researchers have explored the current improvements, limitations, and future directions. For example, Fagiry et al. examined these aspects to enhance the effectiveness of CMRI in clinical practice. For this upcoming study, a group of 100 individuals clinically diagnosed with ischemic heart disease (IHD) were selected as participants. The findings of this study showed that while CMRI is a comprehensive prognostic and diagnostic tool for assessment of LV function, myocardial perfusion, viability, and coronary anatomy, in the diagnosis of IHD in the patients, it has a sensitivity, specificity, and accuracy of 97%, 33.33%, and 95.15%, respectively [19]. An inherent problem associated with cardiac catheterization and CT coronary angiography is the considerable radiation exposure endured by the patient during the procedure. Consequently, employing CMRI technology to solve such problems is highly useful to patients [20]. Jalnapurkar et al. [21] conducted a study to examine the diagnostic importance of stress CMRI in women presenting with suspected ischemia. The study focused on 113 female patients who underwent stress CMRI, encompassing anatomic, functional, adenosine stress perfusion, and delayed-enhancement photography. Prior to this, these patients had undergone assessment for indications and manifestations of ischemia; however, there was no indication of obstructive CAD detected. 
From 113 patients, 65 were diagnosed with coronary microvascular dysfunction (CMD) on the basis of subendocardial perfusion abnormalities consistent with myocardial ischemia on stress CMRI, 10 with CAD, 2 with left ventricular (LV) hypertrophy, and 3 patients were diagnosed with congenital coronary anomalies or cardiomyopathy that had not been detected in prior cardiac evaluations. The rest (33 patients) were normal. These findings indicate that stress CMRI often reveals abnormalities and offers diagnostic value in identifying CMD in women who display symptoms and indications of ischemia but do not show any signs of obstructive CAD. Stress CMRI appears helpful for diagnostic assessment in these diagnostically challenging people. To detect LGE patterns via cardiac MRI in high-risk patients with right ventricular dysfunction following the placement of a left ventricular assist device (LVAD), Simkowski et al. [22] proposed an unsupervised machine learning (ML) method. They utilized the 17-segment model to extract LGE patterns from CMRI scans performed on patients who had received an LVAD at a medical facility within a 12-month timeframe. Employing an unsupervised ML technique for hierarchical agglomerative clustering, the patients were subsequently classified based on similarities in the LGE patterns. The clusters which resulted from this were then statistically compared. Based on the findings, the application of unsupervised ML to analyze the LGE patterns observed on CMRI has the capability to identify groups of patients who are prone to developing right ventricular failure (RVF). Patients diagnosed with non-ischemic and mixed etiologies of heart failure may face an increased risk of developing RVF compared to those with purely ischemic causes. This heightened risk can be attributed to the extensive involvement of biventricular myocardium indicated by the observed LGE patterns on CMRI. Alsunbuli [23] evaluated several imaging modalities by using their inherent features to benchmark against a simulated ideal test, utilizing a qualitative approach to the comparison, as well as the various societies' guidelines. According to the findings, CMRI poses no danger of radiation exposure but provides lesser resolution than CT. It requires more time from physicians and patients and hence is more demanding. It requires fewer operators than echocardiography and enables the identification of small changes in serial follow-up evaluations, particularly for the LV volume and function. CMRI can also be used during pregnancy. In terms of the drawbacks, it cannot be used intra-procedurally and is contraindicated in the presence of certain pacemakers. CMRI is also susceptible to artifacts that may be detected using a chest X-ray, such as a retrocardiac surgical needle in one instance. Grober et al. [24] conducted a comparison between diffusion-weighted MRI (DMRI) and conventional MRI techniques to detect microadenomas in patients with Cushing's disease. They further evaluated the efficacy of a 3D volumetric interpolated breath-hold examination, a 3D T1 sequence known as a spoiled gradient echo (SGE), which offers enhanced soft-tissue contrast and improved resolution. SGE has better sensitivity for identifying and localizing pituitary microadenomas than DMRI. However, DMRI is rarely used to diagnose adenoma. SGE should be included in the routine MRI procedure for Cushing's disease patients. Moonen et al. [25] used CMRI to determine the frequency of Fabry disease in a group of individuals with inexplicable LGE. 
Fabry disease is a rare X-linked genetic condition with cardiac manifestations such as LVH, contractile failure, and fibrosis, which can be seen as LGE of the myocardium on CMRI. Fabry disease is a critical diagnosis to establish, since lifelong enzyme replacement therapy is available for the deficient enzyme. According to the findings, the presence of unexplained LGE on CMRI could potentially indicate late-onset Fabry disease. Figure 1 depicts the chest pain approach utilized in the ED. A nurse performs triage targeted toward the main complaint throughout the triage screening procedure. The person's signs and symptoms, their onset, personal history, drugs taken, and allergies are all discussed. The presence of breathing and pulse, as well as the detection of circumstances that indicate a high risk of mortality, are evaluated. When a patient complains of chest discomfort, they are referred for an ECG. Following that, the medical team evaluates the patient and prescribes the appropriate treatment. The risk stratification is divided into five stages, and triage was characterized as positive, based on the American Heart Association criteria, when the patient was assessed as a high priority. The studies conducted on the utilization of CMRI in identifying chest discomfort caused by various disorders are gathered in Table 1.
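Before the next section turns to machine learning methods more broadly, the following minimal Python sketch illustrates the kind of unsupervised hierarchical agglomerative clustering of AHA 17-segment LGE scores described above for Simkowski et al. [22]. The data are synthetic, and the segment scoring scale, cluster count, and linkage settings are illustrative assumptions rather than the authors' actual pipeline.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical data: one row per patient, one column per AHA 17-segment LGE score (0 = none ... 4 = extensive).
rng = np.random.default_rng(0)
lge_segments = rng.integers(0, 5, size=(31, 17))

# Agglomerative clustering on the segment-wise LGE patterns (Ward linkage, Euclidean distance).
links = linkage(lge_segments, method="ward", metric="euclidean")

# Cut the dendrogram into a fixed number of clusters so the groups can be compared statistically.
cluster_labels = fcluster(links, t=3, criterion="maxclust")
for k in np.unique(cluster_labels):
    print(f"cluster {k}: n = {np.sum(cluster_labels == k)}")
```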
Utilizing Machine Learning for the Diagnosis of Chest Pain through CMRI Artificial intelligence (AI) and machine learning (ML) are quickly gaining traction in medicine [19,20]. In the coming years, they are anticipated to profoundly change clinical practice, notably in the field of medical imaging [4,29]. AI is a broad term that refers to using machines to perform activities associated with human intellect, such as inferring conclusions through deduction or induction. ML, on the other hand, is a narrower form of computer processing that learns how to generate predictions using a mathematical model and training data. By being exposed to more instances, ML learns parameters from examples and can perform better at tasks such as identifying and distinguishing patterns in data. The most sophisticated ML techniques, known as deep learning (DL), are particularly well suited to this task. DL segmentation methods have recently been proven to outperform classic methods such as cardiac atlases, level sets, statistical models, deformable models, and graph cuts. Nevertheless, a recent study of a number of automated techniques revealed that in more than 80% of CMRIs, even the top-performing algorithms produced anatomically implausible segmentations [30]. When specialists perform segmentation, such mistakes do not occur. To gain acceptability in clinical practice, the automated methods' flaws must be addressed through continued research. This can be accomplished by producing more accurate segmentation results or by developing techniques that automatically detect segmentation errors. By combining automated segmentation with an evaluation of segmentation uncertainty, Sander et al. employed CMRI to identify regions in the images where local segmentation failures occur. They utilized the uncertainty estimates of convolutional neural networks (CNNs) to discover local segmentation problems that may require expert correction. To compare the performance of manual and (corrected) automatic segmentation, the Dice coefficient, 3D Hausdorff distance, and clinical markers were utilized. The findings suggest that combining automated segmentation with manual correction of identified segmentation errors results in enhanced segmentation accuracy and a significant 10-fold decrease in the time required by experts compared to manual segmentation alone [31]. Oktay et al. [32] devised an auto-encoder-based, anatomically constrained neural network (NN) that imposes shape constraints during segmentation training. Duan et al. [33] incorporated atlas propagation to explicitly enforce shape refinement in a DL-based segmentation approach for CMRIs, which was particularly useful when image acquisition artifacts were present. By employing cardiac anatomical metrics, Painchaud et al. [34] devised a post-processing technique to identify anatomically questionable heart segmentations and transform them into plausible ones.
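As a small illustration of two building blocks mentioned above, the Dice overlap used to compare manual and automatic segmentations and a per-pixel uncertainty map that could be thresholded to flag regions for expert correction, the following Python sketch operates on toy arrays. It is not the method of Sander et al. [31]; the masks, softmax probabilities, and the entropy threshold are all illustrative assumptions.

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice overlap between two binary segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * intersection / denom if denom else 1.0

def entropy_map(softmax_probs: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Per-pixel predictive entropy from a (classes, H, W) softmax output; higher values = more uncertain."""
    return -np.sum(softmax_probs * np.log(softmax_probs + eps), axis=0)

# Toy example: compare a hypothetical automatic mask to a manual reference.
manual = np.zeros((8, 8), dtype=int); manual[2:6, 2:6] = 1
auto = np.zeros((8, 8), dtype=int); auto[3:7, 2:6] = 1
print("Dice:", round(dice_coefficient(manual, auto), 3))

# Flag pixels whose predictive entropy exceeds an arbitrary threshold for expert review.
probs = np.stack([np.full((8, 8), 0.6), np.full((8, 8), 0.4)])  # hypothetical 2-class softmax output
uncertain = entropy_map(probs) > 0.5
print("uncertain pixels:", int(uncertain.sum()))
```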
Employing an ML-based method, Park et al. [35] predicted acute myocardial infarction (AMI). The occlusion of coronary arteries is responsible for the occurrence of AMI, and prompt revascularization is necessary to improve the prognosis. However, AMI has been misdiagnosed as other illnesses, and reperfusion delay has been linked to poor outcomes. The authors used ML algorithms to anticipate AMI in patients with acute chest discomfort based on data collected at admission, and the best area under the curve was obtained in this research, demonstrating that ML is a more powerful technique for AMI prediction. Although the fast growth of ML offers many benefits, several issues remain in relation to large-scale clinical use [36]. CMRI involves many scanning layers and a complicated acquisition process, so some low-quality images are inevitable; controlling the quality of cardiovascular images is therefore critical. Different manufacturers, machine types, and MCE scanning parameters all influence ML. At the moment, ML systems based on DL suffer from a lack of explainability: after extensive training, an ML model may identify myocardial fibrosis from an image, yet it might not explain which features it learned in order to reach that conclusion. As a result, explainability is a vital research topic in medical ML [37,38]. To ensure high accuracy and optimize the algorithm, it is crucial to utilize a substantial amount of high-quality data during the initial learning phase of ML, leveraging its inherent capabilities. Furthermore, the acquisition cost and time required for cardiovascular imaging, particularly MCE data from CMRI, are significant. The critical task at hand is to establish a model that can effectively learn an optimal solution even when provided with limited samples. Through transfer learning, it becomes feasible to transfer valuable information from prior ML models to novel models, reducing the data resources required for DL [39]. Table 2 outlines numerous CMRI machine learning applications for diagnosing chest discomfort, based on experiments carried out by various authors in different years. Research Statistics This section presents statistics on papers published in Scopus-indexed journals between 2012 and 2021. To analyze the number of papers and identify the research trend in this field, a data summarization procedure is utilized. To start the evaluation of studies, the related keywords are chosen. The initial keywords are "chest pain" and "CMRI," which are combined in the search using "+" operators, and the search is then limited by year of publication. After analyzing the studies, new keywords are identified based on word frequency. Additional keywords include "heart disease", "machine learning", and "image processing", which are also analyzed in combination with the "CMRI" keyword. The annual number of articles in this range is between 132 and 185 and shows an overall increase: the highest count belongs to 2020, with 185 articles, and the lowest to 2016, with 132 articles. This indicates growing interest in the field and highlights the importance of research in understanding and diagnosing chest pain using CMRI techniques. Research papers constitute the highest proportion among the different types of studies, suggesting that a significant amount of work in this field is dedicated to presenting original findings and contributing to the existing body of knowledge. Other types of publications, such as reviews, chapters, and conference papers, also contribute to the dissemination of information in this domain.
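The per-year and per-type tallying described above can be reproduced in a few lines once search results are available. The Python sketch below uses a small in-memory list of hypothetical records so that it runs as-is; the record fields and titles are illustrative assumptions rather than the authors' actual Scopus export or workflow.

```python
from collections import Counter

# Hypothetical search results ("chest pain" + "CMRI"); in practice these would come from a
# database export, but a small invented sample is used here so the sketch runs as-is.
records = [
    {"title": "CMRI in acute chest pain", "year": 2016, "type": "Article"},
    {"title": "Stress CMRI and MINOCA", "year": 2018, "type": "Article"},
    {"title": "Machine learning for CMRI", "year": 2020, "type": "Review"},
    {"title": "CMRI triage in the ED", "year": 2020, "type": "Conference Paper"},
]

# Keep the 2012-2021 window discussed in the text, then tally articles per year and per type.
in_window = [r for r in records if 2012 <= r["year"] <= 2021]
per_year = Counter(r["year"] for r in in_window)
per_type = Counter(r["type"] for r in in_window)

print("articles per year:", dict(sorted(per_year.items())))
print("articles per type:", dict(per_type))
```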
Furthermore, the research is categorized based on the fields of study, providing insights into the interdisciplinary nature of CMRI research. The majority of researchers, approximately 74%, belong to the medical field. This reflects the significance of CMRI in the medical community for diagnosing and understanding chest pain and related conditions. Additionally, about 8% of the research is related to biochemistry, genetics, and molecular biology, indicating the importance of studying the underlying molecular mechanisms associated with chest pain. Based on the analysis, several countries stand out as leading contributors to CMRI research: the United States, United Kingdom, Germany, and Canada have the highest number of publications, showcasing their active involvement and research output in this field. Implementation Results We offer a small implementation example in this section to provide a clearer overview of CMRI use in the ED. Late enhancement imaging was performed 15 min following gadolinium-DTPA injection utilizing a 3D gradient-spoiled turbo fast-field echo (FFE) sequence that includes an individually designed 180° inversion pre-pulse (Look-Locker) to provide appropriate myocardial suppression [47]. A series of images was acquired using a 2D-sequence approach, which included short-axis images with a 5 mm slice thickness encompassing the entire left ventricle, along with two to three long-axis views. The presence of dark patches within the enhanced myocardium supplied by the infarct artery indicated persistent microvascular obstruction. Various patterns of late enhancement, including subendocardial, transmural, intramural, subepicardial, and diffuse patterns, were detected (Figure 2). To visualize cardiac edema, a T2-weighted turbo spin-echo sequence was employed, along with a fat saturation pulse. Images were acquired in a contiguous short-axis orientation, covering the entire left ventricle, with a slice thickness of 15 mm. Myocardial edema was defined as a relative myocardial signal intensity exceeding 2.0 times that of skeletal muscle. Coronary artery disease leads to myocardial damage, which can be identified through subendocardial or transmural late enhancement patterns. In contrast, acute myocarditis is often associated with late enhancement characterized by a diffuse, intramural, or subepicardial pattern.
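As a small numerical illustration of the edema criterion defined above (myocardial signal intensity more than 2.0 times that of skeletal muscle on T2-weighted images), the following Python sketch computes the ratio from two regions of interest. The ROI pixel values are invented for illustration; this is not the study's actual analysis code.

```python
import numpy as np

def edema_ratio(myocardium_roi: np.ndarray, skeletal_muscle_roi: np.ndarray) -> float:
    """Relative T2 signal intensity of myocardium versus skeletal muscle."""
    return float(np.mean(myocardium_roi) / np.mean(skeletal_muscle_roi))

# Hypothetical ROI pixel intensities drawn from a T2-weighted short-axis image.
myo = np.array([410, 395, 428, 402, 417], dtype=float)
muscle = np.array([180, 175, 192, 188, 184], dtype=float)

ratio = edema_ratio(myo, muscle)
print(f"SI ratio = {ratio:.2f} -> edema: {ratio > 2.0}")  # a ratio above 2.0 meets the stated edema criterion
```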
Patients with ST-segment elevation myocardial infarction (STEMI) had the greatest levels of creatine kinase (CK), troponin-I, and leukocytes; these levels declined progressively across patients with non-ST-segment elevation myocardial infarction (NSTEMI), acute myocarditis, and takotsubo cardiomyopathy (Figure 3). In terms of C-reactive protein (CRP) levels, patients with acute myocarditis exhibited the highest initial and peak values. There were statistically significant differences in the levels of CK, troponin-I, and the initial CRP among the different groups. The ventricular volumes and ejection fractions also differed considerably. Acute myocarditis patients had the greatest LV volumes. The LV ejection fraction of STEMI patients was considerably lower than that of NSTEMI patients (p = 0.006). Acute myocarditis patients had substantially greater RV volumes than the other categories (p = 0.03). In the group of patients experiencing their first episode of severe chest pain, wall motion abnormalities were detected in 95/95 (100%) cases of STEMI, 51/68 (75%) cases of NSTEMI, 18/27 (66.7%) cases of acute myocarditis, and 12/12 (100%) cases of takotsubo cardiomyopathy; the observed differences were statistically significant (p < 0.001). A random distribution of wall motion anomalies was seen in individuals with acute myocarditis, whereas the aberrant wall motion in individuals with takotsubo cardiomyopathy was concentrated in the midventricular-apical regions. Discussion This research work focuses on investigating the literature concerning the utilization of CMRI for diagnosing chest discomfort. The study encompasses a comprehensive analysis of chest disorders and explores the application of machine learning techniques in their detection. Moreover, the research concludes by providing a detailed illustration of the fundamental aspects of CMRI analysis.
To ensure a thorough investigation, the Scopus scientific resource was extensively reviewed, allowing for a comprehensive examination of the topic. The primary concern addressed in this study is the differentiation between ischemic and non-ischemic cardiac causes of chest pain in individuals who present with sudden chest pain or discomfort upon their arrival at the ED. Conventional diagnostic methods struggle with accurately diagnosing acute cardiac ischemia, causing inappropriate discharge and increased mortality rates. CMRI can enhance accuracy and prevent misdiagnoses, emphasizing the importance of effective utilization. The overall objective of the study is to improve the diagnosis of chest discomfort by boosting CMRI's capacity to identify abnormalities and investigating machine learning techniques. By addressing the shortcomings of traditional diagnostic techniques, it seeks to improve patient outcomes, lower death rates, and enhance cardiology. This article has the following limitations, despite its positive aspects: • Selection bias: The article does not provide details about the criteria used to select the studies included in the research review. It is important to consider that studies with positive results may be more likely to be published, while studies with negative or inconclusive results may be overlooked. This selection bias can lead to an overestimation of the effectiveness of CMRI in diagnosing chest discomfort. • Interpretation bias: CMRI interpretation requires expertise and subjective judgment. The article does not mention whether the researchers or reviewers were blinded to the clinical information. If they were not blinded, knowledge of the patient's clinical status or symptoms could introduce bias into the interpretation of CMRI findings. • Interobserver variability: Different observers' interpretations of CMRI may differ. The study offers no inquiry into whether the analysis employed several reviewers or whether steps were taken to evaluate and reduce interobserver variability. The trustworthiness of the study's findings may be impacted by different reviewers' inconsistent interpretations of CMRI data. • Lack of gold standard comparison: Although the article cites the use of CMRI as a substitute for traditional diagnostic methods, it supplies no information regarding the reference standard or gold standard that was used to determine the accuracy of CMRI. It is difficult to adequately assess the genuine diagnostic performance of CMRI without a direct comparison to a recognized gold standard. • Generalizability: The study populations' characteristics or the environments in which the investigations included in the research review were carried out are not described in the article, which limits its potential to generalize. The findings' applicability to other patient demographics or healthcare environments must be taken into account. Depending on the patient's demographics, comorbidities, and access to knowledge and resources, CMRI may or may not be useful for detecting chest discomfort. • Potential conflicts of interest: Conflicts of interest that might have existed between the researchers or the funding sources are not addressed in the publication. Financial ties to businesses that make CMRI equipment or drugs related to it can skew the results of studies. Any conflicts of interest must be disclosed in order to maintain transparency and reduce potential bias. 
• In addition to the previously mentioned limitations, it is important to address the practical applicability of cardiac MRI in daily clinical practice: 1. Feasibility in emergency settings: The paper does not thoroughly discuss the feasibility of performing cardiac MRI in emergency departments or emergency rooms (ED/ER). Given the time-consuming nature of cardiac MRI, it may not be practical to perform this imaging modality in acute situations where timely interventions are crucial. 2. Resource utilization: Cardiac MRI requires specialized equipment, trained personnel, and dedicated facilities. Assessing the availability and allocation of these resources, as well as their cost-effectiveness, is crucial in understanding the practicality and sustainability of widespread cardiac MRI implementation. 3. Patient selection criteria: Not all patients with chest discomfort or suspected cardiac conditions may be suitable candidates for cardiac MRI due to factors such as contraindications, patient stability, or the urgency of intervention. Understanding the limitations of and specific indications for cardiac MRI in the emergency setting is essential for its optimal utilization and decision-making. • Limited evidence in the acute setting: The article predominantly focuses on studies conducted in the chronic/subacute setting, where patients were likely admitted to the ward. The lack of effective data on the clinical application of MRI in the acute setting raises concerns about the generalizability of the findings to emergency situations. The article should acknowledge the limitations of the available evidence and highlight the need for further research specifically targeting the acute setting. The generic messages derived from predominantly chronic/subacute studies may not be directly applicable or reproducible in acute clinical scenarios. In order to advance the field of CMRI and its application in diagnosing chest discomfort, several key areas for future research have been identified. One area of focus is the refinement of machine learning approaches. By leveraging the power of artificial intelligence, researchers aim to develop more robust algorithms that can analyze CMRI data with increased accuracy and efficiency. This would enable clinicians to obtain more reliable and timely diagnostic information, leading to improved patient outcomes. Another crucial aspect is validating the effectiveness of CMRI in real-world clinical settings. While CMRI has shown promise in research studies, it is essential to assess its performance in everyday clinical practice. By conducting large-scale studies and comparative analyses, researchers can gather valuable insights into CMRI's diagnostic capabilities and identify any limitations or challenges that need to be addressed. Exploring advanced imaging techniques is also a priority for future research. This includes investigating new CMRI sequences and protocols that can provide even more detailed and comprehensive cardiac information. By pushing the boundaries of imaging technology, researchers can potentially identify subtle cardiac abnormalities that may have been previously missed, thereby improving diagnostic accuracy. Furthermore, improving the diagnosis and management of MINOCA is an important area of research. CMRI has shown promise in identifying the underlying causes of MINOCA, such as myocarditis or microvascular dysfunction. 
Future studies should aim to refine the CMRI protocols specifically tailored for MINOCA diagnosis, leading to more personalized and effective management strategies. Integration of CMRI with other imaging modalities is also a promising avenue for future research. By combining the strengths of CMRI with other imaging techniques, such as PET or coronary angiography, a more comprehensive assessment of cardiac function and perfusion can be achieved. This multimodal approach has the potential to provide a more holistic understanding of cardiac conditions, aiding in treatment planning and monitoring long-term outcomes. Conclusions The management of patients with chest pain or discomfort is a common and challenging clinical problem. In this paper, the review of CMRI research highlights its practical implications for emergency department management, providing comprehensive information on cardiac structure, tissue damage, and myocardial fibrosis. Machine learning methods, particularly deep neural networks, have potential for accurate diagnosis and treatment planning. Future CMRI advancements aim to develop accurate, adaptable methods for routine clinical applications, ensuring efficiency in emergency settings. This involves the academic community, healthcare institutes, and medical imaging industry integrating research findings. CMRI and machine learning advancements improve patient care and decision-making in emergency departments. Such methods offer detailed assessment of cardiac conditions, improving risk stratification and accurate diagnoses. CMRI's evolution will significantly impact emergency department management of chest pain patients. To enhance the credibility of future research, it is essential to prioritize the inclusion of RCTs and prospective studies specifically conducted in acute settings, focusing on individuals presenting with sudden chest pain or discomfort upon arrival at the ED. RCTs play a crucial role in providing a more robust assessment of CMRI's diagnostic accuracy and its potential utility in promptly diagnosing chest discomfort in acute cases. By conducting well-designed RCTs, researchers can effectively compare CMRI's performance against other imaging techniques and conventional diagnostic methods, thus yielding more reliable and reproducible findings. These comprehensive RCTs have the potential to significantly aid in accurately differentiating between ischemic and non-ischemic cardiac causes of chest pain, ultimately leading to improved patient outcomes. Moreover, the integration of CMRI in the acute diagnostic pathway has the potential to reduce the rate of inappropriate discharges from the ED, which, in turn, can contribute to a lower mortality rate associated with undiagnosed cardiac conditions.
v3-fos-license
2021-01-07T09:12:06.855Z
2021-01-01T00:00:00.000
234255928
{ "extfieldsofstudy": [ "Political Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2673-5768/2/1/2/pdf", "pdf_hash": "e6fd2c6e0b85e2d8438c3eb68493a7944983643e", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42319", "s2fieldsofstudy": [ "Business" ], "sha1": "63c079972944758a78234c68d7661c4ee5e8c1af", "year": 2021 }
pes2o/s2orc
Augmenting the Role of Tourism Governance in Addressing Destination Justice, Ethics, and Equity for Sustainable Community-Based Tourism : Sustainable tourism development (STD) serves as a founding and guiding concept that can be applied to all forms of tourism, whereas community-based tourism (CBT) has been largely practiced as an alternative form of tourism development. Past research has suggested critical theoretical and practical omissions in both STD and CBT related to issues of community well-being, justice, ethics, and equity. With an objective of bridging these gaps, this research developed an integrated framework of sustainable community-based tourism (SCBT) based on a comprehensive literature review, which identified that there was a significant under-representation of key elements such as justice, ethics, and equity in the domain of governance both in the STD and CBT literatures. The qualitative research mixed emergent data with theory driven data and conducted semi-structured interviews with 40 diverse tourism stakeholders in the twin cities of Bryan–College Station (BCS) in Texas. Results revealed that tourism helped to promote cultural preservation and community pride and promoted the sense of mutual respect and understanding among visitors and stakeholders. However, some ethnic minorities felt they were not receiving full benefits of tourism. The study concluded that a more proactive, inclusive, ethic of care oriented tourism governance to help ensure sustainable tourism development is needed. Introduction Sustainable tourism (ST) development has long been promoted and practiced as an alternative model of tourism development that seeks balanced development while minimizing social, cultural, and environmental impacts of tourism [1][2][3][4]. Various forms of alternative tourism including ecotourism, rural tourism, community-based tourism, agrotourism, volunteer tourism, and responsible tourism have remained in practice since 1980s as an adaptancy approach to sustainable tourism development [5][6][7][8]. Much literature has been published in the past four decades delineating sustainable tourism (ST) development and community-based tourism (CBT). ST has been claimed to be originated by international organizations such as the United Nations (Earth Summit), United Nations Environment Program (UNEP), United Nations World Tourism Organization (UNWTO), and the World Travel and Tourism Council (WTTC) [9,10], while CBT has been claimed to have origins at various local and regional scales spanning across different countries and continents around the world [9,11,12]. However, scholars have suggested that the road to sustainable tourism development has not been straight-forward, including conceptual, implementation, and governance challenges [1,[13][14][15][16][17][18][19]. Further, despite the availability of a plethora of definitions, principles, indicators, criteria and practices related to ST and CBT, there remains little guidance how these diverse perspectives can be integrated to "help inform a sustainability-oriented approach to tourism" [20] (p. 1). Jamal, Camargo, and Wilson [10] outlined the need "to develop a comprehensive framework of justice and care to guide and evaluate sustainable tourism" (p. 4606). In order to address the issue pointed out by Jamal, Camargo, and Wilson [10] and Jamal and Camargo [21], a preliminary framework of "Sustainable Community-Based Tourism" (SCBT henceforth) was developed (for details see [20] (pp. 
65-68)) by conducting a comprehensive literature review (CLR) of both ST and CBT in order to bridge the existing gap (more details on the CLR appear in the literature review section). The CLR identified that three dimensions of sustainable development, economic, social-cultural, and environmental, were given impetus by a majority of scholars and institutions, including [4,22,23]; however, the fourth dimension/domain of sustainability, governance, though ignored by many, has been emphasized by other scholars, including [17,[24][25][26], in the form of governance, institutional arrangements, or political/administrative environments. Further, other scholars including [10,15,27,28] have suggested that salient issues of justice, ethics, and equity related to governance have been largely ignored or misrepresented in both ST and CBT settings. The research argues that the critical elements of justice, ethics, and equity within the dimension of governance have been under-examined in both the ST and CBT literatures. Inclusion of such issues could help tourism planners and practitioners make decisions that are more oriented towards overall community well-being and that align with the principles of the Theory of Justice (Rawls [29,30]), justice tourism, and ethical tourism. Taking those existing issues into consideration, the research had two purposes: (i) to explore (through a CLR) the elements of destination justice, ethics, and equity in the domain of tourism governance and propose an integrated framework of SCBT; and (ii) to conduct an empirical study in the Bryan-College Station (BCS) tourism community, TX, USA, applying selected SCBT criteria to explore the elements of justice, ethics, and equity within the dimension of governance. Literature Review Leading to SCBT Framework In order to develop a robust framework and approach to SCBT, an in-depth and comprehensive literature review (CLR) was conducted, aiming to trace the history and the common and contrasting elements relating to the definitions, principles, concepts, and critical success factors of ST and CBT development, and to explore gaps therein as suggested by earlier studies [10,21]. The CLR followed some of the steps suggested by Arksey and O'Malley [31] and Grant and Booth [32] for the scoping review process. Buckley [33] and Arksey and O'Malley [31] suggested that a scoping review could be undertaken in the same fashion as an exploratory literature review through the systematic application of key search terms. Therefore, the CLR replicated the scoping review techniques suggested by Grant and Booth [32] and included the steps of search, appraisal, synthesis, and analysis (SALSA). The systematic literature review was conducted in two phases from June 2014 through spring 2016. The initial search was conducted in June 2014 using the commercial literature database Business Source Complete. The terms used in the search process included "sustainable tourism; community-based tourism; responsible tourism; (sustainable tourism) (community based tourism) framework/model/criteria/indicators/principles/definitions/certifications" [34] (p. 475). Another expanded, but focused, search was conducted in the Scopus database in spring 2016 following a similar procedure. However, this search also explored domains of governance, including issues such as justice, ethics, and equity.
Overall, around 260 peer-reviewed journal articles, book chapters, and seminal conference papers in the English language received a full review, though not all were considered worthy of reference (for details on the CLR, please see [20,34]). Further, to keep the literature review current, a review of sixteen additional peer-reviewed journal articles and book chapters was conducted in 2020 and added to the research. The CLR made an in-depth exploration of the history of sustainable development and its bearing on ST and CBT, with a focused search on tourism governance, justice, ethics, and equity, which are presented below in some detail. History-The Emergence of "Sustainable Development" Presumably, the first official definition of sustainable development (SD) was forwarded by the World Commission on Environment and Development (WCED), defining SD as "development that meets the needs of the present without compromising the ability of future generations to meet their own needs" [35] (p. 8). Sustainability seemed to emerge as a central theme and a guiding principle for development for governments, businesses, and other private organizations following the WCED initiative. The notion of sustainability, however, had existed in both theory and practice long before this event. Publications such as the Club of Rome's report "The Limits to Growth" (1972) [36] and the first United Nations Conference on the Human Environment, Stockholm, 1972 [37] (para. 1) seem to serve as forerunners of sustainable development. Hardy, Beeton, and Pearson [38] suggested that the Stockholm conference promoted the concept of integrated eco-development, where cultural, social, and ecological goals were interwoven together, requiring equal and serious consideration in the development agenda. The 1972 United Nations Educational, Scientific, and Cultural Organization (UNESCO) Convention forwarded an official definition of the world's natural and cultural heritage sites and made state parties accountable for the protection and conservation of UNESCO World Heritage sites [39]. Hall, Gossling, and Scott [40] contended that sustainable tourism did not receive a high priority in the WCED (1987) report; however, it has become "one of the great success stories of tourism research and knowledge transfer" (p. 2) in the succeeding decades. The UN Conference on Environment and Development (UNCED), popularly termed the "Earth Summit", held in Rio de Janeiro (June 1992), produced a constitution of sustainable development known as "Agenda 21" [41]. As a comprehensive program of action, Agenda 21 was adopted by 182 governments at the UNCED conference. Though not legally binding, "Agenda 21 carries moral and practical suggestions for consideration . . . (which) is considered a blueprint for sustainable development in the 21st century" [20] (p. 14). Further, the declaration of the United Nations' eight Millennium Development Goals (MDGs) in 2000 and the adoption of the 17 Sustainable Development Goals (SDGs) by the United Nations in 2015 have made a "universal call to action to end poverty, protect the planet, and ensure that all people enjoy peace and prosperity by 2030" [42] (para. 1), adding momentum to the journey towards sustainability. Sustainable tourism development, as a sub-sector of sustainable development, endeavors to minimize the adverse socio-cultural and environmental impacts of tourism while enhancing opportunities for community income, community well-being, and visitor satisfaction [12,14,43].
Graci and Dodds [44] suggested the UNCED conference (1992) identified tourism as "one of the five main industries in need of achieving sustainable development" (p. 11). The World Summit on Sustainable Development (2002) emphasized poverty alleviation as a priority area while developing an implementation plan for sustainable development [4], as close ties emerged between poverty and environmental degradation. Emergence of Sustainable Tourism Development (STD) There is evidence that joint institutional endeavors for STD took place earlier than the WCED [35] initiatives. For example, an alliance emerged between UNESCO and the World Bank in the 1970s in the areas of heritage conservation and the financing of tourism infrastructure development. As a joint initiative of these two organizations, a seminar was convened in 1976 "to discuss the social and cultural impacts of tourism on developing countries and to suggest ways to take account of these concerns in decision-making" [45] (p. ix). However, tourism did not receive early recognition as one of the major sectors in sustainable development. Hall, Gossling, and Scott [40] suggested that tourism could not draw much attention in the WCED (1987) report. However, following the joint publication of Agenda 21 for the Travel and Tourism Industry: Towards Environmentally Sustainable Development [2] by the UNWTO, the WTTC, and the Earth Council (EC) in 1995, sustainable tourism development received wider publicity and planning impetus. This report noted that while Agenda 21 (the product of the Rio Summit, 1992) recognized the potential of low-impact nature-based tourism (ecotourism) enterprises, underestimating the size and significance of the travel and tourism industry, Agenda 21 for the travel and tourism industry underlined the urgency of making all travel and tourism operations sustainable, detailing priority areas and guidelines for governments and the tourism industry. The UNEP-UNWTO [4] report and other scholarly works [46,47] identified major stakeholders, including tourism enterprises, local communities, tourists, and governments, that are accountable for the sustainability of tourism businesses. The UNEP-UNWTO report [4] presented a broader definition of sustainable tourism: sustainable tourism development guidelines and management practices can be applied to all forms of tourism, for all types of destinations, including mass tourism and various niche tourism segments [4] (p. 11). The UNEP-UNWTO report [4] emphasized three pillars of STD-economic, social, and environmental-combined with 12 aims. The 12 aims were identified as economic viability, local prosperity, and employment quality (within the economic pillar); social equity, visitor fulfillment, local control, community wellbeing, and cultural richness (within the social pillar); and physical integrity, biological diversity, resource efficiency, and environmental purity (within the environmental pillar) [4] (p. 9). Further, various institutions and scholars have forwarded principles integral to STD, including holistic planning and strategy-making, preserving essential ecological processes, protecting both human heritage and biodiversity, sustaining productivity over the long term for future generations, pursuing multi-stakeholder engagement, addressing global and local impacts, and considering the issues of equity in tourism operations [1,3,4].
CBT and Other Alternative Tourism Approaches to STD Departing from the Advocacy platform [48] of tourism popularized during the 1950s-1960s, which held that tourism is a viable option for development with few negative impacts, the Cautionary platform of tourism (1970s) suggested that tourism can also bring negative impacts to destinations if it is not carefully planned. The Adaptancy platform of tourism was popularized in the 1980s, which favored alternative forms of tourism such as CBT, ecotourism, geo-tourism, responsible tourism, and volunteer tourism in place of mass tourism for destination sustainability [12,49,50]. Some of the characteristics of alternative tourism include small-scale operations, benefitting local people and communities, ownership and management by local residents or the community, sensitivity to the environment, understanding of local culture, heritage, and tradition, and poverty reduction through Pro-Poor Tourism (PPT) schemes [3,5,6,10]. A table is presented below that defines various forms of alternative tourism under the over-arching umbrella of STD (Table 1). Sustainable tourism: "Tourism that takes full account of its current and future economic, social and environmental impacts, addressing the needs of visitors, the industry, the environment, and host communities" [4] (p. 12). Community-Based Tourism: "CBT is generally small scale and involves interactions between visitor and host community, particularly suited to rural and regional areas. CBT is commonly understood to be managed and owned by the community, for the community" [50] (p. 2). Ecotourism: "Responsible travel to natural areas that conserves the environment, sustains the well-being of the local people, and involves interpretation and education" [51] (para. 1). Geotourism: "Tourism that sustains or enhances the distinctive geographical character of a place-its environment, heritage, aesthetics, culture, and the well-being of its residents" [52] (para. 1). Responsible Tourism: Responsible Tourism is about "making better places for people to live in and better places for people to visit, in that order" [53] (para. 2). Pro-Poor Tourism: "Tourism that puts those people living in poverty at the top of the agenda. PPT strategies are concerned with reducing both absolute and relative poverty by providing tourism-related income opportunities for disadvantaged groups" [54] (p. 10). Dimensions of Community-Based Tourism (CBT) CBT is one of the extended functional aspects of "Community" and has been suggested to have three common elements: (1) a geographical area/a locality, (2) common ties/bonding among its people, and (3) social interactions/collective actions [55,56]. Warren [57] defined community as a shared living based on the common geographical location of the individuals, the larger society, and culture. He also presented six approaches for the study of community: (1) community as a space (applied in both rural and urban community studies), (2) community as people, (3) community as shared institutions and values, (4) community as interaction, (5) community as a distribution of power, and (6) community as a social system. Similar to other areas of community engagement such as education, health, infrastructure development, and social services, "CBT shows obvious parallels with broader community development and participatory planning philosophies" [58] (p. 40).
Mtapuri and Giampiccoli [59] presented various modalities of CBT projects and stated they could be initiated from within and outside the community, led by public, private, and non-governmental agencies or a combination of those applying a top-down or bottom-up approach. On the basis of market priority, bottom-up CBT approaches have typically been associated with domestic/local markets, whereas top-down CBT approaches have typically been associated with international markets [59]. Irrespective of forms and shapes, scales, and geographical locations, common objectives of CBT operations include improving local economies, sharing social-economic benefits equitably, environmental conservation, preservation of local culture and heritage, empowerment and ownership of local businesses, and ensuring quality and authentic experience for visitors [12,20,50,59]. In some instances, CBT objectives can be more focused, such as "both poverty reduction and community development [59] (p. 155) and "redistribution of economic benefits among the most vulnerable of groups, such as indigenous communities" [12] (p. 2). CBT is believed to have started "in the early 1980s as the sine qua non of alternative tourism" [8] (p. 206). As an alternative to mass tourism [8,20,50,59], CBT emerged in the context of helping rural communities in the developing world through grassroots development, community empowerment and participation, and capacity enhancement of the local people (see [60,61]). However, evidences suggest that CBT practices can also be found in developed economies in the North, such as Canada during the 1980s [62,63]. Mtapuri and Giampiccoli [59] stated, "Canada's Northwest territories Government was possibly the first government to advance a CBT development strategy in its territory" (p. 155). CBT as an approach to STD has been practiced all over the world [12,59]. The CLR also found that some terminologies such as "Rural Tourism" have often been used alongside CBT in Latin America and other developed nations and alongside "Ecotourism" in Asia. It has further been suggested that sustainable tourism, CBT, rural tourism, and ecotourism carry identical objectives [50]. It was an interesting exploration during the CLR that a number of "critical success factors" (CSFs) for CBT were observed that are common both in the developed and developing countries. Those CSFs were categorized into four key dimensions of community empowerment as proposed by Scheyvens [60,61] and supported by other scholars [49,64,65]. The four dimensions of community empowerment Scheyvens [60] mentioned include economic empowerment, psychological empowerment, social empowerment, and political empowerment [60,61]. Basic elements of CSFs presented by various authors include engagement/participation, community assets, collaboration, cultural and heritage preservation, equity and local ownership, economic benefits, empowerment, leadership, and job opportunities. Environmental protection and management and infrastructure development are other salient factors [66,67] of CBT not to be ignored. These CBT success factors served as guiding resources in developing a preliminary framework of SCBT. Comparing ST and CBT: Similarities and Differences The CLR revealed both commonalities and subtle differences between ST and CBT. 
No considerable variations were found relating to their aims and objectives; rather, CLR revealed that "CBT incorporates the objectives of sustainable tourism with an emphasis on community engagement and development" [68] (3), and it incorporates the dimensions of local control and management of business for poverty reduction and providing the community with supplemental means of income [54]. Further, most scholars believe that in the absence of uniform principles of CBT, ST principles apply to community settings similar to other alternative forms of tourism. It has also been claimed that as overarching concepts, ST principles apply to all forms of tourism including mass tourism or CBT [4, 38,69]. Another common ground seemed to be that both ST and CBT are promoted by international and non-governmental agencies including the World Bank and Global Environmental Facility [70]. Regarding differences, the CLR found that ST principles primarily originated from international public-private organizations including the United Nations' Earth Summit, UNEP, UNWTO, and WTTC, and from tourism scholars and critics mainly from the West [9,10,16,19,43]. CBT, as an alternative to mass tourism, was found to emphasize grassroots development through participation, equity, and empowerment with an emphasis on small and medium-scale projects mostly owned by local businesses [12]. Further, another difference between ST and CBT remains that for "CBT operations, communities (hosts) and tourists (guests) have more mutually beneficial relationships: CBT projects are designed so that benefits/dividends rotate and/or are allocated among residents, and CBT initiatives are initiated by a family or a group based on community assets, sometimes joined by outside business partners" [20] (p. 40). Application of these principles supports retaining economic benefits and enhancing community well-being. Viewed from this perspective, CBT intersects with responsible tourism (RT), as it "benefits local community, natural and business environment and itself" [71] (p. 314). Judged from these perspectives, CBT and ST are neither fully similar nor dissimilar, but possess substantial intersections and overlaps as two focused approaches to tourism development. These findings suggest ST is more of an overarching concept, whereas CBT is a form/approach to tourism development rooted in the locale/community. Critique of ST and CBT It can be argued that the use of sustainable development objectives with institutional guidance (e.g., the UN and its specialized agencies) to monitor progress has been extremely beneficial. Yet, it has been argued they have failed to meet the goals and objectives within the set time-frame and do reframe sustainability goals for future events and conferences such as MDGs and SDGs [19,72]. Mahanti and Manuel-Navarrete [73] charged that the noble concept of SD has been downgraded due to "the meager performance of Rio+20 'landmark' conferences" (p. 417). Garrod and Fyall [13] charged, "Defining sustainable development in the context of tourism has become something of a cottage industry in the academic literature of late" (p.199). Further, STD has been labelled as greenwashing for its failure in balancing social and environmental issues while over emphasizing economic gains. It has also been charged for largely ignoring local voices and not adequately addressing the issues of equity, social justice, and equitable distribution of benefits. 
Johnston [74] stated that significant theoretical and practical gaps were visible with regard to sustainable tourism research and practices. Even CBT seems to have developed a dependency on international markets, global capital, and expertise, though its premise was initially built on local values and empowering communities. To conclude, it can be stated that ST principles are transferable to CBT research and practice settings (or vice versa). However, a close observation of the extant scholarship in the field reveals some theoretical/conceptual, implementation, and governance challenges, as well as omissions of justice, ethics, and equity in the domain of governance. CLR Summary on ST and CBT and Identification of Gaps The literature review provided a systematic and chronological history of sustainable tourism development and its various forms, including CBT. Review of the critical success factors (CSFs) of CBT and analysis of various criteria and dimensions of ST and CBT led to the acknowledgement of well-represented aspects and existing gaps. As stated earlier, the CLR revealed that three dimensions/pillars of sustainability, economic, social-cultural, and environmental, have been emphasized by the majority of scholars and institutions including [4,22,23,75]; however, the fourth dimension of sustainability, governance, though ignored by many, was emphasized by other scholars including [17,[24][25][26][76][77][78][79] in the form of governance, institutional arrangements, or political/administrative environments (for details, see [34], p. 14). The CLR further revealed that a significant number of scholars including [10,15,27,28] (as detailed in Table 2) proposed that issues of ethics, justice, and equity in the domain of governance received less attention or were largely ignored. Jamal, Camargo, and Wilson [10] suggested the need for "a clear framework of justice and ethics" (p. 4594) for sustainable tourism, and this study attempted to address the issue through an empirical study (case study) of the BCS tourism community. Jamal [80] stated that justice is a pluralistic and complicated concept, and it is not easy to separate justice from ethics. It is worth mentioning that in their recent work, Jamal and Camargo [81] reemphasized "justice as a key principle for good governance and policy in tourism" (p. 205). Further, in their very recent (2020) work on justice and ethics, Jamal and Higham [82] claimed "The subject of justice in tourism and hospitality studies is indeed slowly being advanced by the academic community, alongside closely related areas of ethics" (p. 147), underlining the need for further studies/research in this area. Further, Jamal and Higham [82] presented a conceptual, holistic, and interrelated (yet partial and processual) model of Justice and Tourism, wherein they described emerging principles and approaches to Justice and Tourism, which included (1) social justice, equity, and rights; (2) inclusiveness and recognition; (3) sustainability and conservation; (4) well-being, belonging, and capabilities; (5) posthumanistic justice; and (6) governance and participation. Guia's [83] recent (2020) work on justice tourism proposed four ethical approaches to tourism and justice study and research. Past and current research thus point in the same direction: issues of justice, ethics, and equity, together with other interrelated emerging approaches, are in need of further exploration, which underscores the context and importance of what the current research investigated.
Drawing upon the CLR (both past and current), the research developed a preliminary framework of SCBT, as presented in Table 2 below. To serve the purpose of this research, the dimension of governance and critically under-represented issues such as justice, ethics, and equity under the domain of governance are detailed in Table 2 below. Theoretical and Conceptual Perspectives for the Empirical Study Identification of under-represented, yet critical, aspects of ST/CBT such as justice, ethics, and equity in the domain of governance through the CLR directed the study to explore conceptual foundations of the issues under investigation. The conceptual and theoretical perspectives guiding the empirical study are presented below. Rawls' Theory of Justice: In A Theory of Justice, Rawls [29] came up with a notion of justice known as Justice as Fairness (JAF). JAF is theorized as a major departure from the normative, Anglo-American utilitarian ethic that champions for actions to be morally right if the majority of people benefit. JAF, on the contrary, "follows the tradition of Plato and Aristotle and emphasizes a quality of society and quality of persons through reciprocity and a system of cooperation, which is never aimed at perfection" [20] (p. 73). Rawls [29] claimed JAF to be superior to dominant utilitarian ethics. Further, Rawls' "liberal view of society and democracy emphasizes basic equal rights and liberties and fair equality of opportunity for all" [20] (73). Rawls' [29,30] two principles of justice are presented below: (a) Each person has the same indefeasible claim to a fully adequate scheme of equal basic liberties, which scheme is compatible with the same scheme of liberties for all; and (b) Social and economic inequalities are to satisfy two conditions: first they are to be attached to offices and positions open to all under conditions of fair equality of opportunity; and second, they are to be to the greatest benefit of the least-advantaged members of society (the difference principle)" [30] (pp. 42-43). Rawls' [30] explained that, "the first principle is prior to the second; also, in the second principle, fair equality of opportunity is prior to the difference principle" (p. 43). Rawls' emphasized the need of a basic structure of society (that can be associated to governance or institutional mechanism) to facilitate these two principles into action. Jamal found Rawls's justice as fairness to be an "ideal theory of (perfect) justice situated within a liberal social contract tradition. It is oriented towards setting up perfectly just institutions and equity and fairness in distributing society's basic goods" [80] (p. 34) suited to liberal democratic societies. Governance: Various forms and scales of governments have been discussed and suggested in tourism literature. From the perspectives of STD, Bramwell [17] defined governance as, "in order to develop and apply policies for tourism in destinations, there is usually a requirement for knowledge, thought, the application of power, resources and rules, and also coordination and cooperation among numerous actors. Together, these are key features of governance" (p. 459). Justice in Tourism: Higgins-Desbiolles [27] suggested that justice tourism "seeks to reform the inequities and damages of contemporary tourism . . . to chart a path to a more just global order" (p. 345). Though various forms of justice have been discussed in the literature; this study focuses two types of justices-distributive and procedural. 
Distributive justice ensures a fair and equitable (not equal) distribution of social and economic benefits among the members in the community/society, and procedural justice creates a mechanism for fair and just participation by the members of the community/society in decision-making processes that affect them [88,94]. Adding to the definition of justice/ethical tourism, Jamal stated, "Good tourism is tourism that is just, fair, and equitable, and contributes to the well-being of human beings and nonhuman others" [80] (p. 50), including cultural well-being. Ethics in Tourism: The margin of contrast between justice tourism and ethical tourism seems small, as rightly described by Jamal, "Justice, it turns out, is a complicated notion. It's not even easy to separate it from ethics" [80] (p. 28). Ethics has been defined as just and good action in tourism, which recognizes and respects the interest of other community members. Hultsman [85] defined ethics "as philosophical inquiry into values, and as practical application of moral behavior" (p. 554) that is virtuous, moral, and ethical. Jamal and Menzel [89] and Tribe [86] followed the notion of phronesis from Aristotle to better understand ethics in tourism, which included: "knowledge; 'the good'; actions, practice and experience; and disposition" [86] (p. 314). Highlighting the significance of ethics in sustainable tourism, Macbeth [87] attested that "Ethics is a simple imperative for living a moral life: informing all actions are ethical distinctions and decisions, values" (p. 963). Equity/Fairness: Sharpley [3] defined equity as "development that is fair and equitable and which provides opportunities for access to and use of resources for all members of all societies, both in the present and future" (p. 8). Further, UNEP-UNWTO [4] defined social equity as "a widespread and fair distribution of economic and social benefits from tourism throughout the recipient community, including improving opportunities, income and services available to the poor" (p. 18). Ethic of Care: The notion of ethic of care seeks a balance between humans and their socio-cultural environment, which originated from the work of justice tourism [90]. Smith and Duffy [15] argued for the inclusion of ethics of care to supplement justice and fairness for good governance. Jamal, Camargo, and Wilson [10] argued ethic of care denotes "respect for diversity, recognition of difference . . . support of social differentiation and diversity, sympathy, mercy, forgiveness, tolerance, and inclusiveness" (p. 4606). Guia's [83] recent (2020) study of justice tourism presented four ethical approaches to tourism and justice: (1) utilitarian ethics (neoliberalism), (2) deontological/duty ethics (social liberalism), (3) ethics of care (humanitarianism), and (4) affirmative ethics (posthumanism). Application of the first was argued to contribute to unjust tourism, the second to sustainable tourism, whereas the third and fourth contributed to justice through tourism and justice tourism, respectively. Further, Guia (2020) defined ethics of care as "the moral principles of care and benevolence" [83] (p. 509), guided by virtue and based on interpersonal relations. 
Based on the foundations of the CLR, following the proposed framework of SCBT, and guided by theoretical and conceptual underpinnings relating to the principles of governance, ethics, justice, equity, and the ethic of care, the study developed the following primary research question to explore the under-represented issues of justice, ethics, and equity in the domain of tourism governance in BCS, TX:

Research Question: How do the various stakeholders feel about tourism development in BCS, specifically with respect to the distribution of tourism-related goods and resources (Distributive Justice), and with respect to the ethic of care?

The study asked each participant the following semi-structured interview questions:
1. How are tourism revenues (receipts) and goods (benefits) being distributed among the tourism industry?
(a) Do you believe tourism revenues have been distributed fairly among the tourism industry stakeholders?
(b) Were there financial incentives and opportunities to encourage locals to own and operate their own tourism-related businesses?
(c) Did tourism workers in BCS receive a fair wage (in relation to living standards and wages)? Should more be done to provide a "living wage"?
(d) How were the minority-operated tourism businesses and attractions faring? Should they get more assistance from tourism revenues and benefits in BCS?
(e) Are there financial or other incentives (or special programs) for enabling lower income groups and residents (e.g., minority populations) to engage in tourism development?
(f) Overall costs and benefits: Are the costs and benefits of tourism to BCS being fairly distributed? Do you feel you are getting a fair share of the overall benefits? How are the residents benefitting? How are minority populations and low income residents benefitting from tourism?
(g) Who decides how the tourism revenues/benefits are to be distributed? (Government and CVB, but what role does the local industry play here?)
2. How much attention is being paid to fostering cultural pride and respect for the diverse cultural groups (residents) and traditions in BCS (through tourism)?
3. What do you (and other service providers) do to educate the visitors about the diverse history and culture of BCS?

Study Context
The two adjoining cities of Bryan-College Station (BCS), located in Brazos County, Texas, are both home to higher education institutions. TAMU, established in 1876, is one of the largest public universities in the world, while Blinn was established in Bryan in 1883. Further, Texas A&M sports, such as college football, baseball, basketball, and softball, combined with various attractions in the twin cities, such as annual cultural events and festivals, the George Bush Presidential Library and Museum, and historic downtown Bryan, add significance to this place as a tourism destination. Texas A&M's academic events, including Graduation Days, Parents' Weekends, and Ring Days, attract a significant number of visitors from student family members and the alumni community, making BCS feel like a tourist town during those events. Tourism remains one of the major drivers of the BCS economy. A study by Oxford Economics [95] supported the claim, stating, "Texas A&M football is an economic engine, generating substantial business sales, employment, personal income, and local taxes" (p. 6).
Borrowing from Warren's [57] definition of community, BCS is an urban community where CBT participants interact individually and institutionally through an established system of tourism governance to achieve their common goals of socio-economic development, community cohesion, and well-being through tourism. Joint organizations such as the Bryan-College Station Convention and Visitors Bureau and the Bryan-College Station Chamber of Commerce represent and serve both cities in partnership with governing agencies in tourism, such as the cities and county, and individual businesses, including hotels and motels, community organizations, art and cultural groups, and so forth. Against this backdrop, BCS represents a specific case study of urban CBT from a highly developed country, the USA, broadening the avenues of CBT research.

Study Participants and Design
Applying a purposive snowball sampling method [96], the study conducted semi-structured interviews with tourism entrepreneurs and employees in BCS. The researchers also reviewed secondary sources of information (such as published brochures/leaflets/annual reports/websites) from major players in BCS tourism governance, such as the BCS Chamber of Commerce, the BCS Convention and Visitors Bureau (CVB), and the city and County offices in BCS, along with individual participants' business websites/publications. The process helped to gather preliminary information regarding stakeholders' businesses and to provide guidance for upcoming interviews and site visits. The semi-structured qualitative research included both structured (closed-ended) questions and open-ended questions (at the end of the interview). This mixed-method qualitative research combined both theory- and data-driven themes to augment the quality of the study findings. Research participants represented tourism-related associations, city and county offices, community organizations, hotels, motels, and restaurants, including owners of hotels and restaurants, executive- and management-level staff, and employees in frontline and backstage operations such as kitchen and room service (as detailed in Table 3). A total of 53 stakeholders were approached, and 40 of them were interviewed (13 refusals). Nine out of 10 backstage staff (mainly cooks/chefs and housekeepers) represented Hispanic and African-American ethnic minorities, while the other was White. Interviews lasted 45 min on average and took place between April 2015 and August 2016. Regarding ethnic background, 24 participants were Caucasian, eight Hispanic, and eight African-American, with the majority (62.5%) being male. More than 55% were 41-60 years old, and 62.5% possessed an undergraduate degree or more. Of the 40 respondents, 39 were BCS residents; participant #35 resided in the nearby city of Navasota.

Data Analysis
The first author resided in the BCS area for nearly five years in connection with his PhD studies. During the study, the first author attended five cultural events and festivals in BCS, including one of DBA's Thursday Morning meetings, as a participant observer, which helped the author better understand the cultural context of the study area. Following the completion of the first 10 interviews, the first author transcribed each interview verbatim. This early transcription process provided insights that improved the context and clarity of the subsequent interviews with the remaining participants.
A process of iteration (moving back and forth between the literature and the data) was applied during data analysis to grasp a theory-driven picture of the issues being explored. The data analysis process followed the seven analytic procedures suggested by Marshall and Rossman [98] (p. 206), which included (1) organizing the data, (2) immersion in the data, (3) generating categories and themes, (4) coding the data, (5) offering interpretations through analytic memos, (6) searching for alternative understandings, and (7) writing the report or other format for presenting the study. However, the data analysis also followed, in some instances, the direction of Hsieh and Shannon [99] that in directed qualitative content analysis, "codes are defined before and during data analysis", and "codes are derived from theory or relevant research findings" (p. 1286). The study applied guidelines for coding and analysis as suggested by Marshall and Rossman [98] and Hsieh and Shannon [99], where the research question/s specifically guided the emergence of codes for analysis (following structural analysis procedures). Further, an extensive literature review helped establish themes and issues for the analysis process. First, an independent line-by-line coding of each interview took place, leading to the development of common categories structured around the theme/s and research question/s. This systematic process of developing line-by-line codes helped minimize the bias that prior knowledge of the field might have introduced into the formation of codes and categories. The common categories derived were applied in the interpretation of the specific research question/s, leading to the discussion. A comparison of codes and categories across various groups and individual participants was performed with a view to uncovering patterns of similarities or differences [100,101] relating to the issues being explored. By applying a mixed qualitative data analysis approach, this research combined both theory- and data-driven themes (a mix of deductive and inductive methods) to support robust findings.

Findings/Results
Guided by the research question, the study mainly explored distributive justice and the difference principle from the theoretical perspective of Rawls' [29,30] Theory of Justice and endeavored to address the issue of the ethic of care as advocated by Smith and Duffy [15] and several other scholars. Based on the responses received from all 40 participants relating to the research question, findings were categorized under two theory/research-generated themes (and their sub-themes), (i) distribution of tourism revenues and benefits and (ii) consideration to ethic of care, followed by three data-driven themes (which emerged during data analysis). Hotel occupancy tax (HOT) is one of the major sources of tourism revenue in BCS and relates to the mechanism of revenue distribution. Hotels, motels, tourist homes, lodges, inns, bed and breakfast facilities, etc., are required (as per Texas State law) to collect HOT from visitors on behalf of the twin cities and the (Brazos) County office. The HOT money must be reallocated by the city and county offices for, among other purposes, tourism development and promotion, historical restoration and preservation, and/or the establishment of a convention center [102]. At the time of research, BCS had a 15.75% HOT, of which 7% went to the city, 2% to the county, 6% to the state, and 0.75% to Kyle Field (rebuild).
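As a simple arithmetic illustration (the $100 room rate used here is hypothetical and not a figure from the study), the 15.75% HOT collected on a single room night would break down as follows:

\begin{align*}
\text{City share} &: \$100 \times 0.07 = \$7.00\\
\text{County share} &: \$100 \times 0.02 = \$2.00\\
\text{State share} &: \$100 \times 0.06 = \$6.00\\
\text{Kyle Field rebuild} &: \$100 \times 0.0075 = \$0.75\\
\text{Total HOT} &: \$100 \times 0.1575 = \$15.75
\end{align*}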
Distribution of Tourism Revenues and Benefits
Mechanism for distribution of tourism revenues and its beneficiaries: The study found that a majority of participants (n = 22) across groups were aware of how the HOT money was distributed; however, 18 participants, including all 10 back-stage participants, were not aware. Further, some participants (#20, 23, 32, and 34) expressed satisfaction with the effectiveness of HOT money spending and with the functioning of offices such as the cities and county, whereas a few (#6 and 14) commented that more HOT money was allocated to the promotion of College Station than to the City of Bryan. Participants who knew how HOT money was distributed suggested that HOT revenue was distributed by the cities and county to offices such as the CVB, the Arts Council, DBA, Research Valley Partnership (RVP), GBPLM, the TAMU Athletics Department, the Kyle Field rebuild, and the Expo Center, among others. The HOT funds were also distributed to the City Parks and Recreation Departments (in BCS) and to the Chamber of Commerce for some specific programs (#3). Hotel participants (#5, 9, 24, and 25) had detailed information on HOT distribution, except one (#22). Participant #24 shared that there was some divergence of interests between the hoteliers and elected city officials regarding HOT money disbursement. For example, the City of College Station wanted to spend part of the HOT money ($10 million) on a Southeast Park development, whereas the hoteliers preferred either to invest it in Veteran's Park to organize national sports events or to allocate it to the CVB for destination marketing. In the community/cultural group, two participants (#12 and 26) were aware of HOT distribution, whereas the AAC (#30) made a cautionary statement that the HOT criteria carried a degree of power. Government offices (#7, 23, and 28) were on the distribution side of HOT (in contrast to other stakeholders). The City of Bryan (#7) received 1.5% of the general sales tax (GST); the County official (#23) suggested that HOT distribution remained transparent and that community members were satisfied with how the money was spent. HOT money contributed to the construction of the Expo Center (as the people had voted for), which drew visitors and money to BCS and contributed to its overall economy (#23). The City of College Station official (#28) said that 41% of the City's budget was generated through sales tax, primarily contributed by tourism. The GBPLM (#19) spent its HOT money on advertising, and the TAMU Athletics Department (#29) used it to repay facility-use charges at Reed Arena for organizing events. Finally, the antique shop (#27) in Bryan, the Pedi cab participant (#10), and two other participants (#17 and 30) had no knowledge of the HOT distribution mechanism. None of the participants in the back-stage group (#31-40) had any information relating to HOT distribution mechanisms, but two participants (#32 and 34) guessed that the distribution should be fair. As some participants suggested, one reason many backstage participants lacked information relating to HOT spending was that their jobs required neither public relations work nor outside participation.

Stakeholder influence in the distribution of tourism revenue and benefits: The study showed that stakeholders played a significant role in the distribution of tourism revenues by submitting proposals or through participation in HOT discussions. Two participants (#1 and 2) suggested they were influential in obstructing the passage of legislation that would otherwise have allowed cities to use HOT money to buy land.
However, the City of Bryan official later told the first author that the city had no plan of that nature, nor did it intend to draft that kind of legislation. One hotel participant (#24) shared that hoteliers protested the potential allocation of $10 million in HOT money by the City of College Station for the development of a Southeast Park, though the city official later clarified that the proposal was being reviewed and that they listened to and evaluated stakeholders' inputs before making program decisions. It seems that stakeholders tried to find common ground on conflicting issues. Participants (#1, 2, and 8) emphasized that the working collaboration between the CVB and the BCS Chamber of Commerce was effective in keeping Texas A&M football from being relocated to Houston during the Kyle Field expansion, reportedly saving BCS businesses.

Financial incentives to locals and to ethnic minorities to run tourism business: The study found that the majority (n = 26) of study participants (including the cities and county offices) suggested there were no policies or financial incentives for locals or ethnic minorities to run tourism-related businesses, except for a few incentives from associations and tax breaks offered by the cities (#1 and 12). The Chamber of Commerce launched a reward program for historically under-utilized businesses (HUB), and it operated a month-long Youth Leadership Program/Scholarship (for high school juniors) targeting economically challenged youth (though applications were open to all). The City of College Station and the Lodging Association participants noted that the U.S. Small Business Administration (U.S. SBA) could have startup incentives for low-income and small-scale businesses. Festival grants from the CVB and the Arts Council supported event celebrations for all ethnic groups. The Brazos County official explained that an increase in tourism could itself be considered an incentive, as it generated more earnings for the cities and county and helped decrease resident taxes, which benefitted low-income people as well. At least 19 participants reported having seen tourism businesses such as hotels, motels, and restaurants operated by ethnic minorities, including Asians, Hispanics, and African-Americans, except for a few (#31, 34, 35, and 36), who had not seen such businesses run by ethnic minorities. Some participants (#15, 18, and 28) opined that equal competitive conditions were needed and that incentives to one group would not be fair to another. One restaurant participant (#18) suggested incentives existed for new businesses such as Santa's Wonderland, which attracted visitors to BCS. One hotel participant (#24) referred to a protest in which 40 hotels strongly opposed incentives granted to one business (Atlas Hotel, LP, in Bryan had been granted $7 million by the City of Bryan in the past), on the grounds that they were neither fair nor competitive. The city official clarified that the incentives were given in line with policy decisions to foster overall economic development in Bryan. Statements by participants (#24, 30, 32, 33, and 39) that not a single hotel in BCS was owned by the African-American community left room for further exploration, as two participants (#24 and 30) linked this to historic discrimination and racism, and participant #30 added that administrative criteria and scrutiny did not help African-Americans establish or succeed in business.
Two housekeeping participants (#39 and 40) said such financial support was important, as some businesses may need start-up money, but it should be equal for all (#33).

Stakeholders' perception of fairness of tourism revenue distribution: The study found that stakeholders' perception of HOT money (tourism revenue) distribution was highly positive. For 13 participants, it was fair and transparent, and some agreed the promotions launched by the CVB and DBA were fairer and more accountable (#14), while others were not sure if they received a fair share (#21) or had no idea or comment on how it was distributed (#24 and 29). Others opined it could be improved through sharing. It is worth mentioning that the 10 back-stage staff expressed no opinions on these issues. All association and hotel participants and a majority of restaurant participants (#11, 13, 14, 15, and 18) believed a fair distribution system was in place and that they enjoyed benefits in a fair manner. The Fiesta Patrias participant (#16) looked for more financial support and visitor education, and the AAM (#26) shared that they had received no invitation to HOT distribution discussions (though such meetings are public) but that the organization was satisfied with the way HOT money was distributed. The Advent GX (#12) asserted that a just system prevailed regarding the distribution of costs and benefits. Government offices also found it to be fair. The City of College Station official (#28) shared that HOT distribution remained justifiable (not perfectly equitable). The Brazos County official (#23) suggested there was transparency in HOT distribution and that the community remained satisfied with the nature of HOT distribution. In general, the distribution of tourism revenues/economic benefits [11,12] was perceived to be fair (meeting one of the criteria of CBT success), and a majority of stakeholders in HOT-receiving groups agreed with this statement.

Consideration to Ethic of Care
Living standards and wages of tourism workers: The study explored and examined the living standards and wages of tourism employees and found that almost all participants (n = 34) across groups agreed tourism workers received an average or above-average salary, but a few shared it was not enough for kitchen/housekeeping staff (#3, 11, 16, and 24) or could be improved to better support the whole family (#3, 16, 24, 30, 33, 34, 35, and 36). The CVB participant (#1) stated that BCS frontline staff received standard pay, but she was not sure about back-stage staff pay. She stated:

I do not know on the lower back of the house type positions, . . . I think that there is so much competition in our community that they have to pay them well to keep them.

Other association group participants (#1, 2, and 8) stated they paid their staff a living wage or better, but the Arts Council (#3) thought government agency staff were better paid, whereas non-project staff received below-average pay. All respondents in the government offices suggested that people working in BCS tourism were well paid or received above-average pay. Most participants from the restaurants (#11, 13, 14, 15, 18, 20, and 21), hotels (#5, 9, 22, 24, and 25), and cultural groups (#12, 16, 17, and 30) suggested their workers earned more than the industry average or above the minimum. Some participants associated pay with quality of service or performance levels (#13, 21, and 24) and suggested their servers and bartenders earned a good amount through tips (#11 and 14).
Hotels/restaurants (#20, 22, 24, and 25) had internal staff training and development programs to improve quality. Back-of-the-house staff were said to face fewer promotion opportunities than frontline staff (#22). The Fiesta Patrias participant (#16) suggested that waiters and other lower-level workers were hit harder, having gone without a raise in wages for the past seven years. With $700 in bi-weekly pay, those waiters and lower-level workers were marginalized, the participant (#16) added. The AAC participant (#30) claimed wages were not on par with the revenue. All participants in the backstage group (#31-40, all ethnic minorities except #31) received the minimum wage ($7.25 per hour as required by law) or higher. Two respondents (#31 and 36) asserted they were paid "pretty decent wages" and "get paid pretty good", though they thought it was not enough ($10.40 an hour for #36) to support a family. Participants (#32, 33, 34, 35, and 39) found their wages to be enough for a single person only and wanted to be paid more. Another common concern among a majority of backstage participants (#31, 32, 33, 34, 35, 36, 38, 39, and 40) was that, in sharp contrast to the football season (Texas A&M home games), hotel occupancy in BCS remained lower during the summer, thereby reducing weekly work hours (ranging from 15-30 h for housekeeping staff to 30-36 h for participants #33, 35, and 36). According to the Living Wage Calculator [103] for Brazos County, TX, a single adult required a living wage of $10.99 per hour compared with the minimum wage of $7.25 per hour (as of 2016). The study found that a majority of back-stage respondents (except #38 and 40) earned below the Living Wage Calculator standard, even though they received more than the minimum pay. It has been suggested that issues of living standards and wages, specifically for backstage staff, be addressed from the perspective of the ethic of care [15]. All backstage participants (#31-40) suggested their department/unit (some small in nature) offered fewer promotion opportunities. Based on participants' feedback (#31, 32, 34, 37, and 39), the study found that educational qualifications, experience, skills, job sincerity, and training positively contributed to job promotion, and that promotion was not linked to race or ethnicity. For example, one respondent (#34, an African-American female) with some college education was promoted from housekeeper (HK) to HK supervisor after a year. A majority of housekeeping and kitchen participants were given on-the-job training by their hotels/workplaces (2-3 weeks long for kitchen staff), while some managers received out-of-city training (#24 and 25). Backstage staff seemed to lag behind in educational achievement; however, failing to recognize, with an ethic of care, the concerns of an integral part of the staff (not understanding whether they feel they are treated, paid, and promoted well) has been argued to compromise ethical values in tourism [15,21,85,87]. Whether the staff have received a proper degree of ethic of care needs further research.

Resident benefits of tourism including the minority/economically disadvantaged groups: The study found that participants across all groups (#1-30) and 8 of the 10 backstage participants (#31-40) highlighted various benefits of tourism to residents, including ethnic minorities and economically disadvantaged people.
The benefits included:
(i) City facilities and services upgrading and development spurred by GST and tourism, including the Veteran's Park upgrade, access to facilities for kids, and free pre-natal and medical services (stated by 14 participants).
(ii) Enhancing quality of life and community attractiveness (#1, 2, 3, 4, 8, 9, 13, 16, and 17).
(iii) Sales tax helping to lower property taxes (#1, 2, 3, 4, 7, 8, and 19), and an increase in property values with a reduction in resident taxes (#12 and 30).
(iv) Multiplier economic effects of tourism, including benefits through game days and other events/festivals, more business (as more people came in) and economic benefits to everybody, retention of some local businesses and relocation of other businesses to BCS, opportunities for sharing business with College Station (for Bryan), and opportunities to meet new people (stated by 23 participants).
(v) Enhanced community pride/image, including appreciation of community history and pride; post-visit promotion of BCS; tourism making the town livelier, cleaner, more interesting, more accessible, and safer than 5-10 years ago; and some people showing interest in moving to BCS (suggested by 10 participants).
(vi) Income and employment generation, including an influx of cash/money, job creation, and economic growth (stated by 14 participants).

Associations (#1, 2, 3, 4, and 8) and hotel group participants (#5, 9, 22, 24, and 25) suggested that tourism brought the same benefits to the minority community as to other residents. Participants (#11 and 14) stated that visitors loved the local hospitality. The City of College Station believed all residents shared tourism benefits equitably.

Fostering cultural pride and respect for community/ethnic minorities through tourism: Preservation and promotion of culture and heritage remain central to the ethic of care. Overwhelmingly, 36 study participants concurred that tourism contributed to the preservation and promotion of BCS through the hosting of numerous cultural events and festivals. Numerous events organized in BCS are centered on the "Spirit of Aggieland". The City of Bryan festival grants and matching funds for historic preservation were highly lauded by a majority of participants. Participants across groups (#8, 14, and 18) suggested that various festivals helped preserve and promote the cultural heritage of BCS. Other respondents (#9, 15, 16, 20, 21, and 25) suggested Texas A&M University (TAMU) traditions and the Aggieland culture were the dominant factors in promoting BCS as a tourist destination, as agreed by respondents from the City of College Station and TAMU Athletics, respectively (#28 and 29). The revival of downtown Bryan and its designation as a Downtown Cultural District (by the Texas Commission on the Arts in 2014), based on its history and heritage, was acclaimed as a huge success (by participants #9, 10, 11, 13, and 18). All cultural group participants, joined by the hotel participants (#5, 22, and 25), mentioned that tourism-related festivals, including the Texas Reds and Steaks Festival, the Jazz and Blues Festivals, Fiestas Patrias, and The World Festival, contributed significantly to enhancing cultural pride within the community. An Arts Council participant (#3) stated that the Texas Reds Festival (2015) drew "over 20,000 tourists over two days . . . ". Eight of the 10 backstage respondents underlined the important role culture and festivals played in drawing visitors to BCS.
Local residents and exhibitors perceived events and festivals as proper platforms for sharing and interacting with tourists, whereas visitors expressed respect for the natural/cultural quality and heritage and an appreciation for the place visited [61,104]. Festivals were also credited as income-generating events for vendors and residents and contributed to community cohesion by creating avenues for interaction and entertainment. A significant revelation of the study was that 12 participants across various groups explained that the Texas A&M University culture and Aggieland Spirit reflected the rich cultural diversity and traditions of BCS. Connections built with TAMU, and the maintenance of bonding with the Aggie Spirit, were considered powerful pull factors in attracting TAMU's alumni and other visitors.

Educating tourists about the diverse history and culture of BCS: Some of the principles of justice tourism, such as building solidarity between guests and hosts and promoting mutual understanding, guided by a sense of equality, sharing, and respect [61], including meaningful engagement of tourists with residents, are said to be integral to both the ethic of care and justice tourism [27,61]. Regarding educating tourists, there were mixed opinions, as some respondents assigned the job to the CVB, while others suggested TAMU played a key role in educating tourists. Most participants (24 of 30, not including backstage staff) agreed that they or their staff were directly or indirectly involved in informing visitors about places to go, must-visit eateries in town, and events not to be missed in BCS. All five hotel participants were active in educating tourists. All respondents in the cultural group were active in diverse activities, including communicating with visitors through partner restaurants, conducting tourist tours, delivering lectures, grooming staff, and, for the AAM (#26), showcasing a booth at Texas A&M events. The GBPLM displayed different events on rack cards and conducted a Hall of Champions tour (at Kyle Field), whereas the TAMU Athletics Department organized event venue tours. The Pedi cab participant (#10) took the opportunity to discuss Texas A&M's rich history and tradition with customers. A restaurant participant (#14) shared that local residents "are proud tour guides" of the community. Given the nature of their jobs and limited encounters with tourists, few backstage staff had opportunities to educate or provide information to tourists. The study findings suggest that tourism stakeholders in BCS feel proud to share their history and heritage with incoming visitors. Some tourists described locals/stakeholders as among the friendliest hosts (as shared by respondents #18 and 20), restaurant owners/staff received warm compliments from visitors (#21), and visitors enjoyed quality time in BCS and appreciated the warm hospitality extended to them (#13). A few respondents stated that visitors to BCS were highly impressed by the warm welcoming spirit (e.g., the Howdy! culture) of the local employees, as well as by the local cuisine (#6 and 23). Based on participants' responses, it can be suggested that the safe destination image, coupled with the warm hospitality extended by tourism employees and residents, made many tourists interested in revisiting (16 participants mentioned safety and/or rich hospitality), moving to (#1, 2, 3, 7, 9, and 17), or retiring to (#3, 8, and 17) the BCS area.
Data-Driven Themes
At the end of each semi-structured interview, participants were asked the following open-ended question: Would you like to add any aspects you think are important but were not included/discussed in the questionnaire? Based on participants' feedback, the following three data-driven themes emerged.

TAMU as a Driver of Tourism to BCS: Twenty-two respondents across various groups reported that Texas A&M's sports (including home games) and other educational events attracted a large number of tourists to BCS. As discussed earlier, the cities and County played a major role in the distribution of tourism revenue (HOT money) in partnership with the CVB and other associations. However, the views of some participants supported TAMU's role in tourism promotion, as participant #3 stated, "Because at the end of day, what really drives tourism here is Aggie football, and Texas A&M . . . " Insights coming from participants in the form of data-driven (emergent) themes suggest the need for a broader, collaborative governance in BCS, including TAMU.

TAMU Culture/Aggie Tradition Shapes BCS Culture: Fourteen participants across various groups emphasized that the TAMU culture and Aggieland Spirit (Howdy!) reflected the rich cultural heritage and diversity of BCS, which were prominent factors in attracting its alumni and community. Texas A&M culture (Howdy! Aggie Spirit) emerged as a unifying factor, as many respondents (#1, 3, 4, 9, 15, 16, 21, 24, 25, 28, and 29) highlighted the role of TAMU culture. Further, "A Dose of Aggie Tradition for Newcomers" emerged as another sub-theme, as participants (#9, 10, 11, 15, 19, 28, and 29) supported this statement in various ways.

Game-day Traffic Creates Temporary Social Disruption in BCS: Most of the respondents across various groups suggested that a warm, welcoming spirit toward visitors prevailed in BCS, mainly due to Texas A&M's culture. However, at least 16 participants across all groups suggested that game-day traffic disrupted their routine activities. They took it as a natural phenomenon, and residents and stakeholders were, on the whole, in favor of the games, adapting to game-day traffic for the sake of economic gains (#1, 2, 3, 4, 12, 21, 23, 29, 30, and 32). Stakeholders hence developed coping mechanisms such as leaving town or staying at home, or visiting cinemas, zoos, parks, or malls with children or family members (#1, 2, 3, 4, 13, 23, 28, 29, and 30) on game days.

Discussion
Governance seems to have an important role in addressing the issues of justice, ethics, and equity, which have been suggested to need further research [10,15,20,21,27,28] in sustainable tourism research and practice, including CBT. Guided by the theoretical and research-driven insights of this study, this discussion examines the issues of justice, ethics, and equity in the domain of governance relating to BCS tourism. Stakeholders expressed a great sense of satisfaction with the distribution of tourism revenues and benefits (HOT). Resident/stakeholder satisfaction has been identified as one of the critical success factors of CBT [49,61]. However, concerns expressed relating to the incentives offered to a hotel in Bryan and stakeholders' protest against the City of College Station's proposal to develop Southeast Park through HOT money indicated a need for better collaboration among stakeholders. The incentives granted to a new hotel in Bryan were of grave concern to other hoteliers, as respondent #24 stated, "I'm sure they'll (City of Bryan) remember forty hoteliers showing up in red shirts."
Upon the researcher's request, the City of Bryan official clarified that the city had taken such steps following policy decisions to boost economic development in Bryan. No doubt, those in governance sometimes need to consider the larger benefits for the entire community; however, consensus, as Choi and Sirakaya [105] suggest, rather than conflict [72], has been argued to drive better results in CBT settings. Further, a few participants (#6 and 14) expressed dissatisfaction that the CVB placed greater emphasis on promoting College Station and its sports than on Bryan and its culture. College Station received a bigger share of promotion due to the higher contribution of College Station hoteliers to HOT funding, which the CVB participant suggested was standard procedure. However, the suggestions from participants deserve consideration in future planning. It is worth mentioning that the governance and management of tourist destinations entail a complex network of stakeholders. Therefore, as suggested by Valeri and Baggio [106,107], the inclusion of current research insights, including social network analysis (SNA), at the tourism planning stage could be helpful for BCS destination managers in enhancing stakeholder relationships, defining their roles, and improving mechanisms for service delivery.

Regarding the provision of financial incentives to locals to run tourism-related businesses or for minority-operated businesses, the study observed some practical difficulties. The foundation of the liberal democratic regime in the U.S. ensures equal individual liberty and freedom to all and rejects all types of discrimination or incentives favoring one group. For example, federal EEO provisions help ensure that persons cannot be discriminated against on the basis of race, color, religion, sex, national origin, disability, or age. This could be why a majority of participants echoed the spirit of EEO. Theoretically, EEO is the ultimate goal societies strive to achieve, but in implementation it carries the potential to leave the poor poorer and the rich richer. Instances of systemic racism and discrimination abound in the US. For example, in an examination of Black American entrepreneurship, Gold [108] revealed that race-based disadvantages included "low level of earning, lack of wealth, poor education, lack of experience in a family business, and difficulty in getting a loan" (p. 1712). This suggests that practices of systemic racism are still prevalent in the US. Though a majority of participants, including backstage participants, emphasized a level playing field, a few (#31, 34, 39, and 40) suggested financial incentives would better facilitate startup businesses, though not based on race or ethnicity. It seems that those in tourism governance, including the cities, could address such issues in partnership with financing institutions or the U.S. SBA. The U.S. SBA appears to offer small business loans and preferences for historically underutilized businesses (HUB). In other parts of the world, there are examples such as Fair Trade in Tourism South Africa (FTTSA), where "historically disadvantaged individual (HDIs) are equitably represented in decision-making structures, including but not limited to top management" [93] (p. 740). These provisions seem to empower targeted disadvantaged groups in other countries; however, they may not be equally applicable in BCS, Texas.
With reference to tourism development issues in Third World countries, Crick [109] stated that "benefits from tourism unlike water, tend to flow uphill . . . but the profits go to the elites-those already wealthy, and those with political influence . . . the poor find themselves unable to tap the flow of resources" (p. 317). Crick's [109] statement holds some significance for BCS, where some disadvantaged people (e.g., ethnic minorities) seem to be kept from full participation in tourism businesses owing to historical discrimination, as some respondents (#24, 30, 32, 33, and 39) had not seen a single hotel in BCS owned by an African-American. In a similar study, Blanchflower, Levine, and Zimmerman [110] found that "black-owned small businesses are about twice as likely to be denied credit even after controlling for differences in creditworthiness and other factors" (p. 930). Blackstock [58] also identified the inclusion of social justice and local empowerment as challenges to CBT success. Measured through the lenses of Rawls' A Theory of Justice [29] and Justice as Fairness: A Restatement [30], people from ethnic minorities and economically disadvantaged communities enjoy the fundamental rights of equal liberties and equal opportunities guaranteed by the open, liberal democracy of the US and championed in Rawls' two principles of justice. However, on the implementation side, the difference principle, that economic inequalities are acceptable provided the greatest benefit to the least-advantaged members of society is ensured, seems to face challenges in the current setting. The observation that Rawls offers an account of "distributive justice that is widely, though (he later admitted) not universally applicable" [15] (p. 99) seems relevant here. However, Smith and Duffy [15] mentioned that Rawls' justice as fairness provides an objective way of measuring the competing notions of justice employed by different social groups in varying social contexts. Further, they believed Rawls' account "remains culturally relative (to modern Western societies) rather than universal" [15] (p. 101). This seems worth considering for BCS tourism, as the cities' expenditure of tourism revenues to maintain and build public services and facilities, including education, police, fire services, health, roads and transportation, and parks and recreation, can also be considered "all-benefitting" spending, which all city residents, including the economically disadvantaged, can enjoy. This is similar to Lee and Jan [111], who asserted that CBT development "increases the number of facilities, roads, parks, and recreational and cultural attractions, which benefits residents' quality of life and respects their culture" (p. 368).

From the perspective of the ethic of care, the study findings suggested that tourism employees in BCS were generally treated with care, as they were paid at least the minimum wage or higher for their work. All businesses, small to large, ensured their staff were paid at least the minimum mandated by law, with above-average or higher pay depending on staff skills. The results suggested that a majority of backstage staff faced reduced weekly work hours in the summer; though they were paid the minimum wage or higher, their pay did not come close to the standard set by the Living Wage Calculator [103]. A living wage has been defined as a decent wage, as "it affords the earner and her or his family the most basic costs of living without need for government support or poverty programs" [103] (para. 1).
This definition seems highly relevant to the ethic of care in terms of whether the wages backstage workers earn from their jobs support their livelihoods. In this study, a majority of backstage participants suggested that their wages could have been better to support their families or that their summer work hours could have been increased. This is an area that requires more collaboration and coordination among accountable agencies in tourism governance to explore how these unsung backstage employees might be ensured a living wage or compensated for summer work-hour losses. Smith and Duffy [15] suggested paying attention to the ethic of care and justice to make tourism businesses sustainable, and Shiva [112] contended that sustainability means going beyond "bearing up" and developing a caring attitude toward others while considering their needs. The study also found promotion to be more problematic for housekeeping staff than for front office or other departments, which relates to the ethic of care. It can be surmised that low morale arising from a lack of promotion and bare minimum pay can have adverse impacts on business output; therefore, an ethic of care in addressing staff issues related to promotions or pay raises (though not easy and simple) can be recommended, as supported in many scholarly works [15,21,85,87]. This seems to be another critical issue for BCS tourism to consider in relation to justice, ethics, and equity.

Another positive factor participants cited in relation to an ethic of care was that a significant flow of visitors from outside the community, drawn by college sports and educational events, helped enhance the destination image of BCS. They suggested this enhanced their living standards through added city facilities and additional income, and many visitors showed interest in retiring to, relocating businesses to, or revisiting BCS in the future. This indicated a strong sense of mutual understanding and respect among the community, residents, and visitors (as shared by many participants), fulfilling a requirement for justice/ethical tourism [27,61]. Another positive aspect of BCS tourism was that not a single respondent indicated any adverse impacts of tourism, such as vandalism, littering, or negative cultural impacts, which many travel destinations face [113]. Another issue the study explored was the ethic of care in the sense that tourism made a remarkable contribution to enhancing community pride and respect for the diverse cultural groups and their heritage. A system of support through HOT funding facilitated the celebration of various ethnic festivals. All participants the researcher spoke to during events expressed satisfaction with these festivals and suggested that the festivals carried cultural and economic importance and fostered community cohesion, one of the criteria outlined for CBT success [60,61,65]. Moreover, some participants suggested that Texas A&M's Aggieland or Howdy! culture served as a unifying factor in the revival and promotion of the BCS culture. Similarly, Lee and Jan [111] stated the use of CBT practices can revive local culture and traditions by showcasing their celebrations to tourists. This is an instance of how tourism and sports tourism have been intertwined in BCS to boost the economy and promote cultural preservation. As mentioned earlier, a few new data-driven themes emerged in the study.
A significant number of respondents suggested that Texas A&M University was one of the major drivers of tourism to BCS and that A&M sports benefitted BCS businesses and the community. This finding seems to be in agreement with Oxford Economics' [95] statement that "Texas A&M football is an economic engine, generating substantial business sales, employment, personal income, and local taxes" (p. 6). Another report on hotel occupancy, average daily rate (ADR), and revenue per available room (REV PAR) for College Station, compiled by STR, Inc. (Hendersonville, TN, USA) [114], supports Oxford Economics' [95] findings that home games substantially contributed to the BCS economy through high hotel occupancy on game days (September through November), with high ADR and high REV PAR earned by the hotels (for details, see [20] (pp. 286-289)). To conclude, in contrast to other studies, including [115], the findings of Oxford Economics [95] and STR, Inc. [114] reflect this research's suggestion that Texas A&M football is a key factor driving tourism to BCS; it creates economic opportunities in the community and helps narrow the gap of economic inequity by creating additional jobs. Following Texas A&M's entry into the Southeastern Conference (SEC) in 2012 and the spectator capacity added to Kyle Field, the economic impact of TAMU college football has the potential to increase drastically. Without doubt, TAMU culture has significantly shaped BCS culture, and the social disruption of game-day traffic is a temporary phenomenon that residents have accepted for the sake of economic gains. One participant (#30) described the benefits and burdens of the events as follows: "If you want wealth and development, you want to have revenue, you're going to have cows, you're going to have to have manure. You can't have one without the other."

Practical Implications of the Research
The study presented an integrated framework of SCBT, identifying some under-represented issues in the dimension of governance such as ethics, equitable distribution of benefits and burdens, respect and recognition for diverse values, and distributive justice benefitting disadvantaged populations, among others. The results reemphasized the suggestions of some early and current scholarly works [10,20,21,27,28,80-83] by finding that CBT/ST operations can likely be more sustainable if issues related to justice, ethics, and equity are taken into consideration, while addressing potential benefits for economically disadvantaged persons. Therefore, it is believed the study has implications for those responsible for tourism planning/governance in developing facilitating provisions for the disadvantaged. Suggestions have been made to resolve some of the issues through capacity building, business ownership, empowerment, and a broader, proactive form of governance that engages and facilitates tourism stakeholders, specifically considering the needs of ethnic minorities and disadvantaged communities. The integrated framework of SCBT, which was drawn from a systematic review of sustainable tourism and community-based tourism criteria and was applied for exploratory research in BCS, TX, can have implications for future research and practice. CBT has been utilized around the world to assist communities in improving their socio-economic conditions and overall community well-being. However, few CBT studies have been conducted in the USA.
For example, Lo and Janta [116] present a chronology of CBT projects from 16 countries in Africa, Asia, Latin America, and Oceania (including Australia), but a reference from North America is missing (though the reason is not stated). Further, in a review of CBT and rural tourism, Zielinski, Jeong, Kim, and B. Milanés [117] examined 103 case studies from different parts of the world; several case studies came from other developed nations such as Canada, Australia, and Spain, and just one case study came from the USA. This suggests that while CBT and/or rural tourism have remained in practice in the US for a long time, they have not drawn much attention in mainstream CBT discussions. Viewed from this perspective, the current research holds the potential of sharing more information on CBT practices from a developed economy. In a context where Jamal and Camargo [81] expressed concern about the worrisome state of "how little justice is studied in tourism studies" (p. 207) and Jamal and Higham [82] asserted that justice and tourism research is in its infancy, requiring more "research and praxis to build a robust knowledge base and weave tighter just tourism futures" (p. 155), this research undoubtedly makes a new contribution to the field. Further, owing to the vulnerability of the sustainable tourism paradigm in the setting of an open-market economy and liberal democracy such as the United States, this research offers suggestions for making tourism governance more proactive, collaborative, and facilitative, to better address the issues of justice, ethics, and equity and to contribute to sustainable tourism development. The authors believe this research contributes to the field by enhancing the existing body of knowledge and by addressing some of the gaps in sustainable community-based tourism. Thus, the research also lays a foundation for future research relating to SCBT.

Recommendations
Based on the aforementioned findings and discussion, the likely courses of action offered by the study respondents, and the existing body of knowledge in the field, the study proposes some recommendations. Justice, ethics, and equity have consistently been found to be integral to STD; however, recommendations offered on such issues can only be suggestive (rather than prescriptive), since tourism operations differ across geographical, socio-cultural, and economic contexts. Smith and Duffy [15] commented that whether the scale of tourism developments is "good" or "bad" is morally charged (p. 2); therefore, it is difficult to offer straightforward recommendations and alternatives on ethical and justice issues pertaining to tourism development. However, any scholarly debate and new knowledge forwarded on such critical issues could be helpful in interpreting and communicating why some measures work and some do not in a given context. Therefore, the suggestions offered through this study may be valuable specifically for those in BCS tourism governance to better manage tourism, and they may provide a reference for other SCBT practitioners and researchers.

Need to incorporate TAMU as a part of tourism governance: While making suggestions for improvements, a significant number of participants outlined the influential role of Texas A&M sports and academic events in bringing tourists to BCS.
Further, other participants stated that connections with Texas A&M, including bonding with the Aggie Spirit (Howdy!), served as powerful motivators in attracting its alumni and community to BCS. Feedback from participants suggest the need for incorporating Texas A&M as a part of tourism governance for holistic and strategic tourism planning. This could be a contributing factor in establishing BCS as a year-round travel destination and helping to support jobs and equity issues. Need for more facilitative and enterprising governance: The study found that a majority of participants (n = 26) stated that there were no such policies or financial incentives for locals and for ethnic minorities for tourism related businesses, excepting a few incentives from associations and tax breaks offered by the cities. Further, support for a level-playing field was categorically emphasized by a few participants. However, there were a few housekeeping participants (#39 and 40) who said some financial support from government was important, as some enterprises may need start-up funds, but it should be equal to all (#33). This indicates the need for governance to come forward with some incentives (though equitable), which could facilitate tourism business ownership (such as hotels) by some ethnic minorities (e.g., African-Americans). There are examples in other developed countries such as Australia where a host of government-assistance packages remain available for starting a business for Indigenous people [118]. Organizing/facilitating some informative or entrepreneurship development related workshops regarding the existing or potential support mechanisms through the Chamber of Commerce or U.S. SBA may improve opportunities for those who lack resources and information, and it may help develop new entrepreneurship. Absence of a single hotel owned by the African-American community in BCS, statements from two participants (#24 and 30) linking historic discrimination and racism, and a participant (#30) adding administrative criteria and scrutiny as reasons behind such status suggest the need for tourism governance to plan for equitable investment promotion in tourism to create and maintain a just and equitable society, as propounded by Rawls' [29] in his Theory of Justice. Empowering the community through capacity building: Community empowerment has consistently been argued to be an integral factor of CBT success, including support to individuals and firms for enhancing job related capacity/skills through trainings. As discussed earlier, backstage staff representing ethnic minorities including Hispanics and African-Americans were found to face career promotion challenges, and African-Americans were found to have no known hotel ownership in BCS. If ethnic minorities including Hispanics and African-Americans and disadvantaged groups in BCS are to benefit from tourism, some sort of intervention and facilitation by governing bodies may be required to work out the issues of inequity and for creating a just society, as proposed by Rawls [29,30]. Varying levels and scales of governments specifically in developing countries facilitate and support communities through education and training for capacity building and to develop tourism entrepreneurial skills [50,70,105]. 
If the local BCS governments, through the application of knowledge-based platform of tourism, could replicate programs such as skills and entrepreneurship development in partnership with other agencies, NGOs, or local experts, it could trigger positively in addressing equity and justice issues by engaging more people in tourism and sharing tourism benefits. In free-market capitalist economies, governments are not typically in a position to offer preferences to specific ethnic groups as practiced in some socialist countries. However, organizing capacity development trainings targeting all economically disadvantaged groups could include all ethnic groups without discrimination and benefit all. Recommendations from Mtapuri and Giampiccoli [59] for forming strong partnerships with different agencies included government departments, private sector, and NGOs for capacity building and skills development, which seem highly relevant for BCS tourism governance and community. A prescription for CBT success from these authors may be helpful for BCS tourism governance to more successfully facilitate tourism. A statement by Rawls [30] that "a basic principle satisfying the difference principle rewards people, not for their place in that distribution, but for training and educating their endowments, and for putting them to work to contribute to others good as well as their own" (p. 75) holds significance in this context. Moving beyond sustainable tourism, such measures may contribute to developing equitable, stable, and sustainable communities. Need for rewarding/incentivizing the corporate innovation for ethic of care: Ethic of care has been found to be an issue in BCS tourism, especially for backstage staff representing ethnic minorities. By law, cities and counties cannot move beyond guaranteeing minimum wages in their jurisdictions. However, those in tourism governance could possibly organize public recognition programs for businesses that adhere to an ethic of care for all staff including backstage staff. They could conduct staff satisfaction surveys for recognitions and address issues related to ethic of care (this sounds unusual, though, as guest satisfaction surveys remain common) by allocating funds. Allocation of funds may come through regular tourism revenues such as GST (not HOT money), and they can provide recognitions and awards to outstanding businesses that treat their staff with an ethic of care. It was discussed earlier that the hotel occupancy, ADR, and REV PAR of BCS hotels grew significantly coinciding with home game days. This seems a potential area where responsible agencies in tourism governance in BCS can engage in proactive dialogue with tourism businesses including hotels to somehow compensate backstage staff (for their reduced work-hours in summer) by emphasizing a sense of corporate social responsibility or addressing their career-growth related issues. According to Smith and Duffy [15] and Shiva [112], one of the measures of ethic of care can be expressed through showing concern or through a start of dialogue. Establishing local shuttle services to diversify tourism locations in BCS: Other suggestions from participants relating to justice and equity likely require attention. A few respondents expressed dissatisfaction that College Station and its sports were given more preferences including marketing priority by CVB compared to Bryan and its culture, which could serve as guidelines for future planning by the cities and CVB/s (even for cooperative marketing in the future). 
Another area of concern expressed by the participants was that, given the lack of public transport or shuttle services, tourists remained centered on major attractions in BCS such as Texas A&M University, George Bush Presidential Library and Museum, Downtown Bryan, and Messina Hof. It seems that the development of city transport or shuttle services could help spread visitors across more attractions and lengthen their stay in BCS. This suggestion may not require immediate attention from the BCS governance, but may be important for future planning. Further, based on the success of events and festivals in BCS in drawing locals and regional tourists, the creation of new events and festivals targeting new locations in BCS could help address seasonality issues while diversifying tourism benefits. Conclusions and Limitations Conclusion: This study reinforces that issues of justice, ethics, and equity remain salient for sustainable tourism development; however, they can pose implementation challenges. Many scholars including Bramwell [17], Springett and Redcliff [19], and Boluk, Cavaliere, and Higgins-Desbiolles [72] have stated that ST governance at various scales and levels has largely remained deficient in addressing issues pertaining to sustainable tourism development. This study explored the state of ST and CBT practices in BCS with reference to justice, ethics, and equity and proffered some alternatives to address them through collaborative tourism governance. The research also offered various practical recommendations for the success of CBT in BCS in particular and other CBT settings in general by combining a systematic literature review with an empirical study. Smith and Duffy [15] underlined that "genuine sustainable development is always and everywhere about ethics" (p. 159); however, the authors suggested that it would be hard to find universal solutions for ethical/justice issues or theories; rather, applying ethical values in the context of tourism development can contribute to sustainable development and ensure community benefits. Taking insights from justice tourism as "both ethical and equitable" [61] (p. 104), this research explored justice, ethics, and equity issues in BCS tourism and linked them to a broader spectrum of SCBT. Limitations: One of the limitations of the current study is that it includes only the tourism stakeholders (not tourists and residents) in the two adjoining towns of BCS. The views expressed by respondents working as business owners/managers or staff representing various properties possibly relate more to their individual businesses such as hotels and restaurants and may not reflect the whole spectrum of BCS tourism. The addition of other research participants, including visitors/tourists to BCS and residents (other than the business owners/employees interviewed), could have given the issues being explored a wider representation, which future studies may address. This study borrowed some of the criteria of ST/CBT applied in the settings of socialist democracies, which face some implementation challenges in an open-market, capitalist economy and liberal democracy such as the United States. Policy guidelines and popular practices of "Equal Employment Opportunity" (EEO) in the U.S. disapprove of job preferences for disadvantaged communities (as practiced in South Africa).
Country-specific variations in political, social, and economic regimes and their corresponding practices make the issues of justice, ethics, and equity more complicated in relation to CBT practices, which underlines the need for adapting to localized solutions. As a time-specific, two-year field study comprising only 40 research participants in semi-structured interviews, this research may fall short compared to other cross-sectional and longitudinal studies of a similar nature. Therefore, this research has limitations in the transferability of its findings to other locations or situations. The use of mixed methods could broaden the validity, transferability, and generalizability of this type of research. Hence, it can be recommended that future researchers explore and broaden this area of inquiry. This study also highlighted how issues of justice, ethics, and equity are critical to STD; however, given their complexity and gravity, each of these issues requires a separate and focused examination in the future in relation to tourism governance and sustainability. Given the research limitations, the researchers would like to suggest that decision-making bodies in BCS tourism governance complement the research findings with recent studies. However, given that issues of justice, ethics, and equity have remained less explored in relation to community-based tourism in liberal democratic settings such as the United States, this research holds the potential of providing a reference for other developed, liberal economies as well as developing economies for comparing and contrasting the similarities and differences of SCBT practices. Author Contributions: T.B.D. and J.F.P. both contributed to the preparation, review, and editing of this manuscript. All authors have read and agreed to the published version of the manuscript.
v3-fos-license
2020-06-25T09:07:06.981Z
2020-06-20T00:00:00.000
225657316
{ "extfieldsofstudy": [ "Political Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://centerprode.com/ojsl/ojsl0301/coas.ojsl.0301.01001c.pdf", "pdf_hash": "7d299c8fa7b40fdbf8ff53f193c80f750a274276", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42320", "s2fieldsofstudy": [], "sha1": "9ab9c80cb5c05736fcc44fbac86a7f5ffd61b053", "year": 2020 }
pes2o/s2orc
Negotiating Course Design in the Mexican Educational System Using Complex Thought: A Case Study in Central Mexico The University of Guanajuato joined in a national research project that aims to redesign the type of classroom guides teachers use to plan their coursework. The project places its theoretical framework on the philosophical position of Edgar Morin's complex thought. Through redesign of the classroom guide for a curriculum design course in an MA in Applied Linguistics at the University of Guanajuato, a collaborative ethnography was developed to look at how the interwoven steps of complex thought could be inserted into the course framework in order to see, from the students' perspective, whether there was an impact on their learning. Introduction This collaborative case study was carried out as the result of participation in a week-long workshop in Puebla, in conjunction with a larger network of researchers under the direction of the Normal School (teacher education school) system. The workshop consisted of the construction of a course plan designed by implementing Edgar Morin's philosophical ideology of complex thought and Bloom's Taxonomy, in the sense of building into the course a series of steps that combine the concept of moving from recognition to application, using the course evaluation as evidence. The RECREA research project The University of Guanajuato began this research by considering the renovation of teaching practices that link meaningful learning processes to significant scenarios where their graduates fulfill the educational and social demands of Mexico, linked to a national project under the direction of the normal (teacher education) school system. The aim of the current project is to incorporate research groups composed of university and normal school members in order to link classroom research with improvements in the learning processes of the students and to gain a better understanding of teachers' work in the classroom. In this sense, an emergent problem could be conceptualized as the development of teaching practices, which focuses on practical and theoretical problem-solving activities within the classroom environment. Based on the above, some Higher Education Institutions have generated projects for the Network of Communities for the Renewal of Teaching-Learning in Higher Education (Red de Comunidades para la Renovación de la Enseñanza-Aprendizaje en Educación Superior, RECREA), which emerged in 2017 as an initiative of the Department of Higher Education at the Secretary of Public Education, coordinated by the Department of Higher Education for Professionals of Education and the General Office of Higher University Education (Jiménez Lomeli, 2018).
To this end, the Universities of Guanajuato and Puebla have joined in the effort by carrying out a small research project that looks at the interaction of the students and teachers in a curriculum design course, employing the underlying theory of Edgar Morin on complex thought into the course plan of action in the terms that have been laid out by the RECREA project, where we have inserted a series of classroom steps that focus on taking the student from the stage of recognition to application through guided discussion and focused evaluation tasks. Complex thought The idea of complex thought, coined by Edgar Morin (2011), is considered as a strong component of the theoretical basis for the project as well as classroom action research that leads to a plan for monitoring and evaluating the teaching-learning process and the results. This approach allowed the researchers to analyze the learning process by combining Bloom's Taxonomy with the ideological focus of complex thought. In this case, the student is conceptualized as an integral human being and the course syllabus is built around the student in the form of incremental learning steps. The idea being that both teacher and students deconstruct the learning concepts and practices and then rebuild them together; thus, allowing the student to gain agency in the learning process. Furthermore, it also allows the students to personalize the classroom processes. In turn, the student has a higher probability of applying the conceptual information acquired into actual personal, professional practice. Conceptualizing the students within the framework of complex thinking will provide them with the opportunity to approach and solve problems within their educational reality from an integral and holistic perspective. In this sense, it is fundamental to define the concept of complex thinking, which is seen as complexity in terms of the relationship with the whole, in contrast to the paradigm of simplicity in relation to the obedience of the natural order and the relation of complex thought with interdisciplinarity in opposition to an objective world reality (Morin 1995). Therefore, when talking about complex thinking, it is important to differentiate between "complex" and "complicated", which are often taken as synonyms. But complexity is not a complication, since the second concept is considered as a simple dimension while complexity implies a number of elements, which as Morin, Roger and Domingo (2002) mention is "a framework of events, actions, interactions, feedback, determinations, hazards, which constitute our phenomenal world" (p. 37). Therefore, when complexity is retaken, it would have to refer to a series of conceptions, relationships and interdependencies obtained from a series of knowledge. Regarding classroom practices, it seems possible that taking the small incremental steps articulated in terms of durability, public awareness and training, post classroom success may be more viable. For the RECREA project, complex thought takes up knowledge as something tangled, in disorder, and ambiguous. In the classroom, the teacher is not the one who delivers knowledge as something already finished, fragmented and simplified, but the student is the one who, starting from uncertainty and imprecision, articulates, understands and develops his own critique through a strategic interaction. This is to be combined with Bloom's Taxonomy. 
This particular Taxonomy is founded on principles that are broken into a set of three hierarchical models used to classify educational learning objectives into levels of complexity and specificity. The three lists cover the learning objectives in the cognitive, affective and sensory domains. As Krathwohl (2002) states: Bloom saw the original Taxonomy as more than a measurement tool. He believed it could serve as: a common language about learning goals to facilitate communication across persons, subject matter, and grade levels; a basis for determining for a particular course or curriculum the specific meaning of broad educational goals, such as those found in the currently prevalent national, state, and local standards in Mexico; a means for determining the congruence of educational objectives, activities and assessments in a unit, course, or curriculum; and a panorama of the range of educational possibilities against which the limited breadth and depth of any particular educational course or curriculum could be contrasted (p. 212). In the case of this project, we have used his verb sets as reference points in the sense that we are employing the four dimensions of knowledge that are referred to in his Taxonomy table (Krathwohl, 2002). However, even though we use this framework to illustrate how the student is to be guided through each phase of the class in the course design, the underlying ideology that is being inserted relates to the concept of complex thought in that we are taking the learner from a simple recognition stage to a complex stage of application in the real world. In this specific class (explained in further detail below), we are establishing the ability to analyze and construct the curriculum of a program. The research site The research site is a class in the MA program in Applied Linguistics in English Language Teaching at the University of Guanajuato, which is taught on Fridays and Saturdays in the Language Department. The course selected for the case study was Diseño de Programas de Segunda Lengua (Second Language Course Design), as it was thought to be a suitable selection for piloting the course design proposed by the RECREA Project. Data collection and analysis issues Since the purpose of this investigation is to examine the perceptions of the students in depth, case study was deemed to be an appropriate methodological choice because such studies tend to be intensive in the process of collecting the research data via a number of sources (Denzin & Lincoln, 2000; Creswell, 2005). In case studies, the research data can be collected by using different data collection techniques such as documents, archival data, interviews, direct observation, participant observation and artifacts (Stake, 2000; Yin, 1998). Furthermore, Yin (1998) mentions that when the research scope focuses on answering how and why questions, a case study approach should be considered, and this study looks for possible answers to these types of questions in the form of a qualitative instrumental case study. An instrumental case study is one that is often interested in context and activities. Stake (1995) defines a case study as instrumental "if a particular case is examined mainly to provide insight into an issue or to redraw a generalization. The case is of secondary interest, it plays a supportive role, and it facilitates our understanding of something else" (p. 136). In this case study our focus is on the course syllabus more than the students.
The research data for the case was collected through a weekly journal where the nine participants kept a log of their perceptions of what they had learned after each three-hour class over a period of 12 weeks. Simultaneously, the two teacher/researchers maintained a field journal with ethnographic notes on the course, following the suggestions outlined by Deggs and Hernandez (2018), to serve as a basis for comparing student and teacher perceptions as well as to provide additional data on the case under investigation. Pseudonyms were used to protect identity. Finally, the complete data set was analyzed by all four researchers. Fostering reflection The class initially aimed to bring about reflection amongst the participants with the purpose of having them restructure their beliefs about their past teaching experiences, consider a present perspective about how they go about their current practices, and reconsider this constructed perspective for future teaching practices. A question arose among a number of participants as they noticed the vast amount of reflection that the class entailed. This is summarized by the following student: This class provides a lot of reflection but confusion as well. I don't know what to believe anymore! Was my teaching practice wrong all this time? (Kari 8) The participant indicates the importance of reflection throughout the class. Yet, she is aware of the possible issues that this may entail. Similarly, a participant found herself in an "eye-opening" situation in regard to change and how to go about it, if possible. She recalls: In this session it was analyzed to what extent we possess "freedom to change the system". The teacher asked us a question that was eye-opening to me: "Do you want to be part of the problem?" Certainly, this question made me feel uncomfortable with the way I have been teaching. Although, I'm still struggling to understand what the teacher means by challenging the system since my context doesn't allow people to come up with new ideas or teachers are restricted to follow what it is written on the syllabus. Still, I want to explore in which ways I could contribute to become a teacher that understands and analyzes the content that authorities want us to teach. Probably, by understanding it I may implement some changes. (Lulu 2) This participant found herself in a situation of discomfort concerning her past teaching experiences. She may have been focused on past experiences that were not as fruitful, in which she felt constrained due to impositions from stakeholders. Nonetheless, she aims to better learn about alternatives to implement changes in her teaching practices. A similar case is as follows. The participant analyzes the following: It was addressed about the elements that are involved inside the classroom such as: the task, unit, book, course, syllabus and curriculum. These factors are familiar for me; however, in my professional and personal life, I did not reflect on what was their real meaning… I just considered the lesson plan, and that was everything, and the main reason was because the school provided me the curriculum, the syllabus and the book. Thus, I just accomplished my activities but I did not reflect on the relation among these factors. I feel confused because I wonder about the next question: What did I do in my first years as teacher? (Debbie 10) The participant acknowledges having a sense of understanding of what the different elements to consider for teaching are.
Nonetheless, she seems to not have reflected on any possible connections between one or another as it was all provided for her. This leads to questioning what she was doing in her past experiences. The previous participants acknowledged the role of reflection and the constant recall of their past teaching experiences to better grasp the content being presented in class. This led the participants to begin questioning their previous teaching performance to notice any positive and negative experiences with the aim to begin noticing any challenges that drove to inconsistencies throughout their teaching practice. Presenting an array of teaching options The content of the class shifted towards presenting a number of options for the students to cope with according to their beliefs and practices as English teachers. A number of points of view allowed for the participants to better view their own practices concerning their educational philosophies. A participant presents the following: The participant recalls the value of being aware of the different educational philosophies. She noticed the struggle of teaching a grammatical point under a given philosophy, and the array of possibilities that emerge according to the rest of the options that the other philosophies have to offer. Similarly, another participant was also able to grasp the importance of considering the educational philosophies for distinct teaching practices. She mentions the following: The participant acknowledges how her beliefs on the more suitable educational philosophy do not coincide with that of the last place where she worked. She is now more aware of how her past teaching practices inclined towards a particular philosophy. Becoming aware of the history of these philosophies and where they originate from helps her find a connection between her practice and what the alternative philosophies have to offer. Likewise, another participant recalled the value of being presented with the alternative perspectives concerning the various philosophy types. She highlights the following: This participant recalls how arriving to a constructed philosophy may take time and how this determines how one goes about developing as a teacher. This, in turn, allows for one to know what can be done and improved as a teacher, while at the same time knowing how much control the institution may have over its teachers. An interesting question arises concerning what can be done when forced to cope with norms that go against an established philosophy. This, in turn, linked to the concept of freedom that the teachers have and where the source of change may lay within. A student discusses the following: …the analyzed philosophies made me think about the changes in the society, and how it is necessary to have a perspective that allows to explain the world. We also discussed that it is possible to make small changes in our context, but also how every decision we make has responsibilities. Regarding the philosophy and the approach, we follow to teach, there is not a wrong or adequate way to do it, but to be consistent and coherent on what we do is necessary. (Vanessa 2) This participant notes how having a perspective on the view of the world seems to be the initial point of departure to promote change. She acknowledges that there may not be a right or wrong way to do it, as long as this perspective is coherent with what is being done. She further argues for going beyond what is expected by taking additional action. 
She notes: I consider that we are not limited, if we "think outside the box". Small changes can be done. However, the most important step is to take the responsibility of those changes. Everything that is modified produces certain nuisance. It seems that not many people like changes because they are uncomfortable. In conclusion, there is always a possibility of doing "something more". (Vanessa 3) Once aware of the need to take action and be coherent with one's own teaching beliefs and the actual practice, the participant argues for change. Though change might not lead to a positive outcome all the time, she calls for action in going beyond what is expected as teachers. Change may be seen as vital, yet some participants were not fully aware of their hidden plans within the classroom. The following section discusses how becoming cognizant of their hidden aims for class interferes in how the participants may seem to go about in their teaching practice. Uncovering the unseen aims Detecting the implicit aims that the participants may have had concerning their teaching practice was of crucial importance for them to be more knowledgeable of what and why they do when developing as English teachers. This seems to be of interest for the participants once they became aware of their hidden curriculum and what this may entail. Initial questions arose concerning the impact of having a sense of their hidden curriculum. A participant mentions the following: An important point to consider is if my hidden curriculum is useful or not to my students or their needs. Also, these questions arise: How do I know if my hidden curriculum is good or bad for my teaching practice? How did my hidden curriculum change over the time? (Debbie 4) Raising awareness of the hidden curriculum, the prior example denotes how participants began to reflect on their past experiences and how their unseen plan may have had an impact on their teaching. A participant further elaborates on the role of the hidden curriculum: Open Journal for Studies in Linguistics, 2020, 3(1), 1-12. ______________________________________________________________________________________________ … previous experiences as students shape your teaching practice as teacher. This means that as a teacher, I do not want to repeat the things I consider were not useful in my learning process… I realize that these issues are part of me, of my personality, of my way of thinking. Even when I had been teaching for a couple of years, I have not noticed how these aspects affect positively or negatively on my teaching practice. In others words, I was not completely aware of my hidden curriculum. (Debbie 4) The participants were aware of the issues faced when developing as English teachers though they may not have been aware of what this referred to. Another participant also became cognizant of the importance of being aware of her hidden intentions throughout her teaching. She follows: A concept that was not new for me but did not understand was the hidden curriculum. In this class, I discovered that I follow and implement, either consciously or unconsciously, certain patterns with my students. The reason behind this is that I probably think that these will help them to become better students or somehow will contribute to accomplish their objectives. However, how do I know what they need? Or why am I assuming that they need guidance specifically from me? Probably I am also assuming that since I'm the teacher I know best; therefore, I should provide everything. 
(Linda 5) The participant became acquainted with the unseen aims presented along her teaching practice. In her case, these hidden aims are with the purpose to help her students improve. However, she questions how she may better grasp a sense of what her students need and to what degree they might need guidance from her. As the teacher, she assumes her students expect her to provide the majority of the input and guidance. A participant also related to the previous by highlighting the importance of her hidden curriculum. She comments on the following: It was also very interesting to find out that we all implement a hidden curriculum either conscious or unconsciously. I had not realized that I include some organizational skills within the language content I teach, perhaps because I consider it is something they lack and would be a useful tool for their learning process. But now I think that this practice is based on my assumption of how they learn just because it is the way I learned. Maybe my students don't even need it, and I am including it within the curriculum of my class. This, once again, reinforces the role of a teacher in a learner's learning process as well as how influential we could be in it. (Penelope 5) Like the afore-mentioned participant, this previous participant also recognizes the relevance of her hidden curriculum. She acknowledges having not being aware of it, yet she aimed to incorporate elements according to the possible needs of the students based on her assumptions of what she can do to reinforce their learning process. Further elaborating on the role of the hidden curriculum, another participant became aware of the effects that bringing in additional and hidden elements into the class may have. He notes the following: … I could go in-depth to what I represent as a teacher and to consider how I can reflect my personal interests to my students. Depicting these characteristics is not wrong, but it is essential to be accurate or aware of when this happens (Sandy 5) The participant acknowledges that this notion of a hidden curriculum is not necessarily a negative one. Furthermore, this participant highlights the significance of how accurate this unseen plan is for the students when it is applied at the correct time. Another participant was also able to relate her hidden curriculum with the authority that she may hold in class. This tendency may lead to a more democratic class in which her students are more active participants. She holds: In this session I learned that there is not a complete authoritarian, nor a democratic class and democracy should be conceived as negotiation rather than pleasing everybody. To me, it would be ideal to perform democratic lessons, but I question myself how to achieve this without being afraid of losing control of my own class. Without a doubt, performing a democratic class involves changing my own beliefs as well as noticing when I am implementing elements from my hidden curriculum (Linda 11) The participant is aware of the challenge of modifying her hidden curriculum. She inclines toward a position in which her students become active participants in making the decisions in class. However, this may be a restrictive stance as she may lose authority and face in the process. The participants became intrigued by the notion of the hidden curriculum and how their unseen objectives for their students play a significant role in how they go about in their teaching. 
This impacts them as teachers in the sense that they may or may not be aware of these ulterior motives that they do not make visible for their students, though they may become prominent at some point further on in their education. The role and degree of power A topic of interest for the participants to restructure their thinking processes relates to hegemony and the distribution of power that may be given at a certain school or institute. The concept was first introduced concerning other elements and how power imposition from various sources is present to varying degrees. A participant recalled the issue of broader social control over English teachers and how we may be excepting of being given attention. She expresses the following: When we were talking about the authority inside classroom and democracy the professor made us realize that English teachers are conditioned to do what they are asked to do, so we are always working as other people ask us to do even if we are not happy with that. The professor made us realize that when this happens, again we can see hegemony controlling us again, in every step that we teachers make. I was reflecting on this and the only thing that came up to my mind is that teachers are the only ones who know what happens inside the classrooms, so our voices should be listened to in order to have better job environments and proper designed classes for our students. The concern here is that we are so used to not to being heard that even when something happens inside the classroom that is not appropriate, we tend to be silent and not do anything because we are used to not being heard. (Richelle 10) The participant initially notices hegemony over English teachers in terms of the oppression that we may have within our field. She notices what position we have in the classroom as understanding what happens within the four walls, yet there are external sources that determine how the class goes about. However, hegemony was also viewed from the perspective of the teacher inside the classroom and how varying degrees of control may be fruitful to exploit. A participant mentions the following: In this class, the teachers gave us the elements from which we can analyze a curriculum as well as a syllabus such as the role of students, metalinguistic elements or material design… these elements shed light on what we can approach to become critical teachers and try to explore areas from which we can adjust our practice and identify areas where we may be perpetuating hegemony. (Linda 4) The participant became acquainted with different elements to consider when aiming to explore perspectives from which a teaching perspective may be modified. It is important to highlight how these elements relate to a given power that becomes present and how this power may lead to broader control. Similarly, a participant further expressed how understanding himself as a teacher led to reconsidering his teaching practice in terms of how social power depicts the decisions to be taken. He expresses the following: … I could analyze and try to understand how true I am as a teacher and how this is reflected in my teaching practice. Being aware of knowledge is relevant for us to improve our practices and consider what to do or not based on what is said by superiors for us to do. So, hegemony depicts substantial influence in our construction in social life, but also to raise awareness in the decisions we take and be responsible with them will allow us to avoid mistakes we might later do. 
(Sandy 4) The participant was able to be cognizant of what having social control implies. This relates to not only having a sense of who he is as a teacher, but also how his conception of such is brought into his teaching practice to promote what he beliefs beyond what is imposed by higher stakeholders. This, in turn, is reflected on how the person wishes to mold his teaching practice. Likewise, this understanding of hegemony was able to perpetuate within another participant in terms of having a deeper understanding of who she is as a teacher and once aware of the degree of control to be exploited within the classroom, use this in favor for a more positive experience when developing as a teacher. She mentions: The participant acknowledges how social control is prominent in our society. Moreover, she believes that being more aware of who we are as teachers gives us broader advantages when aiming to promote control within our classrooms. The participant further elaborates on her perspective: …as teachers we have certain power that could be used in a positive way; we can help and guide our students to be more critical about the things established in our society. If we as teachers start to do an internal change of our positions, who we are and the things we are able to do, the change in our society would be of real impact. (Debbie 7) The participant once again calls for initial understanding of who we are as teachers to promote a social change within our teaching practices. These elements seem to go hand in hand, in which one may not perpetuate when lacking the other. The participants were able to grasp the knowledge of having social control, its implications and how to go about using it to their advantage in the classroom. This linked to change and how they could be able to adjust their teaching based on firstly becoming more acquainted with who they are as teachers, to then have control of the varying degrees of power that they are able to use to their abilities based on what the institution may entail. From thinking to action There seems to be an overall positive effect in class in terms of complex thought and taking the students from more critical view of their teaching practice to proposed action based on the content covered throughout the course. A perspective from a participant consists of becoming aware of alternatives to view his teaching practice. Yet, he believes that there may be a possible existing gap between theory and practice. He considers the following: We have lived diverse situations as English teachers, and now we are aware of the theory and ideologies of how the things are and how they "should be". At this point, I would like to observe a kind of free experience for my classes. I see these new ideas and concepts in content classes where students are self-committed, but I would like to see them for language classes as well… I saw different critical functions of what we learned in theory, but there is still a wall to overcome that might take us to the praxis. (Guy 11) The participant is mindful of the content covered in class, yet he views it as an "ideal" state. He seems to be open to trying out alternatives as a possible way to overcome that bridge between theory and practice. Another participant also acknowledged the importance of the content covered in class and expressed the following: There's a before and after of the way I understand and think about education, in general. I am aware of how the system works and mainly, my job and responsibilities in it. 
Yes, it will be hard to change it, but… I am in! (Penelope 12) The participant recalled the importance of the content covered in class for her to be more conscious of how the education system functions and her role within it. She acknowledges the difficulty of promoting change when difficult situations arise or are imposed upon us, yet she is determined to do so. Similarly, a participant also became more cognizant of how she performs as a teacher. She mentions the following: The participant also considered the importance of being aware of her own teaching practice. She admits to become keener to questioning what happens within her classroom as she develops as a teacher. Likewise, a participant also felt a positive sense towards better understanding herself as an English teacher and being true to her ideologies. She conveys the following: The participant seems better prepared to defend her position as an English practitioner. She became knowledgeable of the different options and grasped what seemed best for her. Nonetheless, she accepts the responsibility of performing as she thinks. A similar perspective was taken from a participant who views the relevance of the many aspects surrounding how he develops as a teacher. He considers the following: After discussing how power is represented by different hidden features around the world, I realize that sometimes we as teachers are ghosts pretending to be doing something almost unreachable. In other cases, we are aware of this control or in the discrepancies that language entails, but we are not willing to do or foster for a change. We reproduce a series of systematic steps towards repetition, pretending we are doing something different. Therefore, the relevance of being congruent and consistent in our teaching practice is relevant. Moreover, understanding and trying to break these vicious circles where we are, should be our duty as part of the teaching society. I consider high relevance on how curricula are constructed and how we ignore some elements that are hidden in it. (Sandy 9) The participant views teachers as invisible entities within the curriculum. He proposes his view on how English teachers argue for change when disguising teaching practices that may be repetitive. The importance of breaking this cycle is presented to lead to positive change. Similarly, another participant acknowledges the significance of going a step further in one's teaching practice. She reflects on the following: Today's question that I take for homework is: when are we going to create something new if we don't step outside the box? This is a triggering question that all teachers should be thinking about if we want to see an improvement in our students. Now I understand that when asked to challenge the system, it does not refer to doing whatever we think it is correct, but to understand educational philosophies, and question inconsistencies in educational policies or the syllabus we are following to be aware of what it is going on inside the institution and out of it. (Linda 10) The participant considers the cruciality of going beyond what is expected from the educational system we are in, and not only understanding what happens within our education setting, but outside of it as well in terms of educational beliefs and policies or norms we are to follow. The participants seem to be more knowledgeable of different paths and the array of alternatives to develop as English teachers. 
Each alternative option entails a set of duties and responsibilities for the teacher and a way for the students to benefit from. This raises thought for action amongst the participants as they feel better equipped to promote changes within their teaching practices. Conclusion We can argue that knowledge was delivered as an array of options and the participants were able to grasp what seemed to fit according to their beliefs and ideologies and reconstruct the understandings that best suit each one of them. Gradually, uncertainty was set aside for the students to come to a more critical understanding of who they are as teachers and what they do and aim to do inside the classroom. Nevertheless, we cannot argue that this was the result of the curriculum design. It clearly played a role, but there are also other issues to consider like the teachers themselves and the students. In conclusion the implementation of complex thought in the course design was definitely an awareness raising activity for the teachers and it seems to have played a positive role in the development of the course. It would be appropriate to continue with the same process in more courses and continue tracking the process.
v3-fos-license
2022-02-01T16:11:54.605Z
2022-01-29T00:00:00.000
246436034
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1996-1073/15/3/1034/pdf", "pdf_hash": "461d31f76c7379c54e707b29cc407214ac7d4601", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42321", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "sha1": "2a595a5c79903acdb24d01d5c518d893fa48e995", "year": 2022 }
pes2o/s2orc
Jet Impingement Cooling Enhanced with Nano-Encapsulated PCM In the present study, the laminar flow and heat transfer of water jet impingement enhanced with nano-encapsulated phase change material (NEPCM) slurry on a hot plate is analytically investigated for the first time. A similarity solution approach is applied to momentum and energy equations in order to determine the flow velocity and heat transfer fields. The effect of different physical parameters such as jet velocity, Reynolds number, jet inlet temperature, and the NEPCM concentration on the cooling performance of the impinging jet are investigated. The volume fraction of NEPCM particles plays an essential role in the flow and heat transfer fields. The results show that NEPCM slurry can significantly enhance the cooling performance of the system as it improves the latent heat storage capacity of the liquid jet. However, the maximum cooling performance of the system is achieved under an optimum NEPCM concentration (15%). A further increase in NEPCM volume fraction has an unfavorable effect due to increasing the viscosity and reducing the conductivity simultaneously. The effect of adding nano-metal particles on the heat transfer performance is also investigated and compared with NEPCM slurry. NEPCM slurry shows a better result in its maximum performance. Compared with the water jet, adding nano and NEPCM particles would overall enhance the system’s thermal performance by 16% and 7%, respectively. Introduction Jet impingement cooling (JIC) is a highly efficient technique in heat treatment, thermal management, and cooling of hot surfaces [1]. With relatively simple equipment but the ability to extract high heat flux, this technology is widely used in many industrial applications such as cooling of electronic chips and microelectronic circuits, nuclear power plants, and hot rolling steel strip [2,3]. The impinging jets are categorized in five different configurations, i.e., free surface jet, plunging jet, submerged jet, confined jet, and wall jet [4]. Free surface jet configuration is considered to be a major classification of JIC [5][6][7]. In this configuration, a fluid jet (usually water) exits from a nozzle into an ambient gas (mostly air) and impinges on the target surface. Depending on the phase of coolants or the state of jet flow, free surface jets can be distinguished in single-and two-phase JIC. Single-phase impinging jets have been studied extensively, experimentally and numerically, by various researchers, e.g., [8][9][10][11][12][13][14][15]. A comprehensive review of the single-phase JIC technique and its heat transfer methods has recently been conducted by Ekkad et al. [2]. They reviewed a variety of modifications and applications of the JIC technique focusing on impacting novel design, implementation, and improved manufacturing techniques for heat transfer enhancement. In another recent review study, experimental works on the impingement of multiple liquid jets (arrays of impinging jets) on flat surfaces are reviewed by [16]. Two-phase JIC can refer to jet impingement boiling (JIB [3]) in which the impinging liquid is allowed to boil on a hot surface ( [4,[17][18][19][20][21][22][23][24][25][26]) or jet flow is itself in the two-phase form such as dusty flow jets, e.g., [27]. A comprehensive review of the jet impingement boiling has been published by Wolf et al. [4]. Recently, Mohaghegh et al. [1,3,6] presented a mechanistic model with a similarity solution approach to simulate the jet impingement boiling. 
Two-phase flow jets such as nanofluids and dusty fluids have been investigated by several researchers. Such flows can be treated either as a single phase or as two separate phases. Mohaghegh et al. [27] simulated a three-dimensional stagnation-point flow and heat transfer of a dusty fluid toward a stretching sheet. They investigated the effect of non-axisymmetric velocity components on the surface (velocity ratio), fluid and thermal particle-interaction parameters, and stretching-sheet velocity parameters on the fluid and heat transfer fields. Considering the interaction of the phases in the form of source terms in the governing equations, they employed the conservation equations for each phase separately. Under certain conditions and concentrations of the second phase dispersed in the base liquid, the two-phase flow can be treated as a single phase, with the effect of the second phase accounted for through the effective thermophysical properties of the mixture (i.e., a homogeneous flow comprising the carrier (base) fluid and the dispersed particles). Nanofluid simulations in the literature are good examples of this kind of flow. A recent review study on nanofluid jet impingement cooling is provided by Tyagi et al. [28]. They provided an overview of studies conducted on nanofluid spray/jet impingement cooling with a focus on the jet nozzle and surface configuration, such as the nozzle-to-plate distance, plate inclination, and surface roughness, and the flow parameters, such as the jet Reynolds and Prandtl numbers. Improving the thermophysical properties of the base coolant by adding nanoparticles is an effective method to enhance the thermal efficiency of JIC. The majority of this improvement is associated with the thermal conductivity and specific heat capacity of the advanced coolant [29]. The metallic nanoparticles used in nanofluid JIC referred to earlier enhance the thermal conductivity of the coolant, while nano-encapsulated phase change material (NEPCM) particles dispersed in the base liquid coolant, known as NEPCM slurry, are proposed to improve the heat capacity of the base coolant [29]. Wu et al. [30] experimentally investigated the effect of adding NEPCM particles to water to enhance the performance of jet impingement and spray cooling. They concluded that the NEPCM slurry can enhance the heat transfer coefficient significantly compared to pure water in both jet impingement and spray cooling. Rehman et al. [31] numerically investigated the thermal performance of free-surface jet impingement cooling using NEPCM slurry. They employed the commercial computational fluid dynamics (CFD) code FLUENT to simulate the problem in a fully turbulent regime. A thorough review of the literature reveals that most studies on JIC have been carried out with single-phase coolants, mostly air and water, while very few studies are reported on JIC with NEPCM slurry as the coolant. The existing experimental works provide data over only small ranges of physical parameters, which leads to empirical constants or fitting parameters that may not be valid under other conditions. On the other hand, the existing two-phase numerical models based on CFD or DNS simulations are computationally very costly when resolving the details of phase change, with limited progress in giving accurate results. The numerical simulation of jet impingement flow also involves several complexities, such as free-surface tracking. In this work, JIC enhanced with NEPCM is studied.
The similarity solution approach is proposed to simulate the flow and heat transfer field. Appropriate similarity variables for the governing equations are derived for the current problem. To avoid the complexities that correspond with the interactions between two phases (particle-fluid interaction), the effective thermophysical property approach is proposed and used in the governing equations. To the best of our knowledge, no attempts have been made to analyze jet impingement cooling enhanced with NEPCM analytically. By this reasonably simple approach presented in this study, significant reductions in the complexity and cost of the computations of jet impingement flow simulations are obtained. Furthermore, the effect of adding NEPCM and nano-metal particles on the thermal performance of JIC can be easily investigated and interpreted. Problem Description The axisymmetric flow profile of a free-surface circular jet impinging with the velocity of v_j and the temperature of T_j is illustrated in Figure 1. As the circular jet impinges on the surface, flow is symmetrically diverted around the stagnation point and extends to the surface in a parallel manner. The primary mode of enhanced heat transfer is due to the flow stagnation in the stagnation region (r < d_j/2) [2,3]. Most JIC studies have investigated this region as the heat transfer enhancement is significant in this zone and continuously decreases with the distance away from the stagnation zone [1,3,4,[18][19][20][32][33][34]. The present study also focuses on heat transfer in this region. Due to the balance between the stream acceleration (thinning boundary layer) and viscous diffusion (thickening boundary layer thickness), the boundary layer thickness is constant [35]. Boundary layer velocity (u) and free-stream velocity (u_∞) profiles in the r-direction are indicated in Figure 1.
Mathematical Modeling Considering a steady, axisymmetric, incompressible, laminar boundary layer flow and heat transfer of a viscous fluid in the neighborhood of a stagnation point on a flat plate located at z = 0, the conservation equations in cylindrical coordinates (r, z), with the corresponding velocity components (u, v), are presented as Equations (1)-(4):
Continuity equation:
∂u/∂r + u/r + ∂v/∂z = 0 (1)
r-momentum equation (radial direction):
u ∂u/∂r + v ∂u/∂z = −(1/ρ) ∂P/∂r + ν (∂²u/∂r² + (1/r) ∂u/∂r − u/r² + ∂²u/∂z²) (2)
z-momentum equation (axial direction):
u ∂v/∂r + v ∂v/∂z = −(1/ρ) ∂P/∂z + ν (∂²v/∂r² + (1/r) ∂v/∂r + ∂²v/∂z²) (3)
Energy equation:
u ∂T/∂r + v ∂T/∂z = α (∂²T/∂r² + (1/r) ∂T/∂r + ∂²T/∂z²) (4)
In these equations, P and T represent the pressure and temperature fields, respectively. Parameters ρ, ν, and α are the density, kinematic viscosity, and thermal diffusivity of the flow, respectively. Employing Bernoulli's equation in the potential region, the following relation between the free-stream velocity u_∞(r) and the pressure gradient in the r-direction is specified:
−(1/ρ) ∂P/∂r = u_∞ du_∞/dr (5)
The velocity components of the potential (inviscid) flow near the stagnation point are described as [27,35]:
u_∞ = C r, v_∞ = −2 C z (6)
The C parameter represents the velocity gradient, which is expressed in terms of the jet velocity and the jet diameter as
C = 0.77 v_j/d_j (7)
where the coefficient 0.77 applies to a circular jet [36]. Similarity Solution To convert the partial differential Equations (1)-(4) into a set of ordinary differential equations, the following dimensionless similarity variables are introduced [3,27]:
η = z (C/ν)^1/2, u = C r f′(η), v = −2 (Cν)^1/2 f(η), θ(η) = k (C/ν)^1/2 (T − T_j)/q″ (8)
Substituting these transformations into the momentum and energy equations yields the following non-linear ordinary differential equations:
f‴ + 2 f f″ + 1 − (f′)² = 0 (9)
θ″ + 2 Pr f θ′ = 0 (10)
where f and θ represent the dimensionless velocity and temperature in the boundary layer, respectively. The boundary conditions corresponding to the above equations are (from the no-slip and constant heat flux conditions on the wall):
f(0) = 0, f′(0) = 0, θ′(0) = −1 (11)
f′(∞) = 1, θ(∞) = 0 (12)
The wall shear stress is defined as follows:
τ_w = μ (∂u/∂z)|_(z=0) = μ C r (C/ν)^1/2 f″(0) (13)
where μ is the dynamic viscosity of the flow. Integrating Equation (13) over the stagnation zone, the averaged wall shear stress can be calculated by the following relation:
τ̄_w = (1/A) ∫_A τ_w dA (14)
where A = π d_j²/4 is the stagnation zone surface area. Substituting Equation (13) into (14) and carrying out the integration, the averaged wall shear stress is calculated as follows:
τ̄_w = (μ C d_j/3) (C/ν)^1/2 f″(0) (15)
The heat transfer coefficient h is defined as follows:
h = q″/(T_w − T_j) (16)
where T_w is the surface (wall) temperature. Using the dimensionless temperature variable from Equation (8) and substituting it into Equation (16), the heat transfer coefficient can be calculated by the following relation:
h = k (C/ν)^1/2/θ(0) (17)
Note that the flow properties in the boundary layer are evaluated at the film temperature, T_f = (T_w + T_j)/2. Thermophysical Properties Nanoparticles and the Two-Phase Flow Water is used as the base liquid coolant in the present study. The particles added to the base coolant are nanocapsules made up of a polystyrene shell filled with n-octadecane paraffin wax as the core (with an average size of 100 nm and a 1:1 mixing ratio of wax to polymer), and aluminum oxide (Al2O3) nanoparticles (with a 30 nm average size). The thermophysical properties of these components are listed in Table 1. The thermophysical properties of water listed in Table 1 are reported at the room temperature of 298.15 K; however, in the current simulations, the temperature-dependent properties are calculated and considered. The correlations used to calculate the temperature-dependent thermophysical properties of water are extracted from the VDI Heat Atlas [37]. As mentioned earlier, the single-phase flow approach, which considers the effective properties of the mixture, is employed to model the NEPCM slurry as well as the nanofluid. The empirical correlations for the effective thermophysical properties of the NEPCM slurry and the nanofluid are summarized in Tables 2 and 3, respectively.
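The correlations of Tables 1-3 are not reproduced in this extracted text. As a purely illustrative sketch of the single-phase (effective-property) treatment, the Python snippet below applies one commonly used set of mixture rules: volume-weighted density and volumetric heat capacity, a Maxwell-type effective conductivity for spherical inclusions, and a second-order dilute-suspension (Batchelor-type) viscosity correction. The specific models, constants, and particle properties used in the paper may differ, so every correlation and value in the snippet should be read as an assumption.

```python
# Illustrative sketch only: generic single-phase mixture rules for a particle-laden
# coolant. The exact correlations and property values of Tables 1-3 are NOT
# reproduced here; the models and numbers below are assumptions.

def effective_properties(eps, rho_f, cp_f, k_f, mu_f, rho_p, cp_p, k_p):
    """Effective density, specific heat, conductivity and viscosity of a dilute
    suspension with particle volume fraction eps (roughly 0 <= eps < 0.3)."""
    # Volume-weighted mixture density
    rho_eff = (1.0 - eps) * rho_f + eps * rho_p
    # Volume-weighted volumetric heat capacity, converted back to a specific heat
    cp_eff = ((1.0 - eps) * rho_f * cp_f + eps * rho_p * cp_p) / rho_eff
    # Maxwell-type effective thermal conductivity for spherical inclusions
    k_eff = k_f * (k_p + 2.0 * k_f + 2.0 * eps * (k_p - k_f)) \
                / (k_p + 2.0 * k_f - eps * (k_p - k_f))
    # Batchelor-type second-order viscosity correction for a dilute suspension
    mu_eff = mu_f * (1.0 + 2.5 * eps + 6.2 * eps ** 2)
    return rho_eff, cp_eff, k_eff, mu_eff


# Example: water near 25 C carrying hypothetical NEPCM capsules.
# The capsule properties are placeholders, not the values of Table 1.
rho_eff, cp_eff, k_eff, mu_eff = effective_properties(
    eps=0.15,
    rho_f=997.0, cp_f=4180.0, k_f=0.607, mu_f=8.9e-4,   # water (approximate)
    rho_p=900.0, cp_p=2000.0, k_p=0.2)                  # capsule (placeholder)

# Over the melting range T1..T2, the latent heat of the PCM core can be folded
# into an additional, temperature-dependent contribution to cp_eff (for example
# distributed with a sine-shaped profile, as described in the next paragraph).
```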
The melting of the PCM occurs over a temperature range. The full melting range (ΔT_m) of the NEPCM particles extends from 21.0 °C to 29.5 °C (T_1 and T_2 in Table 2, respectively) [30]. Therefore, the NEPCM particles are solid at temperatures below T_1 and liquid at temperatures above T_2. Within the range ΔT_m = T_2 − T_1, the phase-change process takes place and the NEPCM particles absorb heat in the form of latent heat. A sine profile closely resembling the DSC curve, as suggested by Alisetti and Roy [38], is used in the present study.

Numerical Solution

Equations (9) and (10) with the boundary conditions (11) and (12) form a set of highly non-linear ordinary differential equations with boundary values. One of the most convenient and efficient methods of solving such boundary value problems is the fourth-order Runge-Kutta method combined with a shooting technique [6]. The far-field conditions at η_∞ are replaced by guessed initial values, converting the boundary value problem into an initial value problem. For this purpose, the shooting technique is applied along with the fourth-order Runge-Kutta method, with initial guesses for f''(0) and θ(0), and the solution is iterated until the far-field conditions f'(∞) = 1 and θ(∞) = 0 are satisfied. The solution algorithm used to obtain the dimensionless velocity and temperature fields, and finally the wall shear stress and the heat transfer coefficient, is shown as a flowchart in Figure 2.

Results and Discussion

The results obtained from the numerical integration of Equations (9) and (10) for the JIC problem under an arbitrary constant surface heat flux are plotted in Figure 3. Consistent with boundary layer theory, the numerical solutions show that the dimensionless boundary layer thickness η_∞ is around 2, which coincides with the results reported in [35]. In the following, the main results of the analytical-numerical simulations used to evaluate the heat transfer field and the cooling performance of an impinging jet with NEPCM slurry as the coolant are presented.
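Before turning to these results, the boundary-value problem described above can be illustrated with a short, self-contained sketch. For robustness the sketch below uses SciPy's collocation solver (solve_bvp) rather than the shooting with fourth-order Runge-Kutta marching adopted in the paper, but it solves the same reconstructed similarity equations; the Prandtl number, fluid properties, operating point, and the relation C = 0.77 v_j/d_j used for post-processing are illustrative assumptions rather than the paper's tabulated data.

```python
# Minimal sketch: solve the similarity equations
#   f''' + 2 f f'' + 1 - f'^2 = 0,   theta'' + 2 Pr f theta' = 0
# with f(0)=f'(0)=0, f'(inf)=1, theta'(0)=-1, theta(inf)=0, then post-process to an
# averaged wall shear stress and a heat transfer coefficient (Eqs. (13)-(17) above).
# SciPy's collocation BVP solver is used here instead of shooting + RK4 marching.
import numpy as np
from scipy.integrate import solve_bvp

PR = 6.1                                  # illustrative Prandtl number (water-like, ~25 C)

def odes(eta, y):
    # y = [f, f', f'', theta, theta']
    f, fp, fpp, th, thp = y
    return np.vstack([fp, fpp, -2.0 * f * fpp - 1.0 + fp**2, thp, -2.0 * PR * f * thp])

def bcs(ya, yb):
    # no-slip and unit dimensionless heat flux at the wall; free-stream matching far away
    return np.array([ya[0], ya[1], ya[4] + 1.0, yb[1] - 1.0, yb[3]])

eta = np.linspace(0.0, 8.0, 200)
guess = np.vstack([eta + np.exp(-eta) - 1.0,   # f
                   1.0 - np.exp(-eta),         # f'
                   np.exp(-eta),               # f''
                   np.exp(-eta),               # theta
                   -np.exp(-eta)])             # theta'
sol = solve_bvp(odes, bcs, eta, guess, tol=1e-6, max_nodes=5000)
fpp0, th0 = sol.y[2, 0], sol.y[3, 0]
print(f"f''(0) = {fpp0:.4f}, theta(0) = {th0:.4f}")   # f''(0) ~ 1.31 (Homann stagnation flow)

# Illustrative post-processing (assumed values, not the paper's tabulated data)
d_j, v_j = 0.75e-3, 8.0                   # m, m/s (operating point quoted in the text)
rho, k, mu = 997.0, 0.607, 0.89e-3        # water-like properties at ~25 C (assumed)
nu = mu / rho
C = 0.77 * v_j / d_j                      # stagnation velocity gradient, Eq. (7)
tau_avg = (mu * C * d_j / 3.0) * np.sqrt(C / nu) * fpp0   # Eq. (15)
h = k * np.sqrt(C / nu) / th0                             # Eq. (17)
print(f"average wall shear stress ~ {tau_avg:.1f} Pa, h ~ {h/1e3:.1f} kW/m^2K")
```

For water-like properties this returns f''(0) ≈ 1.31, in line with the classical Homann stagnation-flow value.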
The nozzle and plate characteristics and the NEPCM properties follow the physical, geometrical, and operating parameters reported by Wu et al. [30]: the nozzle diameter (d_j), nozzle-to-surface distance (H), and constant heat flux (q″) are set to 0.75 mm, 8 mm, and 30 W/cm², respectively. Furthermore, the results are assessed over a wide range of jet inlet velocity (4 m/s ≤ v_j ≤ 16 m/s), jet inlet temperature (16 °C ≤ T_j ≤ 32 °C), NEPCM particle volume concentration (0 ≤ ε ≤ 0.3), and alumina nanoparticle concentration (0 ≤ φ ≤ 0.06). To validate the results presented in this paper, the heat transfer coefficients obtained from the current model are compared with an experiment-based correlation for a water jet reported by Liu et al. [36] (Figure 4). For the NEPCM slurry jet, the Nu number results are compared with the experimental results of [30] for the special case of 28% particle volume fraction (Figure 5). Very good agreement is obtained in both cases.

Zhang and Faghri [40] observed that NEPCM slurry behaves as a Newtonian fluid for particle volume fractions below 0.3. Therefore, in the present study, NEPCM slurries with particle volume fractions below 0.3 are treated as Newtonian fluids. Particles dispersed in water influence the hydrodynamics of the flow, such as the viscosity and the shear stresses, as a result of the relative motion between the particles and the liquid. Figure 6 shows the effect of the NEPCM concentration on the variation of the average wall shear stress of the stagnation flow with respect to the Re number. As expected, the shear stress increases with increasing Re. At the same Re number, the NEPCM slurry exhibits higher wall shear stresses than water, and with increasing NEPCM concentration the shear stresses grow significantly owing to the higher effective viscosity.
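To make this trend concrete, the short sketch below evaluates the averaged wall shear stress of Equation (15) at fixed jet Reynolds numbers for water and for two slurry concentrations, using an assumed Vand-type effective viscosity and the classical Homann value f''(0) ≈ 1.312; the property values are illustrative assumptions, not the paper's data.

```python
# Illustrative check of the trend in Figure 6: at a fixed jet Reynolds number, a
# Vand-type effective viscosity raises the averaged wall shear stress sharply.
# Property values and correlations are assumptions, not the paper's tabulated data.
import numpy as np

RHO_F, MU_F = 997.0, 0.89e-3          # water-like base fluid (assumed)
RHO_P = 900.0                          # assumed NEPCM capsule density
D_J, FPP0 = 0.75e-3, 1.312             # nozzle diameter from the text; Homann f''(0)

def tau_avg(re, eps):
    mu = MU_F * (1.0 - eps - 1.16 * eps**2) ** -2.5     # Vand-type effective viscosity
    rho = (1.0 - eps) * RHO_F + eps * RHO_P             # volume-weighted density
    v_j = re * mu / (rho * D_J)                         # jet velocity giving this Re
    c = 0.77 * v_j / D_J                                # stagnation velocity gradient, Eq. (7)
    return (mu * c * D_J / 3.0) * np.sqrt(c * rho / mu) * FPP0   # Eq. (15)

for eps in (0.0, 0.1, 0.2):
    print(eps, [round(tau_avg(re, eps), 1) for re in (3000, 6000, 12000)])
```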
As JIC is aimed at heat transfer applications, the rest of the results focus on the heat transfer parameters and their effect on the thermal performance of the JIC. Figure 7 shows the effect of the particle volume fraction of the NEPCM slurry on the distribution of the dimensionless temperature (θ profiles) across the boundary layer. As expected from the similarity solution, all profiles follow the same general shape, but the NEPCM concentration has a significant effect on the surface value (at η = 0) and on the thermal boundary layer thickness (η_∞ = δ_T). Since the heat flux is transferred through the wall, the thermal field values at the surface are of primary interest. According to Equations (9) and (10), the dimensionless temperature in the boundary layer depends only on the Pr number; therefore, the effect of the NEPCM concentration enters the temperature field through the Pr number. The value of the dimensionless temperature at the surface, θ(0), and its variation with the Pr number and the NEPCM concentration in the slurry jet are depicted in Figure 8. As seen, θ(0) decreases with increasing NEPCM concentration (and therefore increasing Pr number). According to Equation (16), a decreasing θ(0) should lead to an increase in the heat transfer coefficient. However, in addition to θ(0), the heat transfer coefficient also depends on other characteristics of the flow (C) and thermal (k and ν) fields. The comparison of the jet impingement heat transfer coefficients for the NEPCM slurry with different particle volume fractions at an inlet jet temperature of 25 °C, as a function of jet velocity varying from 4 to 16 m/s (flow rate from 1.8 to 7.3 g/s), is illustrated in Figure 9. For all test cases, a higher flow rate (or inlet jet velocity) yields a higher heat transfer coefficient and therefore a higher cooling rate, and vice versa. The figure also indicates that, for this inlet temperature and regardless of the NEPCM concentration, the slurry jet outperforms the pure water jet because part of the heat flux is absorbed as latent heat, which slows the rise of the wall temperature. The results further show that the slurry with a 15% particle volume fraction has the highest heat transfer coefficients; adding more NEPCM particles to the slurry decreases the cooling performance. It is of interest to find under what conditions the cooling performance is a maximum. For this purpose, the heat flux and jet velocity (mass flow rate) are set to 30 W/cm² and 8 m/s (3.6 g/s), respectively, while the jet inlet temperature and NEPCM concentration are varied between 16 and 32 °C and from 0 to 0.4, respectively.
First, the comparison of the heat transfer coefficient profiles as a function of inlet jet temperature for water and for NEPCM slurries with different particle volume fractions is provided in Figure 10. Except for the water jet, which shows a slight increasing slope, the profiles for the slurry jets have a peak within the melting temperature range of the PCM. Within this range the PCM starts to melt and the heat transfer coefficients therefore increase. For inlet temperatures outside the melting range, water has a higher heat transfer coefficient. This is because, as shown earlier, the viscosity increases sharply with increasing particle volume fraction; this thickens the boundary layer, reduces the heat transfer, and therefore lowers the overall heat transfer coefficient. Furthermore, for the lower inlet temperatures (T_j < 19 °C) there is no phase change, and the heat transfer coefficient of the slurry is lower than that of water, because the slurry has a lower conductivity than water as a result of the lower conductivity of the NEPCM particles. The results also indicate that the maximum enhancement in the heat transfer coefficient for the slurry cases occurs at an inlet jet temperature of 25.2 °C, which is quite close to the peak temperature (26.2 °C) of the DSC melting curve [30]. Furthermore, this peak reaches its maximum value at a particle volume fraction of 15%. To find the optimum particle volume fraction over a wider and continuous range, the variation of the heat transfer coefficient with particle volume fraction is depicted in Figure 11. As can be seen, the curves have a peak at ε = 0.15, and this optimum concentration does not vary with the jet velocity/flow rate. To investigate the effect of nanoparticles on the cooling performance of the JIC and compare the results with those for the water and slurry jets, the heat transfer coefficients for nanofluid jet impingement are also calculated. The variation of the heat transfer coefficient with jet velocity for three different particle volume fractions at an inlet jet temperature of 25 °C is depicted and compared with water in Figure 12. As seen, increasing the nanoparticle concentration slightly increases the heat transfer coefficient, but the effect is less significant than that of adding NEPCM particles in Figure 9. Tang et al. [44] conducted an experimental study on Al2O3 nanofluids to characterize the viscosity and shear stresses. They found that alumina nanoparticles dispersed at volume concentrations below 6% exhibit Newtonian behavior over an operating temperature range from 6 to 75 °C; therefore, nanofluids below 6% are assumed to be Newtonian here. Figure 13 compares the heat transfer coefficients for the water, nanofluid, and NEPCM slurry jets at the particle concentrations giving the maximum heat transfer. At the same jet velocity, the maximum heat transfer coefficients for the NEPCM slurry are significantly higher than those for the nanofluid. Compared with the water jet, adding NEPCM particles and nanoparticles enhances the overall thermal performance of the JIC by 16% and 7%, respectively.

Conclusions

In this paper, the flow and heat transfer of free-surface jet impingement cooling using water, NEPCM slurry, and nanofluid were studied.
An analytical solution using a similarity approach was presented to determine the main characteristics of the flow and heat transfer fields, such as the wall shear stress and the heat transfer coefficient, and the effect of the NEPCM particle concentration on these characteristics was investigated. It was found that the wall shear stress increases significantly with increasing NEPCM concentration. Furthermore, adding NEPCM particles to the water jet can be effective if the inlet temperature lies within the melting temperature range of the utilized PCM. Moreover, there is an optimum particle concentration that maximizes the heat transfer coefficient of the JIC. At its maximum performance, the NEPCM slurry shows a better result than the nanofluid: it enhanced the thermal performance of the system by 16%, compared with 7% for the nanofluid, relative to water. However, the maximum performance of the NEPCM slurry depends strongly on the inlet jet temperature and the volume concentration; it is achieved at an inlet temperature of 25.2 °C and a concentration of 15%. It is suggested that future work focus on NEPCM particles with higher latent heat and, especially, higher thermal conductivity to overcome this weakness of NEPCM slurries.
Diagnostic performance of a faecal immunochemical test for patients with low-risk symptoms of colorectal cancer in primary care: an evaluation in the South West of England Background The faecal immunochemical test (FIT) was introduced to triage patients with low-risk symptoms of possible colorectal cancer in English primary care in 2017, underpinned by little primary care evidence. Methods All healthcare providers in the South West of England (population 4 million) participated in this evaluation. 3890 patients aged ≥50 years presenting in primary care with low-risk symptoms of colorectal cancer had a FIT from 01/06/2018 to 31/12/2018. A threshold of 10 μg Hb/g faeces defined a positive test. Results Six hundred and eighteen (15.9%) patients tested positive; 458 (74.1%) had an urgent referral to specialist lower gastrointestinal (GI) services within three months. Forty-three were diagnosed with colorectal cancer within 12 months. 3272 tested negative; 324 (9.9%) had an urgent referral within three months. Eight were diagnosed with colorectal cancer within 12 months. Positive predictive value was 7.0% (95% CI 5.1–9.3%). Negative predictive value was 99.8% (CI 99.5–99.9%). Sensitivity was 84.3% (CI 71.4–93.0%), specificity 85.0% (CI 83.8–86.1%). The area under the ROC curve was 0.92 (CI 0.86–0.96). A threshold of 37 μg Hb/g faeces would identify patients with an individual 3% risk of cancer. Conclusions FIT performs exceptionally well to triage patients with low-risk symptoms of colorectal cancer in primary care; a higher threshold may be appropriate in the wake of the COVID-19 crisis. BACKGROUND There are around 1.8 million new colorectal cancer (CRC) diagnoses worldwide each year, and almost 900,000 deaths. 1 Population screening is effective in reducing mortality, with a relative risk of CRC mortality varying between 0.67 and 0.88 depending upon the screening modality, frequency of screening and sex. 2 However, even when screening is available, most CRCs present with symptoms. In the UK, less than 10% of CRCs are identified by screening, with the remainder identified after symptoms have developed. 3 In many countries, symptomatic patients present first to primary care, where the general practitioner (GP) assesses the possibility of cancer, and investigates or refers for specialist tests if appropriate. 4 The usual diagnostic test in secondary care is colonoscopy, with CT imaging or capsule endoscopy occasionally used. Requests for urgent CRC investigation have relentlessly increased over the last decade, with a parallel increase in colonoscopies. These doubled in the UK between 2012 and 2017. 5 This rise was driven in part by referral of patients whose symptom profile, while still representing possible cancer, was relatively low-risk. 6 These patients, often with abdominal pain or mild anaemia, had been excluded from UK national guidance in 2005, 7 but transpired to have the worst survival across the different symptoms, often presenting as an emergency. 8,9 In 2015, the National Institute for Health and Care Excellence (NICE) published revised guidance, NG12. 10 The revised NICE recommendations were explicitly based on the risk of cancer posed by the patient's symptoms, and used only primary care evidence to estimate this risk. Patients having a risk of CRC of 3% or more are recommended for an urgent suspected cancer referral, and are usually offered colonoscopy. 
For risks below 3%, patients were to be offered testing for occult blood in their faeces, with those testing positive to be referred urgently. This recommendation was based on a systematic review performed by NICE, finding six studies of faecal occult blood testing mostly in secondary care, totalling 9871 patients. 10 The sensitivity and specificity for colorectal cancer varied considerably across these studies, although the diagnostic performance was considered sufficient in the absence of other tests available in primary care. An economic evaluation supported this recommendation. 10 The faecal immunochemical test (FIT) for haemoglobin measures the amount of haemoglobin in a faeces sample and has largely replaced faecal occult blood testing. NICE guidance issued in 2017 (DG30) recommended FIT should replace faecal occult blood testing in primary care patients with low-risk symptoms of CRC. 11 The systematic review underpinning that recommendation found nine studies. 12 In only one was the FIT performed in primary care, though even in that study all patients had already been selected for urgent referral for possible CRC. 13 Thus all the evidence underpinning the use of FITs in primary care in DG30 was from the high-risk referred population; this brings a substantial risk of spectrum bias. 14 This study evaluated a FIT used by general practitioners to triage patients with low-risk symptoms of possible CRC in the South West of England, and estimated the diagnostic performance of FITs in this population.

METHODS

This joint South West Cancer Alliances transformation project provided a quantitative FIT service to primary care practices across the South West of England (population ~4 million) from June 2018. This area includes 14 secondary care providers and 10 clinical commissioning groups (CCGs), listed in Supplementary Material. The FIT diagnostic service (comprising FIT kits for patients, patient instructions, laboratory processing of the FIT, and timely reporting of results) was available to GPs to triage patients with low-risk symptoms of CRC, as defined by NG12 and DG30. 10,11 Patients meeting the following criteria were eligible for a FIT (these criteria were derived from the 2015 NG12 for faecal occult blood testing, current at the time of project design):
• Aged 50 years and over with unexplained abdominal pain or weight loss
• Aged 50-60 years with change in bowel habit or iron-deficiency anaemia
• Aged 60 years and over with anaemia, even in the absence of iron deficiency
The laboratory service was provided by Severn Pathology in Bristol and the Exeter Clinical Laboratory in Exeter using the HM-JACKarc analyser. This assay has a recommended analytical range of 7-400 μg Hb/g faeces (though some values below 7 μg Hb/g faeces were reported); results over 400 μg Hb/g faeces were recorded as >400 μg Hb/g faeces. A threshold value of ≥10 μg Hb/g faeces defined a 'positive' test, as per DG30. 11 Test kit packs delivered to primary care practices included the test unit, instructions for use, a form to select the indication for the test, and a prepaid envelope for the patient to return the completed test to the laboratory. Test results were returned to practices electronically in Devon, Cornwall and Avon. In Somerset, Wiltshire and Gloucestershire, reports were initially sent by post, and later electronically.
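As a purely illustrative encoding of the eligibility criteria above (not software used in the service; the function and field names are hypothetical), the triage logic can be written as:

```python
# Hypothetical encoding of the FIT eligibility criteria listed above; names are
# illustrative only and this is not software used in the evaluation itself.
def eligible_for_fit(age, abdominal_pain=False, weight_loss=False,
                     change_in_bowel_habit=False, iron_deficiency_anaemia=False,
                     anaemia=False):
    if age >= 50 and (abdominal_pain or weight_loss):
        return True        # unexplained abdominal pain or weight loss
    if 50 <= age <= 60 and (change_in_bowel_habit or iron_deficiency_anaemia):
        return True        # change in bowel habit or iron-deficiency anaemia
    if age >= 60 and anaemia:
        return True        # anaemia, even in the absence of iron deficiency
    return False

print(eligible_for_fit(72, anaemia=True))         # True
print(eligible_for_fit(55, abdominal_pain=True))  # True
```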
Information about the service was publicised through local CCG newsletters and through the local Cancer Research UK Facilitator Team, who provided practice-level training and support. GPs were provided with written, online, and video support covering how to use the FIT service, the indications for the test, and advice on how to deal with a positive test. GPs were advised in the guidance that if the faecal haemoglobin concentration (f-Hb) was ≥10 μg Hb/g faeces they should consider an urgent referral for suspected cancer under the local secondary care provider's arrangements. They were also advised that occult blood in the faeces can be caused by a wide variety of benign conditions as well as CRC, and that further assessment may be appropriate to rule these out before referring.

Data collection

All patients with a FIT analysed from 1 June 2018 to 31 December 2018 were included in this study. Data extracted from the two laboratories included the test date, result, indication, patient year of birth and gender. Separately, each of the 14 secondary care providers in the region extracted data, including stage at diagnosis, on any cancer identified from 1 June 2018 to 31 December 2019 after entry into upper or lower gastrointestinal services. This captured cancers diagnosed by all routes, including screening and incidental findings such as routine referral or emergency admission. This allowed for 12 months of follow-up time for all patients, during which missed CRC diagnoses in FIT-negative patients were likely (but not certain) to be diagnosed through other routes. Had a longer period of follow-up been chosen, some of the CRCs diagnosed in FIT-negative patients after 12 months may not have been causing symptoms at the time of testing. Only cancers identifiable on a gastrointestinal (GI) pathway were captured; non-GI cancers, referred to other cancer diagnostic pathways, were not identified. Test results were matched against referral and diagnosis data by each of the secondary care providers using the NHS number, which was then removed and replaced with a randomly allocated study number. Year of birth was used as a secondary confirmation of correct matching of patient records. This was done to adhere to information governance requirements and to ensure completeness of the full patient pathway. GPs were advised not to offer multiple FITs to individual patients; where more than one test was recorded for one patient, the earliest result was used.

Statistical analysis and power calculation

Summary statistics were used to describe the cohort, and to estimate the performance of FIT in this population, including sensitivity, specificity, positive predictive value (PPV), and negative predictive value. A Chi-squared test was used to compare the proportion of male participants, and a Mann-Whitney test to compare the median age, between those with a result at/above and below the threshold. A receiver-operating characteristic curve was produced for quantitative f-Hb against CRC diagnosis. Logistic regression was used to model the relationship between cancer and f-Hb (treated as a continuous variable), after log-transformation to improve the final model fit. Non-linearity in the relationship between f-Hb and CRC was explored using fractional polynomials, though goodness of fit was not improved by doing so. Consequently, a linear term was retained.
The probability of being diagnosed with CRC in the next year for a given f-Hb value was estimated from the final model, in particular identifying the value equating to an individual cancer risk of 3%, to mirror NICE recommendations for urgent investigation. 10 Stata version 16 was used for all analyses. 15 Diagnostic test summary statistics were estimated with the DIAGT module. 16 A simulation approach was used to estimate the sample size required to achieve 95% confidence intervals of 2.2% to 4.0% around a cancer risk of 3% from the logistic regression. Assuming a linear relationship between f-Hb and CRC risk suggested a sample of 2250 would be sufficient so long as the threshold was within the central core of the distribution of f-Hb levels. It was estimated that 10,000 tests would be used in a year; data were collected over seven months to meet the sample size requirement. In practice it was comfortably exceeded, increasing precision.

Data governance

As this project was evaluating service delivery, and not changing routine clinical practice, ethical approval was not required. Data sharing agreements were drawn up between all parties, and Caldicott guardian approvals were in place to allow data sharing. The requirement for individual NHS numbers for use within this evaluation meets the criteria set out in section 6 of the General Data Protection Regulation: Guidance on Lawful Processing.

RESULTS

Figure 1 shows the distribution of f-Hb in patients above the threshold.

Referrals in patients with a FIT

Of 618 patients with f-Hb ≥10 μg Hb/g faeces, 458 (74.1%) were referred to lower gastrointestinal (GI) services within three months (Fig. 2). Of the remaining 160, 36 were referred up to 12 months after FIT. Cancer outcomes for these patients are shown in Fig. 2. Of 3272 patients with f-Hb <10 μg Hb/g faeces, 324 (9.9%) were referred to lower GI services within three months.

Cancer outcomes

Table 1 shows the cancers identified during the year after FIT. The positive predictive value of FIT in this low-risk symptomatic population is 7.0% (95% CI 5.1-9.3%) and the negative predictive value 99.8% (CI 99.5-99.9%); sensitivity is 84.3% (CI 71.4-93.0%) and specificity 85.0% (CI 83.8-86.1%), with an area under the ROC curve of 0.92 (CI 0.86-0.96) (Fig. 3). The median number of days from FIT to diagnosis of CRC in patients testing above the f-Hb threshold was 34 (IQR 23-56). Staging data were available for 31 of 43 patients: 6 Dukes' A; 5 Dukes' B; 12 Dukes' C; 8 Dukes' D. The median number of days to diagnosis in patients with a result below the f-Hb threshold was 57 days (IQR 37-197). Staging data were available for six of eight patients: 1 Dukes' B; 2 Dukes' C; 3 Dukes' D.

Cancer risk by f-Hb

Figure 4 shows the estimated probability that an individual will be diagnosed with CRC for a given f-Hb level, estimated from the logistic regression model. Using this model, a f-Hb level of 37 μg Hb/g faeces (CI 26-50) in an individual with that result corresponds to a CRC risk of 3%. Five patients with CRC had a f-Hb value in the range 5-9 μg Hb/g faeces.

DISCUSSION

This study reports the use of FIT for detection of CRC in a primary care symptomatic population. The test performed very well using the threshold value of ≥10 μg Hb/g faeces. Test sensitivity and specificity were 84.3% and 85.0%, respectively, both notably high figures for a primary care cancer test. Using this threshold, the positive predictive value of f-Hb ≥10 μg Hb/g faeces was 7.0%, and the negative predictive value 99.8%, in a population with an overall prevalence of CRC of 1.3%. FIT also performed well irrespective of gender or age. A f-Hb level of 37 μg Hb/g faeces corresponded to an individual's CRC risk of 3%.
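The headline metrics can be recomputed directly from the reported two-by-two counts (43 cancers among the 618 results at or above the threshold, 8 among the 3272 below it). The short sketch below is an illustration only, not the Stata/DIAGT analysis used in the study, and it omits confidence intervals; it also derives the likelihood ratios that underlie the Bayesian reading given later in this Discussion.

```python
# Recompute the headline diagnostic metrics from the reported 2x2 counts.
# Counts are taken from the results above; this is an illustration, not the
# study's Stata/DIAGT analysis (confidence intervals are omitted here).
tp, fp = 43, 618 - 43          # f-Hb >= 10 ug Hb/g faeces
fn, tn = 8, 3272 - 8           # f-Hb < 10 ug Hb/g faeces

sens = tp / (tp + fn)          # ~0.843
spec = tn / (tn + fp)          # ~0.850
ppv = tp / (tp + fp)           # ~0.070
npv = tn / (tn + fn)           # ~0.998

prior = (tp + fn) / (tp + fp + fn + tn)     # ~1.3% prevalence in this cohort
lr_pos = sens / (1 - spec)                  # ~5.6
lr_neg = (1 - sens) / spec                  # ~0.18

def post_test(prior_prob, lr):
    odds = prior_prob / (1 - prior_prob) * lr
    return odds / (1 + odds)

print(f"sensitivity {sens:.1%}, specificity {spec:.1%}, PPV {ppv:.1%}, NPV {npv:.1%}")
print(f"post-test risk: positive {post_test(prior, lr_pos):.1%}, "
      f"negative {post_test(prior, lr_neg):.2%}")
```

The resulting post-test risks of roughly 7% after a positive result and 0.2% after a negative result match the figures reported above.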
Strengths and limitations

The strengths of this study are its size, and its setting being where the test will be used, eliminating spectrum bias. The three symptom groupings used by GPs to prompt FIT were estimated to have PPVs in primary care in the range 1-3%, and the overall prevalence of 1.3% fell within that range. These defined symptoms match the current (September 2020) NICE guidance on when to offer faecal testing for colorectal cancer to adults without rectal bleeding. 10 Every one of the 14 secondary care providers in the region was recruited, increasing reliability and generalisability. Cancer metrics in the NHS are very accurately maintained; dedicated cancer managers ensure accurate data recording, and secondary care provider performance on cancer metrics is regularly published in the public domain. Furthermore, all secondary care providers of cancer services within England are required to use nationally defined datasets, eliminating disparity in data definitions. Despite the thorough methods, it is possible that a small number of cancers were missed, although this is unlikely to affect the overall interpretation of the results. Crucially, the methods allowed the identification of CRCs in those not offered further investigation after the FIT result was received, and a long follow-up period of 1 year was achieved. The age group studied, with a median age of 65 years for those tested, is close to the median age for CRC diagnosis of 72 years, suggesting the GPs were using the test in those genuinely considered to have a real, but small, risk of cancer. More women were tested, whereas CRC is slightly more common in men. This may reflect the entry criteria, particularly with two of the three criteria incorporating iron-deficiency anaemia, a condition more common in women, 18 or the fact that women are more likely to seek medical intervention. 19 Symptom data could not be verified, but the overall prevalence figure suggests testing was rarely extended into higher-risk groups. Completing the test was patient driven; only tests which were completed and returned to the lab were reported, and it is not known how many tests were handed out by GPs and not returned to the labs. Both participating laboratories used the same FIT system, achieving consistency across the cohort, but meaning the results are not applicable to other systems. 20

Comparison with previous literature

Three recent studies can be compared, as they examined FIT in the symptomatic primary care population, rather than the screening or referred populations; two of these also used a threshold of 10 μg Hb/g faeces. Juul et al. studied 3462 Danish primary care patients aged ≥30 years, with symptoms not meriting urgent colonoscopy, but not defined further. 21 In that study, FIT was also recommended in patients diagnosed with irritable bowel syndrome, lest this were a misdiagnosis. 15.6% of patients tested over the threshold, and 9.4% of these (CI 7.0-11.9%) had a CRC diagnosed in the next 3 months. There were fewer than three cancers identified in those below the threshold (the inexact number reflecting Danish data protection rules). Nicholson et al. followed up 9896 primary care patients in Oxfordshire, England, for 6 months after FITs were ordered in primary care. The entry criteria did not match NICE guidance DG30 or NG12, and included rectal bleeding. The sensitivity for CRC was 90.5% (CI 84.9-96.1%), and the positive predictive value of a positive test was 10.1% (CI 8.2-12.0%). 22
A third study, by Chapman et al., reported PPVs stratified by f-Hb level and anaemia status, including 2.9% in the 4.0-9.9 μg Hb/g faeces without anaemia group and 0.2% in the <4.0 μg Hb/g faeces group. The PPV of 7% for results over the 10 μg Hb/g faeces threshold in the present study is the lowest of the three that used that threshold, which may reflect the stricter criteria for use, in particular excluding patients with rectal bleeding, but matching current national guidance. 10 The PPV in patients with f-Hb from 4.0-9.9 μg Hb/g faeces in our cohort was 1.7%, comparable to Chapman et al.'s results. 23 As a comparison, the sensitivity of 84.3% reported here is higher than in the screening population of 67.0% (CI 59.0-74.0%, with thresholds of >50 μg/g 24), and lower than that in referred populations of 93.3% (CI 80.7-98.3%, thresholds of >10 μg/g). 25

Clinical and research use of the results

The values reported in this study are excellent for a cancer triage test in primary care. The performance of diagnostic tests is generally worse as the prevalence of the target condition falls. 14 In primary care, gastrointestinal complaints are common, and the symptoms of CRC overlap both with less common cancers, such as pancreatic or oesophago-gastric, and with common benign conditions. With most symptoms (apart from rectal bleeding) the likelihood of CRC is low, often in the range 1-3%. 26 A large UK primary care study showed that patients would opt for cancer investigation even for risks as low as 1%. 27 FIT has been introduced to allow primary care investigation of such patients. It works in classical Bayesian fashion: from a prior risk of CRC of 1.3% in the symptomatic population, a result over the 10 μg Hb/g faeces threshold increased the risk to 7.0%, and a result below the threshold reduced it to 0.2%, which is approximately the whole population background risk, including those without symptoms. 26 Furthermore, in five of the eight cancers with a result under the threshold, the patient's GP still requested urgent investigation for possible CRC, probably because continuing symptoms allowed the GP to 'overrule' the negative test. Conversely, nearly a quarter of patients who tested over the threshold were not offered investigation within three months, although ultimately all who had CRC were investigated within a year. CRC incidence was similar in those who were referred within three months and those who were referred later; delays in the referral process should be avoided. Other cancers were diagnosed in participants in this study. Sixteen oesophago-gastric or pancreatic cancers were found, only four having a FIT result over the threshold. This suggests that GPs should consider other intra-abdominal cancers in patients with f-Hb < 10 μg Hb/g faeces and continuing symptoms, even though FIT is intended for the detection of colorectal cancer, since haemoglobin from more proximal sources is degraded in the small intestine and is no longer detected by the immunochemical assay. While the PPV at or above the current threshold of 10 μg Hb/g faeces is 7%, the risk of having CRC for an individual with f-Hb of exactly this value is 1% or lower (Fig. 4). Given this risk is lower than the 3% chosen to underpin the NICE NG12 recommendations for urgent cancer investigation, 10 there may be scope to raise the threshold at which urgent definitive investigations are undertaken. An f-Hb of 37 μg Hb/g faeces (CI 26-50) would identify those with a personal 3% risk of cancer, though the large uncertainty in this estimate may warrant the use of a lower value until more data are available.
Such a change may be appropriate while endoscopy resources are severely curtailed by COVID-19 precautions, with 'safety netting' by GP review for those with f-Hb levels between 10 and 36 μg Hb/g faeces. 28 In the long term, however, the UK's aspiration is for improvements in cancer diagnosis to increase the proportion of cancer patients diagnosed at stage I or II to 75% by 2028 (from a pre-COVID figure of ~53%). 29 If CRC improvements are to contribute to this target, it may be that the threshold should be retained at 10 μg Hb/g faeces, or even lowered further, though not below the level where the test is considered reliable, currently 7 μg Hb/g faeces. Several research needs arise from this study. The first is a health-economic analysis, examining the choice of f-Hb threshold from that perspective. Second, it may be possible to combine data on symptoms, other laboratory tests, and demographics with f-Hb to increase the predictive power of FIT. A third strand of research, not directly related to this study but overlapping with it, considers whether FIT can be used to triage the high-risk population. Such studies are underway; in Scotland, patients reporting rectal bleeding (considered a 'red flag' symptom) in primary care with a f-Hb <10 μg Hb/g faeces were unlikely to be harbouring CRC or other serious bowel disease, 30 and similar results were observed in Sweden. 31 Those will complement the study reported here, and establish the final place for FIT in colorectal cancer triage.

CONCLUSION

FIT in the low-risk primary care population performs well. False negatives are few in number, and many of those with a false-negative test appear to receive timely investigation despite f-Hb below the 10 μg Hb/g faeces threshold. 93% of positive results are false positives, meaning 13 patients with f-Hb over 10 μg Hb/g faeces have to undergo colonoscopy to identify one CRC. This is a major diagnostic advance; low-risk patients were previously either not investigated, and had more emergency admissions and worse survival, 8,9 or were referred for colonoscopy. The background rate of cancer of 1.3% in this population meant 77 patients had to be offered colonoscopy to identify one cancer, potentially swamping endoscopy services and putting patients at a small risk of complications. Clinically, therefore, FIT works, although health-economic aspects are as yet uncertain.