added (string) | created (string) | id (string) | metadata (dict) | source (string) | text (string) | version (string)
---|---|---|---|---|---|---
2020-03-12T10:58:24.480Z
|
2020-03-05T00:00:00.000
|
213191910
|
{
"extfieldsofstudy": [
"Medicine",
"Environmental Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1016/j.dib.2020.105376",
"pdf_hash": "3ddb875381f218f296291218a10fb3ff8d3abf4e",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42323",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science"
],
"sha1": "64445b2c3be5b81a2bdbfcd96dd7028665c5620c",
"year": 2020
}
|
pes2o/s2orc
|
Quantifying cover crop effects on soil health and productivity
The dataset presented here supports the research paper entitled “A calculator to quantify cover crop effects on soil health and productivity”. Soil health (sometimes used synonymously with soil quality) is a concept that describes soil as a living system that sustains plants, animals, and humans. Soil physical, chemical, and biological properties, along with their interactions, are required to quantify soil health. The use of cover crops in agricultural rotations may enhance soil health, yet there has been little progress in understanding how external factors such as climate, soil type, and agronomic practices affect soil and cash crop responses. In response, this dataset compiles measurements from 281 studies and provides an analysis of field-measured changes in 38 soil health indicators due to cover crop usage. Environmental and background indicators were also compiled to assess how climatic and management practices affect soil and cash crop responses to cover crops, with specific categories including climate type (tropical, arid, temperate, and continental), soil texture (coarse, medium, and fine), cover crop type (legume, grass, multi-species mixture, and other), and cash crop type (corn, soybean, wheat, vegetable, corn-soybean rotation, corn-soybean-wheat rotation, and other). An unbalanced analysis of variance was used to determine the hierarchy of most to least important factors that affected the responsiveness of each soil health indicator. Based on the hierarchy structure, a soil health calculator was then developed to quantify the response of 13 parameters (erosion, runoff, weed suppression, soil aggregate stability, leaching, infiltration, microbial biomass carbon, soil bulk density, soil organic carbon, soil nitrogen, microbial biomass nitrogen, cash crop yield, and saturated hydraulic conductivity) to cover crops. The presented data in the calculator report the mean change in parameter values based on all combinations of climate, soil texture, cover crop type, and cash crop type.
© 2020 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY license.
(http://creativecommons.org/licenses/by/4.0/)

Specification Table

Subject: Soil science, agronomy
Specific subject area: Soil degradation, agricultural sustainable development, soil health, crop yield, soil erosion, nutrient leaching
Type of data: Table, Figure, Supplemental table, R code
How data were acquired: Systematic literature search, data extraction, data filtration with quality control, data analysis
Data format: Raw, Analyzed and processed, Filtered
Parameters for data collection: Using the search terms "soil health" or "soil quality" and "conservation management" or "cover crop" in ISI Web of Science, Google Scholar, and the China National Knowledge Infrastructure (CNKI)
Value of the Data
• The dataset quantifies cover crop effects on soil physical, chemical, and biological properties.
• This dataset supports future systematic reviews and meta-analyses of soil health.
• This dataset supports a global soil health calculator.
• Farmers and extension agents can benefit from this dataset when accessing the web-based soil health calculator presented in the companion article.
Data description
The dataset here summarizes the data processing and analysis to support the creation of a web-based soil health calculator [1]. The data were compiled from a global soil health database (SoilHealthDB) [2, 3], and represent 281 published articles from 38 countries. Supplemental Document 1 includes the full list of references and describes the meta-information of the 281 published articles used in this dataset.
Supplemental Document 2 describes the UANOVA analysis outputs of all 38 indicators. Supplemental Document 3 describes the hierarchy structure that identifies the most to least important factor (i.e., hierarchical layer) for 13 indicators (for details regarding these indicators please see Table 1 in the related research article [1]).
Supplemental Document 4 summarizes the data used to develop a web-based soil health calculator. Fig. 1 describes the number of records from 38 countries and the number of records published each year. Fig. 2 shows the histogram and theoretical quantiles vs. sample quantiles (Q-Q) of the log-transformed response ratio (RR) for yield, SOC, Nitrogen, and Aggregation (the left two columns show raw data, while the right two columns show resampled data using bootstrapping). Fig. 3 describes the bootstrapping outputs of RR (in percent changes) for 38 soil health parameters (whiskers indicate the 95% confidence intervals). Note that the bootstrapping outputs were comparable with yet distinct from the results presented in the related research article (Fig. 3; [1]), which instead showed data from one-sample t-tests. Fig. 4 describes the mean and 95% confidence interval of RR (log-transformed) for yield, SOC, Nitrogen, and Aggregation, based on climate type, soil texture, cover crop type, and cash crop type. Fig. 5 shows the data processing procedure, hierarchical layer structure, and soil health calculator output, which presents the average % change for 13 indicators (for details of these 13 indicators please see Table 1 in the related research article [1]).
Experimental design, materials, and methods
We quantified the response ratio for each observation of a soil health indicator as RR = ln(X_cc / X_nc), where X_cc indicates the parameter value in the cover crop treatment and X_nc represents the parameter value in the no cover crop control. All RR values for a given indicator were assembled. To ensure normality, RR distributions were resampled (with replacement) 1000 times via bootstrapping (Fig. 2). Mean values and 95% confidence intervals were then estimated from the resampled distributions for each parameter (Fig. 3). We next used an unbalanced analysis of variance (UANOVA) to test for significant differences in RR values in 38 soil health indicators, examining the data as grouped by Köppen climate type (tropical, arid, temperate, or continental [4]), soil texture (coarse, medium, or fine [5, 6]), cover crop type (legume, grass, multi-species mixture, or other), and cash crop type (corn, soybean, wheat, vegetable, corn-soybean rotation, corn-soybean-wheat rotation, or other). Examples of the UANOVA analysis are presented in Fig. 4, with the full analysis provided in Supplemental Document 2. We next developed a hierarchy structure that identifies the most to least important factor (i.e., hierarchical layer) for each parameter. Climate was set as the highest level, while the remaining three factors were ordered from lowest to highest p-values in the unbalanced ANOVA. This analysis was applied to calculate the mean RR for all combinations of climate type, soil texture, cash crop type, and cover crop type, specifically focusing on 13 key soil health indicators: cash crop yield, bulk density, soil organic carbon, soil nitrogen, soil aggregation, soil infiltration, soil saturated conductivity, soil erosion, surface runoff, leaching, weed pressures, soil microbial carbon, and soil microbial nitrogen (Supplemental Document 3). As the final step, these data were used to develop a web-based soil health calculator (Fig. 5), with the underlying data compiled in Supplemental Document 4.
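To make the response-ratio and resampling steps concrete, a minimal Python sketch is given below. The example measurements, the NumPy-based workflow, and the percentile form of the confidence interval are illustrative assumptions; the published analysis was distributed as R code with the dataset and may differ in detail.

```python
import numpy as np

def log_response_ratio(x_cc, x_nc):
    """RR = ln(X_cc / X_nc) for paired cover-crop vs. no-cover-crop values."""
    return np.log(np.asarray(x_cc, dtype=float) / np.asarray(x_nc, dtype=float))

def bootstrap_mean_ci(rr, n_boot=1000, alpha=0.05, seed=0):
    """Resample RR with replacement and return the bootstrap mean and 95% CI."""
    rng = np.random.default_rng(seed)
    boot_means = np.array([rng.choice(rr, size=rr.size, replace=True).mean()
                           for _ in range(n_boot)])
    lo, hi = np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])
    return boot_means.mean(), (lo, hi)

# Hypothetical paired observations for one indicator (e.g., soil organic carbon)
x_cc = [12.1, 9.8, 15.3, 11.0]   # cover crop treatment
x_nc = [10.5, 9.9, 13.2, 10.1]   # no cover crop control

rr = log_response_ratio(x_cc, x_nc)
mean_rr, (ci_lo, ci_hi) = bootstrap_mean_ci(rr)
print(f"mean RR = {mean_rr:.3f}, 95% CI = ({ci_lo:.3f}, {ci_hi:.3f})")
# A percent change can be reported as 100 * (exp(RR) - 1)
print(f"mean change = {100 * (np.exp(mean_rr) - 1):.1f}%")
```

Grouping the RR values by climate, soil texture, cover crop type, and cash crop type and testing those groups with the unbalanced ANOVA would then follow the hierarchy-building logic described above.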
|
v3-fos-license
|
2021-03-16T05:42:05.578Z
|
2021-02-01T00:00:00.000
|
232228533
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.cureus.com/articles/43480-case-of-coronavirus-disease-2019-myocarditis-managed-with-biventricular-impella-support.pdf",
"pdf_hash": "b4308a665003fdcfe98c0fed6b9f68eee398834d",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42327",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "b4308a665003fdcfe98c0fed6b9f68eee398834d",
"year": 2021
}
|
pes2o/s2orc
|
Case of Coronavirus Disease 2019 Myocarditis Managed With Biventricular Impella Support
Severe acute respiratory syndrome coronavirus 2, responsible for coronavirus disease 2019 (COVID-19), has caused a pandemic that has taken the world by storm. We present the only contemporary reported case of COVID-19 myocarditis leading to recovery with utilization of biventricular Impella (Abiomed, Danvers, MA, USA) for temporary mechanical circulatory support. A 35-year-old female with systemic sclerosis presented with five days of generalized malaise associated with fevers and cough. She tested positive for COVID-19 via nasal polymerase chain reaction. Cardiac enzymes were found elevated on admission. Invasive hemodynamic assessment was significant for elevated right- and left-sided filling pressures, along with a calculated cardiac index of 1.3 L/min/m2. The decision was made to place right- and left-sided ventricular support with percutaneous Impella for mechanical circulatory support. She was started on intravenous immunoglobulin for suspected COVID-19 myocarditis along with remdesivir and Solu-Medrol. After two weeks of continuous temporary mechanical circulatory support, the patient’s hemodynamics improved and she was discharged. Repeat echocardiogram demonstrated normalization of left ventricular function.
Introduction
Coronavirus disease 2019 (COVID-19) has changed the healthcare world in a multitude of ways and has forced us to think outside the box in managing patients that are severely affected. Specifically, its various cardiac manifestations, which include myocardial infarctions, myocarditis, and others, have led to profound long-term consequences. Similar to other viruses, COVID-19 has the potential to cause significant myocarditis leading to impaired biventricular (Bi-V) function and chronic heart failure. At its worst, it can result in cardiogenic shock requiring vasopressors and/or inotropes. Longer-term management is still controversial, with temporary mechanical circulatory support growing more popular in these patients. While the exact prevalence of COVID-19 myocarditis is still unclear, the complications that stem from it have raised awareness within the medical community. We present the only contemporary reported case of COVID-19 myocarditis leading to recovery with utilization of Bi-V Impella (Abiomed, Danvers, MA, USA) support for temporary mechanical circulatory support. No cases have been reported regarding utilization of Bi-V Impella as therapy for management of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) cardiogenic shock.
Case Presentation
We present the case of a 35-year-old woman with a history of systemic sclerosis who was found to have five days of generalized malaise associated with fevers and cough. On arrival, she was found tachycardic at 112 beats per minute and febrile at 101.8°F. She tested positive for COVID-19 via nasal polymerase chain reaction. Cardiac enzymes were found elevated on admission with a troponin T of 0.28. On day two of hospitalization, the patient had a spontaneous pulseless electrical activity arrest secondary to hypoxemia from COVID-19 pneumonitis. Transthoracic echocardiogram (TTE) revealed an ejection fraction (EF) of less than 10% and severe right ventricular impairment with no pericardial effusion or significant valvular abnormalities seen (Video 1). Previous TTE showed normal LV function. Labs showed an elevated lactate of 10 and NT pro-BNP of 7139 pg/mL. Invasive hemodynamics demonstrated a right atrial pressure of 21 mmHg, pulmonary arterial pressure of 32/23 (mean 26) mmHg, and pulmonary capillary wedge pressure of 18 mmHg. The pulmonary artery pulsatility index was calculated to be 0.7. Cardiac output was 2.1 L/min and cardiac index 1.2 L/min/m2 using Fick's equation. Given the findings consistent with cardiogenic shock, extracorporeal membrane oxygenation (ECMO) was briefly discussed by the surgical team, but due to the poor prognosis of the patient and unclear long-term options such as transplant, it was not undertaken. After a heart team approach, the decision was made to place right- and left-sided ventricular Impellas for mechanical circulatory support (Figure 1).
Figure 1 legend: Bi-V, biventricular.
She was started on intravenous immunoglobulin for COVID-19 myocarditis along with remdesivir and Solu-Medrol. After two weeks of continuous temporary mechanical circulatory support (TMCS), the patient's hemodynamics improved and she was weaned from TMCS. At this time, cardiac magnetic resonance imaging (MRI) was performed, which confirmed the presence of myocarditis. Repeat echocardiogram demonstrated Bi-V recovery and remodeling with an LVEF of 60% and no significant valvular disease or pericardial effusion (Video 2).
VIDEO 2: TTE, apical four-chamber view showing recovery of EF post Bi-V Impella.
She was discharged home on day 23 with no neurological deficits.
Discussion
Myocarditis has been seen in up to 7% of COVID-related deaths [1]. The mechanism behind COVID-induced myocarditis is thought to be a combination of direct cell injury through the release of cytokines and the ability of the virus's spike protein to bind ACE2 on cardiomyocytes and induce injury [1]. Interestingly, COVID-19 cardiac histopathologies did not demonstrate a high frequency of lymphocyte-predominant inflammatory infiltrate with myocyte injury, which is usually seen in viral myocarditis [2]. The most common cardiac finding was a non-myocarditis inflammatory infiltrate, which was reported in about 12.6% of cases [2]. With no distinct histopathology seen to diagnose these patients, about 60% of COVID-19 myocarditis cases were determined through cardiac MRI [2]. As seen in our case, MRI played a role in determining that the patient was suffering from COVID-19 myocarditis.
In severe forms of myocarditis, patients develop signs and symptoms of acute heart failure leading to cardiogenic shock. Initial management includes inotropes, vasopressors, and mechanical ventilation, if needed. However, there is still no consensus on the long-term management of these patients. Mechanical circulatory support options, including ECMO, ventricular assist devices, and intra-aortic balloon pumps, are being used in those unresponsive to conventional therapy [3]. ECMO is not only used in COVID patients with respiratory failure, but is becoming increasingly popular in those with cardiovascular compromise as well. However, there are still no absolute standards on when to initiate ECMO, and data on its effect on overall mortality are still limited.
The use of Bi-V continuous microaxial flow devices during acute COVID-19 myocarditis offers a viable alternative with a minimally invasive approach to allow ventricular rest and optimal offloading without the increased risk of surgically placed TMCS [4]. Despite its recent emergency use status granted by the FDA, wide adoption still remains sparse. Specifically, no cases have been reported regarding utilization of Bi-V Impella as therapy for management of SARS-CoV-2.
Conclusions
Our case demonstrates a unique approach to management of COVID-19 myocarditis. It is the only reported case in the literature utilizing Bi-V Impella devices for circulatory support without the concurrent use of ECMO. Due to the success in this patient, this promising approach warrants continued investigation in the management of COVID myocarditis and cardiogenic shock.
Additional Information Disclosures
Human subjects: Consent was obtained by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
|
v3-fos-license
|
2018-04-03T04:24:22.825Z
|
2016-06-01T00:00:00.000
|
38934952
|
{
"extfieldsofstudy": [
"Chemistry",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://figshare.com/articles/journal_contribution/Antioxidant_potential_of_indigenous_cyanobacterial_strains_in_relation_with_their_phenolic_and_flavonoid_contents/1476202/files/2165908.pdf",
"pdf_hash": "8b4839641e52bd4c8b44ebc7bfb12ac51d535f7a",
"pdf_src": "TaylorAndFrancis",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42333",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"sha1": "f341599a41b5a09ab1ad8680450ec13d3de57542",
"year": 2016
}
|
pes2o/s2orc
|
Antioxidant potential of indigenous cyanobacterial strains in relation with their phenolic and flavonoid contents
Abstract Antioxidant activities of eight indigenous cyanobacterial strains belonging to the genera Oscillatoria, Chroococcidiopsis, Leptolyngbya, Calothrix, Nostoc and Phormidium were studied in relation with their phenolic and flavonoid contents, ranging 3.9–12.6 mg GAE g−1 and 1.7–3.44 mg RE g−1. The highest activities were shown by Leptolyngbya sp. SI-SM (EC50 = 63.45 and 67.49 μg mL−1) and Calothrix sp. SI-SV (EC50 = 65.79 and 69.38 μg mL−1) calculated with ABTS and DPPH assays. Significant negative correlations were seen between total phenolic and flavonoid contents and the antioxidant activities in terms of EC50 values. Furthermore, HPLC detected 15 phenolic compounds with total concentrations ranging from 277.3 to 829.7 μg g−1. The prevalent compounds in most of the strains were rutin, tannic acid, orcinol, phloroglucinol and protocatechuic acid. Cyanobacterial strains showed high potential as a good source of phenolic compounds with potent antioxidative potential which could be beneficial for food, cosmetic and pharmaceutical industries.
compounds that fight against Reactive Oxygen Species and oxidative stress, which in turn could also be used in treating chronic diseases related to oxidative stress such as, inflammation, diabetes, cardiovascular diseases, neurodegenerative diseases and premature aging (Saranya et al. 2014). These antioxidants include the phenolic or polyphenolic compounds that have numerous beneficial effects in humans and other mammals (Plaza et al. 2008). They are structurally diverse with hydroxyl group covalently attached to an aromatic hydrocarbon group. From various algal species, phenolic acids and their ester, derivatives of Phloroglucinol, halogenated phenols and derivatives of sulphated phenols have been identified (La Barre et al. 2010). Dietary antioxidants comprise vitamins, carotenoids and phenols among which phenolics have the highest in vitro antioxidant activity affirming their importance in food and pharmaceutical industries (Oueslatia et al. 2012). Natural tannins are also one of the phenolic compounds with very high antioxidant abilities. They include hydrolysable tannins, flavonoids and non-hydrolyzable tannins or flavolans which are becoming interesting these days because not only do they have potent antioxidant abilities, they can also bind to pigments, proteins, other macromolecules and metal ions (Okuda & Ito 2011).
Total phenolic and flavonoid content
The total phenolic and flavonoid contents are given in Table S1. Among the eight strains, Leptolyngbya sp. SI-SM showed the highest total phenolic and flavonoid contents (12.6 and 3.44 mg g−1), closely followed by Calothrix sp. SI-SV (12.4 and 3.23 mg g−1). Kumar et al. (2015) reported lower phenolic contents from these cyanobacteria. The unicellular strain Chroococcidiopsis sp. SI-ST showed total phenolic and flavonoid contents of 4.32 and 2.44 mg g−1. To the best of our knowledge, no study has been done on the phenolic content of Chroococcidiopsis sp., and this is the first one on this cyanobacterium. For total flavonoid content, a somewhat similar trend was seen, as shown in Table S1. Moreover, despite the fact that the total phenolic content of Oscillatoria sp. SI-SF and Chroococcidiopsis sp. SI-ST was low, they showed high flavonoid contents of 2.82 and 2.44 mg RE g−1. Singh et al. (2014) also reported a similar amount of total flavonoid content from Oscillatoria acuta.
Antioxidation potential
The antioxidant activities of the eight strains of cyanobacteria and the standards, obtained from the ABTS and DPPH assays, are given in Table S1. In the ABTS assay, the highest radical scavenging activity was shown by Leptolyngbya sp. SI-SM, closely followed by Calothrix sp. SI-SV and Nostoc sp. SI-SN, with low EC50 values of 63.45, 65.79 and 75.99 μg mL−1, respectively (Figure S1(A)). The standard Trolox gave the highest antioxidant activity, with a low EC50 value of 25.48 μg mL−1 (Figure S2(A)). These results are comparable with the study done by Shanab et al. (2012); however, they extracted the biomass in water instead of 70% methanol and also reported higher antioxidation of 75.6% at 100 μg mL−1 from Oscillatoria sp. as opposed to the current study. The DPPH assay gave slightly lower antioxidant activities than the ABTS assay, with a similar trend as shown in Figure S1(B). Ascorbic acid as a standard showed much higher antioxidation, with a low EC50 value of 28.16 μg mL−1 (Figure S2(B)). These results are in agreement with the study done by El-Aty et al. (2014). Similarly, when comparing the antioxidation of Leptolyngbya sp. SI-SM at 50 μg mL−1 by the DPPH assay (Figure S1(B)), Suhail et al. (2011) showed lower antioxidant activity from Plectonema boryanum, a taxonomic synonym of Leptolyngbya boryana.
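EC50 values such as those quoted above are usually obtained by fitting a dose-response curve to the percent radical scavenging measured over a range of extract concentrations. The Python sketch below shows one common way to do this with a four-parameter logistic fit; the concentrations, inhibition values, and the choice of model are illustrative assumptions, not the authors' exact fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, bottom, top, ec50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** hill)

# Hypothetical ABTS scavenging data: concentration (ug/mL) vs. % inhibition
conc = np.array([12.5, 25.0, 50.0, 100.0, 200.0])
inhibition = np.array([15.0, 28.0, 44.0, 63.0, 78.0])

# Initial guesses: 0-100% plateaus, EC50 near the middle concentration, Hill slope of 1
popt, _ = curve_fit(four_param_logistic, conc, inhibition, p0=[0.0, 100.0, 50.0, 1.0])
bottom, top, ec50, hill = popt
print(f"Estimated EC50 = {ec50:.1f} ug/mL (Hill slope = {hill:.2f})")
```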
Correlation between phenolic, flavonoid content and antioxidation potential
The correlation coefficient (R2) was calculated between the total phenolic and flavonoid contents and the antioxidant activities obtained from the ABTS and DPPH assays, as shown in Figure S3(A) and (B). The results showed significantly strong negative correlations between the total phenolic and flavonoid contents and the EC50 values obtained from both the ABTS (R2 = 0.868, 0.620) and DPPH (R2 = 0.813, 0.616) assays. It was also observed that, although Oscillatoria sp. SI-SF contained comparatively less total phenolic content, it showed high antioxidant activity due to its high flavonoid content. The converse was observed in Nostoc sp. SI-SN. This showed that the antioxidant activity depended on the phenolic and flavonoid contents equally (Figueroa et al. 2014).
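A minimal sketch of this kind of correlation analysis is shown below: it regresses hypothetical ABTS EC50 values on total phenolic content and reports R2. The numbers are invented for illustration; a negative slope with a high R2 mirrors the inverse relationship reported above (higher phenolic content, lower EC50, stronger antioxidant activity).

```python
import numpy as np
from scipy import stats

# Hypothetical data: total phenolic content (mg GAE/g) vs. ABTS EC50 (ug/mL)
phenolics = np.array([3.9, 4.3, 6.1, 7.8, 9.2, 10.5, 12.4, 12.6])
ec50 = np.array([132.0, 128.0, 110.0, 95.0, 88.0, 79.0, 66.0, 63.0])

result = stats.linregress(phenolics, ec50)
r_squared = result.rvalue ** 2
print(f"slope = {result.slope:.2f} (negative: more phenolics, lower EC50)")
print(f"R^2 = {r_squared:.3f}, p = {result.pvalue:.4f}")
```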
HPLC UV/vis analysis of phenolic and flavonoid content
Fifteen phenolic compounds were detected in the 70% methanolic extracts, as shown in Table S2. The highest concentration of phenolic compounds was found in Leptolyngbya sp. SI-SM, with 829.7 μg g−1. Rutin, tannic acid, orcinol, phloroglucinol and protocatechuic acid were the dominant phenolic compounds in most of the strains, with concentrations ranging 96.3-176.2, 13.4-75.0, 71.2-167.4, 12.0-28.3 and 35.6-94.0 μg g−1, respectively. Higher values of protocatechuic acid were found by Shalaby and Shanab (2013) in water and methanolic extracts of Spirulina platensis. In Leptolyngbya sp. SI-SM, gallic acid was seen in a very high amount of 205.4 μg g−1, followed by rutin and ferulic acid with 176.2 and 98.1 μg g−1, respectively. These compounds are strong free radical scavengers and, therefore, could be involved in the high antioxidant activity of Leptolyngbya sp. SI-SM. Singh et al. (2014) showed lower values in P. boryanum under high salt stress. As far as the other cyanobacterial strains were concerned, resorcinol, vanillic acid and syringic acid were detected in high amounts (81.3, 21.7 and 13.4 μg g−1) in Phormidium sp. SI-SC. Caffeic acid and benzoic acid (28.2 and 17.5 μg g−1) were detected in the highest amounts in Oscillatoria sp. SI-SF. Salicylic acid and acetylsalicylic acid (23.5 and 13.2 μg g−1) were present in Oscillatoria sp. SI-SA in higher amounts. Tannic acid was present in the highest amount (75.0 μg g−1) in Nostoc sp. SI-SN. Due to the unavailability of flavonoid standards other than rutin, not many flavonoid varieties were detected in the extracts.
Conclusion
It is concluded from the study that, of the eight cyanobacterial strains, Leptolyngbya sp. SI-SM and Calothrix sp. SI-SV showed promising results as good sources of phenolic compounds with potent antioxidant activities, which are directly related to the abundance and types of phenolic compounds extracted from the biomass. The antioxidant benefits of phenolic compounds from terrestrial plants have long been established and exploited, but the phenolic compounds from cyanobacteria are still an emerging research field and deserve more scientific attention and interdisciplinary research. Since very scarce data have been published on the phenolic compounds of cyanobacteria in particular, this study contributes greatly to this purpose and would be a boon for the food and pharmaceutical sciences to explore unusual and potent phenolic compounds from these blue-green microbes.
Disclosure statement
No potential conflict of interest was reported by the authors.
|
v3-fos-license
|
2024-04-12T15:15:20.171Z
|
2024-04-09T00:00:00.000
|
269074716
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/journals/public-health/articles/10.3389/fpubh.2024.1360986/pdf",
"pdf_hash": "a33e48603e09f34c187e1447f3994bcc16ede9fd",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42335",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"sha1": "a22587d227218f6b7f687f1449ca57fc0c95947c",
"year": 2024
}
|
pes2o/s2orc
|
The predictors of voluntary participation in pulmonary tuberculosis screening program: a study in a suburban community of southern Thailand
Background The health belief model (HBM), baseline health condition, and sociocultural factors impact the decision to participate in a tuberculosis screening program. Methods This cross-sectional and descriptive study was carried out among the “Kao Taew” community dwellers aged 18 years and above, who voluntarily underwent the provided pulmonary tuberculosis (PTB) screening by chest radiographs (CXRs). The level of individual HBM domain perception, attitudes toward PTB prevention, and regularity of PTB prevention practices by the participants were evaluated. The factors significantly associated or correlated with the regularity of PTB prevention practices in the univariate analysis, such as demographic characteristics, individual HBM domain perception, and attitudes toward PTB prevention, were further analyzed by multiple linear regression (p < 0.05) to determine the independent significant predictors of PTB prevention practices. Results Among the 311 participants, 65% were women, 57.9% were aged ≥ 65 years, and 67.2% had an underlying disease. The study participants had a high level of perception of the HBM domains but a low level of perception of the barrier. In addition, a high level of attitudes toward PTB prevention and a high regularity of PTB prevention practices were found. A multiple linear regression analysis revealed that the perceived benefits of PTB screening [Beta = 0.20 (0.04, 0.36), p = 0.016] and acquiring underlying diseases [Beta = 1.06 (0.38, 1.73), p = 0.002] were significant predictors of PTB prevention practices, while belief in Islam was a reverse predictor [Beta = −0.84 (−1.47, −0.21), p = 0.010]. Conclusions The level of perception of the individual domains of the HBM, health status, and religious belief significantly predicted voluntary participation in PTB screening programs. Careful consideration by integration of the relevant health psychology, physical, and sociocultural factors is crucial for planning a health screening program.
Introduction
Pulmonary tuberculosis (PTB), a contagious pulmonary infection disease, has been a health concern globally for a long time. Stopping its spread is the highest aim of global and individual national healthcare programs. In the year 2015, the WHO endorsed an "End of tuberculosis" strategy aimed at eradicating tuberculosis by 2035 (1). Therefore, PTB control was included in Goal 3 of the Sustainable Development Goals (SDGs) for control of communicable diseases, and under SDGs issue 3.3, PTB, AIDS, malaria, neglected tropical diseases, hepatitis, water-borne diseases, and other communicable diseases were the targets (2). The increasing incidence of PTB is a major health concern, and the process of its effective control is challenging. Public screening in areas posing a high PTB infection rate has been carried out in many countries. It is noteworthy that the mortality rate of patients with HIV and PTB co-infection has been decreasing, while that of non-HIV patients has remained stable (3). This is possibly because, besides the recent advances in antiretroviral drugs, regular medical check-ups and screening tests including chest radiography (CXR) can be performed. Because PTB is a usual co-infection in HIV patients, early detection and treatment can reduce the mortality rate. This finding highlights the benefit of regular health and/or CXR screening, especially among those who have an underlying immune-compromised state. While active participation in health screening programs to prevent PTB spread among the public is required, some barriers such as knowledge; socioeconomic, cultural, or religious beliefs; or conflicting psychological perceptions of the disease exist. Therefore, strengthening health education and campaigns is needed to foster public understanding and disease recognition. Understanding the perception of health and disease in potential participants is also crucial. First, evaluation of how they perceive the risk, susceptibility, and severity of the disease is mandatory (4-9). Then, removal of all possible barriers to access to the available health services, whether geographic, travel-related, individual emotional factors, or non-rational thoughts, should be encouraged. This way of a public approach to encourage healthcare participation is based on the health belief model (HBM) (10-13).
Several global and national strategies to control the spread of PTB have been applied. Health education, campaigns, and interventions for facilitating the active participation of the public in PTB screening programs are widely implemented. Furthermore, mass screening of the public by CXRs or several laboratory techniques and providing treatments to the diagnosed PTB patients are common strategies implemented worldwide to reduce the incidence of PTB. Despite these intensified health programs, the incidence of PTB in some specific locations still increases. The barriers to access to these health programs, i.e., geographic separation or difficulty in traveling, may be one of the contributing factors. However, other factors such as individual psychosocial, economic, cultural, or religious beliefs can contribute to the non-acceptance of the well-provided health services too. While routine CXR is the simplest way for PTB screening and is widely available, the rate of active participation in screening by CXR is still lower than expected in many countries (14). The reasons for participants' reluctance to undergo CXR screening may be that the clinical symptoms of PTB are less severe at the beginning and progress slowly, leading to under-recognition and low perception of the infection and its fatality, or that it is considered a stigma of low socioeconomic status in some societies, causing unwillingness in people to undergo CXR screening. The discrepancy between the availability of CXR and the engagement in screening radiography by people requires further exploration.
Thailand is one of the top 14 countries worldwide with a high PTB incidence. Due to the less severe pulmonary symptoms compared with other pulmonary infections and the under-recognition of acquiring PTB as mentioned, it can spread widely if the infection control measures are not stringent (15). In the year 2022, there were 103,000 (143 per 100,000) newly diagnosed or recurrent cases of PTB reported in Thailand, among which 1,200 died of the disease (16). The successful treatment rate in Thailand during 2013-2020 was 81.5-86.3%, which was lower than the global target (90%). The reasons were as follows: 9.3% of the PTB patients died before completing the treatment, among whom patients aged > 65 years accounted for 19% of the deaths, and non-compliance with the treatment provided accounted for 5.4% of the deaths (17). The under-recognition of acquiring PTB infection, low active participation in medical screening, and low treatment success rate together contributed to the high incidence of PTB in Thailand.
The high incidence of PTB in Thailand also impacted the situation of PTB in Songkhla, a southern province of Thailand. It was ranked eighth among the top ten provinces with a high PTB incidence in Thailand according to the records of the Department of Disease Control, Ministry of Public Health, from October 2020 to February 2021 (18). In the first quarter of 2023, a total of 688 newly diagnosed and recurrent PTB cases were reported in Songkhla province, which accounted for 34.8% of all pulmonary infections reported in the province. Hat Yai, Meung (the study area), and Sadao districts were the top three districts reported to have a high PTB incidence in the province (16). Then, PTB screening programs for early detection and treatment, which were national policy-driven strategies for PTB control in Thailand, were implemented in this province. Routine CXR has been accepted for mass PTB screening in both community and specific settings due to its simplicity of application and high cost-effectiveness (19, 20). In this study, we aimed to evaluate the impacts of the demographic characteristics of the screening program respondents, their attitudes, and their levels of perception of individual domains of the health belief model (HBM) toward PTB prevention. Further analysis was carried out to determine the independent factors from the variables mentioned in predicting the PTB prevention practices among respondents. The evaluation of the impact of the individual domains accounting for the HBM construct was specifically focused on providing insights into the powerful motivators that could generate a strong intention to participate in the screening program among the program respondents. The understanding derived from this study can be useful for designing and implementing future programs in similar settings.
Theoretical models
In this study, the HBM was specifically focused on as a significant motivator of voluntary participation in the screening program. We believed that achieving high levels of perception in the HBM domains would subsequently facilitate attitudes and adherence to recommendations of PTB prevention practice. The HBM, originally a social psychology concept first described by Rosenstock (23), has been considered in planning health programs or services. It is a useful predictor of adoption and long-term adherence to the designed health programs, indicating the program's sustainable success. Considering the concept of the HBM, it consists of two opposite arms of encouraging and discouraging domains to adopt and adhere to health suggestions or interventions. Perceived disease susceptibility or risk, perceived disease severity, perceived benefits from adopting the suggested health program, cues to action following the recommendation, and self-efficacy are parts of the encouraging arm, whereas perceived barriers are in the discouraging arm. Aiming at high adoption of and adherence to the program from the participants, promoting a high level of perception of the risk of acquiring a disease, susceptibility, and disease severity should be stressed. Meanwhile, to lessen or remove the perception of barriers to engaging in a health program, detailed cues to action should be meticulously introduced to the participants. Then, self-efficacy and self-confidence for conducting the advised health practices will be formed in the program participants. It was suggested that the HBM alone, or enhanced by self-efficacy, a high level of knowledge, or attitudes toward disease prevention, was an important motivator for high adoption of and adherence to a health program (4-6, 24-27).
In addition to HBM, the internal health locus of control (IHLC) can enhance self-efficacy for proper self-management of one's health. A previous study suggested that facilitating the self-efficacy of program participants through enhancing the perception of HBM and strengthening belief in IHLC concurrently were useful for achieving the expected outcomes of a health program (28). Furthermore, another study stressed that self-efficacy was possibly more powerful than the HBM or other health psychology concepts in predicting sustainable adherence to health advice or programs by the program participants (29).
In conclusion, this study mainly integrated the HBM with IHLC constructs to advise participants on the health practices required by the program. We ultimately expected the formation of self-efficacy in the participants so that they could deliberately decide to join and follow the program activities with confidence.
Terms, definitions, and concepts used in this study
The terms and definitions used in the study included the following:
(a) Health Belief Model (HBM) is a psychological construct describing the fundamental factors that influence an individual's decision to participate in a health program, follow health advice, or accept a suggested treatment or disease prevention. It consists of the domains that evaluate an individual's perception of disease vulnerability or risk, disease severity, benefits of adherence to the health advice or services, barriers to access to the services or to follow the health advice provided, and cues to action.
(b) Self-efficacy indicates the level of an individual's self-confidence in managing one's health condition competently.
(c) PTB prevention practices refers to the expected health behaviors or practices implemented to protect a person from contracting PTB.
Study setting and design
The current study was conducted in "Kho Taew," a subdistrict under the governance of Meung district, the metropolitan city of Songkhla province, southern Thailand. Covering an area of 28.4 km2, it is located 14 km north of the Meung district, where every governmental service, including healthcare, is available. Travel between the study area and the metropolitan city is convenient by road. There are two primary health care units (PHCUs) in the study area. Kho Taew was selected as the study site due to its reported high incidence of PTB. The Songkhla Provincial Public Health Office reported that Meung district, including Kho Taew, ranked second highest in PTB incidence in the province. This study aimed to understand the perception of the PTB health burden and the motivators or barriers for the community people to participate in a provided PTB screening program in such a high-risk PTB area, particularly considering that they resided near the regular health-service centers of the province, which were accessible without any difficulties. The study design used in this study was a cross-sectional, exploratory, and descriptive design.
Sampling methods
In our study, we invited all community dwellers aged 18 years and above, who were a high-risk group for contracting PTB, to voluntarily participate in the PTB screening program by conventional CXRs and enrolled them in this study. We understood that this method might cause selection bias, but we extensively explored the significant motivators or barriers to participation in the screening program, whether personal demographic, socioeconomic, religious belief, or levels of perception of individual domains of the HBM.
Study method
We employed a cross-sectional and descriptive design. The primary data including demographic characteristics, level of perception of an individual domain of HBM toward PTB prevention, attitudes, and regularity of practice of PTB prevention were collected by personal interviews of the program participants.
Study tools
The tools for data collection were questionnaires developed by the researchers to evaluate the perception, attitudes, and regularity of PTB prevention practices in the community. The questionnaires had passed content validity and reliability tests before employment, as shown by the index of item-objective congruence (IOC) and Cronbach's coefficient, respectively, addressed below.
(a) Perception of HBM domains (six interview domains, with the score ranging from 5 to 25 points/domain). The answers to each interview question according to the HBM domains were classified into five levels: strongly agree, agree, uncertain, disagree, and strongly disagree (5 points/level).
(b) Attitudes toward PTB prevention (ten interview items, with the score ranging from 1 to 5 points/item, total score 10-50 points). A 5-point Likert scale consisting of strongly agree, agree, uncertain, disagree, and strongly disagree was used (1 point/level).
(c) Regularity of PTB prevention practices performed (five interview items, with the score ranging from 1 to 5 points/item, total score 5-25 points). Five levels of regularity of PTB prevention behaviors, including always, frequently, sometimes, rarely, and never, were applied (1 point/level).
Data collection
After the ethical approval and informed written consent from the participants were obtained, the interviews for data collection were carried out by a group of well-trained fifth-year medical students as a part of their study in community medicine. Then, the CXRs for PTB screening were performed. We collected general demographic characteristics of the study participants, the levels of perception of individual HBM domains related to PTB prevention, and the attitudes and regularity of PTB prevention practices. A standard screening CXR was performed with a mobile radiological imaging machine (Fujifilm, model FDR Smart X). The findings from the chest images were confirmed by two independent radiologists in our institution. If there was a disagreement between the initial reports, the consensus for the final diagnosis was obtained by discussion with a third independent and clinically blinded radiologist. If an abnormal CXR was suggestive of PTB, the patient would be transferred for diagnosis confirmation and treatment accordingly at Songkhla Provincial Hospital. The interviews and PTB screening radiographs were performed during 9-11 May 2023.
Data analysis
Descriptive statistics were used to describe general demographic characteristics. The Wilcoxon rank-sum test was used to test the significance of associations between general demographic characteristics and the regularity of PTB prevention practices. The correlations between the level of perception in individual domains of the HBM, attitudes, and regularity of PTB prevention practices were analyzed by Spearman's correlation (p < 0.05). The variables showing significant associations or correlations were further analyzed by multiple linear regression analysis to determine the significantly independent predictors for carrying out PTB prevention practices regularly (p < 0.05).
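For readers who want to reproduce this kind of univariate-then-multivariable workflow, a minimal Python sketch is given below. The column names, the toy data, and the pandas/scipy/statsmodels calls are illustrative assumptions for this sketch, not the authors' actual analysis code.

```python
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Hypothetical survey data: one row per participant
df = pd.DataFrame({
    "practice_score": [21, 18, 23, 20, 24, 17, 22, 19],    # regularity of PTB prevention (5-25)
    "perceived_benefit": [22, 18, 24, 20, 25, 16, 23, 19],  # HBM domain score (5-25)
    "underlying_disease": [1, 0, 1, 1, 1, 0, 1, 0],         # 1 = has an underlying disease
    "islam": [0, 1, 0, 0, 1, 1, 0, 1],                      # 1 = believes in Islam
})

# Univariate screening: Wilcoxon rank-sum test for a binary characteristic ...
with_disease = df.loc[df.underlying_disease == 1, "practice_score"]
without_disease = df.loc[df.underlying_disease == 0, "practice_score"]
print(stats.ranksums(with_disease, without_disease))

# ... and Spearman correlation for an ordinal HBM domain score
print(stats.spearmanr(df.perceived_benefit, df.practice_score))

# Variables that pass the univariate step enter a multiple linear regression
model = smf.ols("practice_score ~ perceived_benefit + underlying_disease + islam", data=df).fit()
print(model.summary())
```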
The justification for selecting this study method was based on the requirement to assess the real and current HBM perceptions, attitudes, and regularity of PTB prevention practices in the community. Hence, the descriptive and exploratory analyses were considered suitable to respond to the study objectives.
Study population and sampling technique
Study population
The study population consisted of dwellers of "Kho Taew" subdistrict of Meung district, Songkhla, who were aged 18 years or above and voluntarily participated in the PTB screening by CXRs done in the community. This population was at risk for PTB contraction due to the high incidence of PTB in their community.
Sampling technique
Our sampling technique was non-randomized and inclusive. We enrolled all community dwellers who voluntarily participated in the CXR screening program. This approach was chosen because these groups are generally considered to be at higher risk for PTB as mentioned before. The rationale of using non-randomization was specifically to gain comprehensive insights into their perceptions of the HBM and its prediction of the decision to participate in the PTB screening program.
Determination of sample size
The sample size for our study was calculated based on the requirements for statistical power, which resulted in a total of 262 participants. This figure was derived by using a standard sample size calculation formula for multiple linear regression analysis. The specific parameters used in this formula included an effect size of 0.05 and a power of 0.95. These values were chosen to ensure that the study had adequate power to detect statistical significance and was also feasible within the constraints of the study setting and population.
The choice of an effect size of 0.05 was based on conventional standards in epidemiological research, which aim to detect small-to-moderate effects in community-based studies. The power of 0.95 was selected to provide a high probability of correctly rejecting the null hypothesis (i.e., detecting a true effect) if it indeed exists. This high level of power reduces the risk of Type II errors, ensuring that the study findings are robust and reliable.
In accounting for the response rate, our approach in the study was to invite all eligible individuals in the community. We then monitored the actual number of participants who voluntarily participated in the study and received the PTB screening radiographs. This method ensured that we reached the required sample size of 262 participants.
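The power calculation described above can be sketched with the noncentral F distribution, solving for the smallest sample size that gives the target power for the overall F test of a multiple linear regression with a Cohen's f2 effect size of 0.05 and power of 0.95. The number of predictors, the alpha level, and the convention that the noncentrality parameter equals f2 times n are assumptions added for illustration, so the result will match the reported 262 participants only under the authors' exact (unstated) settings.

```python
from scipy.stats import f as f_dist, ncf

def regression_sample_size(f2=0.05, n_predictors=6, alpha=0.05, power=0.95):
    """Smallest n reaching the target power for the overall F test of a multiple regression."""
    n = n_predictors + 2                      # smallest n with positive residual df
    while True:
        df1, df2 = n_predictors, n - n_predictors - 1
        ncp = f2 * n                          # noncentrality parameter (one common convention)
        f_crit = f_dist.ppf(1 - alpha, df1, df2)
        achieved = 1.0 - ncf.cdf(f_crit, df1, df2, ncp)
        if achieved >= power:
            return n, achieved
        n += 1

n, achieved = regression_sample_size()
print(f"required n = {n} (achieved power = {achieved:.3f})")
```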
Variable measurements
The dependent variable in this study was the regularity of PTB prevention practice. We derived the outcomes from the questionnaire evaluating the regularity of PTB prevention. The questions used to evaluate this dependent variable are shown in Table 4, reporting the regularity of PTB prevention practices.
The independent variables were the community dwellers' demographic characteristics, attitudes toward PTB prevention, and level of perception of individual domains of the HBM. These variables were obtained by a personal interview with the program participants. The levels of perception of individual domains of the HBM and attitudes were evaluated through designed and validated questions in the related questionnaires, as indicated (Tables 2, 3).
The scoring and the stratification of the obtained scores into different levels were described in the Study tools section.
Ethical consideration
We confirmed that we strictly followed the regulations of the 1964 Declaration of Helsinki and related standard ethical guidelines in conducting this study. Consent for participation and publication of the study was obtained from all participants. The participants' personal or identifiable information was kept completely anonymous.
In addition, the study protocol was reviewed and approved by the ethics committee of the Faculty of Medicine, Prince of Songkla University, an institutional ethics review board (EC code 66-177-9-2).
Results
Population, livelihoods, and healthcare service of the study area
During the study period, "Koa Taew" comprised 2,463 households with 11,519 people, including 8,176 aged 18 years or above according to the subdistrict civil registration. The original inhabitants, and the majority of the people here, follow Islam (Thai Muslims). Agriculture, e.g., rice fields and rubber plantations, followed by raising livestock, was the main livelihood (30). There were two PHCUs in this area, each of which was headed by one professional nurse and two to three assistants. Most of the Kho Taew people usually visit one of the PHCUs for initial healthcare and medical treatment. In case of complicated medical conditions, the patients are transferred to Songkhla Provincial Hospital for specific investigations and treatments. Based on the 2022 annual report of the Songkhla Provincial Public Health Office, Meung district, including the "Kao Taew" people, ranked second for high PTB incidence in the province (16).
Study population demographic characteristics
A total of 317 community dwellers who voluntarily participated in the PTB screening program were enrolled in this study. Two participants were excluded as they had been diagnosed with PTB and were under treatment, and four participants did not meet the inclusion criteria. Subsequently, 311 of 8,176 (3.8%) people were included in the interviews for data collection before the CXR screening was done. Although the calculated adequate sample size was 262 people, we included all the program participants because of their voluntary participation, and informed consents were obtained for study enrollment. They consisted of 202 (65%) women and 109 (35%) men, among whom 57.9% were aged 65 years or older and only 4.5% were uneducated. Inadequate monthly earnings were reported by 60.5% of the study participants during the interview. Two-thirds (67.2%) of them had one or more underlying health conditions, e.g., essential hypertension, diabetes mellitus, dyslipidemia, chronic airway disease, and HIV infection. Twenty-six participants were current smokers (8.2%). Only six participants (1.9%) were living in the same house with a person currently diagnosed with PTB (Table 1). The high frequency of older people in the community reflects the real situation currently found around Thailand. The longevity of people due to advanced medical care, and being free from employment, are the reasons for the high proportion of older community members involved. However, they frequently have one or more underlying health conditions, leading to an immune-compromised state. This situation elevates the risk of PTB contraction among them.
Levels of perception of individual HBM domains toward PTB prevention
The levels of perception of individual domains of the HBM toward PTB prevention practices were high, except for perceived barriers. This finding implied that the people of the Kao Taew subdistrict were aware of the risk of PTB contraction or susceptibility, the severity of PTB infection, and the benefits of participation in the PTB screening program. Therefore, they were willing to participate in the PTB screening program provided in their community and to perform the PTB prevention practices (Table 2). The provision of a screening program readily available in their community possibly removed the perception of barriers to access to the program. In summary, when the community people had a clear understanding and recognition concerning the harmfulness of PTB, combined with removing or lessening the barriers, they would feel comfortable and willing to participate in the screening program.
Levels of attitudes toward PTB prevention
The overall median score of attitudes toward PTB prevention in the community was 34 (27.5, 40.0), which was graded as agreement with PTB prevention practices. From this finding, it could be assumed that the people of the Kho Taew subdistrict had positive attitudes toward PTB prevention (Table 3). The positive direction of attitudes toward PTB prevention was a good baseline factor for applying PTB control strategies.
Adherence to PTB prevention practices
The overall median (Q1, Q3) score of regularity of following PTB prevention practices among the Kho Taew subdistrict people in this study was 21 (18.0, 23.0), which was graded as "high regularity" of practice. This finding indicated that the study participants had practiced the expected PTB prevention behaviors very regularly. Moreover, it was also a good baseline factor, particularly when combined with the high level of attitudes (Table 3), for strengthening the PTB prevention behaviors among the study population (Table 4).
Associations and correlations between demographic characteristics, levels of perception of individual HBM domains, levels of attitudes toward PTB prevention, and regularity of PTB prevention practices
Gender (female) (p = 0.024), religion (Buddhism) (p = 0.004), and having an underlying disease (p = 0.004) were significantly associated with regularity of following PTB prevention practices, whereas educational levels, income, and staying in the same house with a person infected by PTB were not (Table 5).
In the multiple linear regression analysis, we found that having an underlying disease and the perceived benefit domain of the HBM significantly predicted the adoption of PTB prevention practices, and by extension the screening program in this study, while belief in Islam was a significant inverse predictor (Table 7).
Discussion
Our study in the Kho Taew subdistrict enrolled more female (65%) than male participants. This difference was possibly because female participants were usually more concerned about their health than male participants (31). In addition, household economic factors required males and younger people to work far from their homes for their livelihoods. Therefore, more women and older people (aged ≥ 65 years; 57.9%) remained in the community and were enrollable for this study. Another explanation could be that the older people were more likely to feel concern about their high vulnerability and actively participated in this PTB screening program. We found from the baseline participants' characteristics that being female, believing in Buddhism, and having an underlying disease were significantly associated with high regularity of following PTB prevention practices, whereas economic status and living with a family member who had PTB were not (Table 5). Therefore, gender, previous illness, and religious beliefs significantly influenced the regularity of PTB prevention practices, and possibly also voluntary participation in the PTB screening program in this study.
The study participants had a high level of attitudes toward the prevention of PTB spread and high regularity of practicing PTB prevention, based on the median (Q1, Q3) total scores obtained (Tables 3, 4). In addition, the perception of the individual HBM domains among the community dwellers was high, except for perceived barriers (Table 2). Taken together, these findings imply that the community dwellers recognized the benefits of, and perceived no significant barriers to, following PTB prevention advice or practices. We believe that, besides the high attitudes toward PTB prevention and the high perception of the advantages of PTB prevention in the individual HBM domains, the health information received through routine public health education promoted their understanding of PTB prevention. Hence, the Kho Taew dwellers were willing to adopt the PTB prevention advice, including the CXR screening provided. Moreover, the relatively high proportion of educated people in the community likely influenced the decision of community dwellers to undergo CXR screening as well. Our finding is supported by studies assessing the impact of health knowledge, attitudes, and the health belief model on disease screening (10, 26, 27, 32, 33). Most current studies evaluated the impact of the HBM on people's decisions to undergo breast, cervical, lung, or colonic cancer screening (34-38); no studies discussing PTB screening in the community are available. A qualitative study examining the HBM domains among African immigrants who declined to participate in a hepatitis B screening program revealed that lack of HBV knowledge and awareness, cultural challenges related to healthcare or preventive care, fear, and social stigma were significant barriers (39). The fear of adverse effects from COVID-19 vaccination is another example of a barrier to vaccine acceptance despite public awareness of the disease's fatality (40-42). The available evidence supports enhancing the perception of the risk of acquiring a disease and of its severity through comprehensive health education, alongside providing cues to action and weakening the barriers (40, 42).

When the demographic characteristics and HBM domains significantly associated or correlated with regularity of performing PTB prevention practices were analyzed further to determine the independent predictors of adopting PTB prevention practices (Table 7), having an underlying disease, perceived benefits of PTB prevention or screening, and belief in Islam (inverse prediction) were the significant independent predictors. Several previous studies confirmed the significant influence of the perceived-benefit and perceived-barrier HBM domains and of the level of understanding or knowledge on people's decisions to participate in cancer screening or health education programs (7-10, 32). Moreover, health education focusing on encouraging the HBM domains, reducing all possible barriers to accessing healthcare services, and teaching how to practice disease prevention can ultimately promote self-efficacy in self-care (11-13, 33, 43-46). Regarding religious beliefs, belief in Islam was a significant inverse predictor of the regularity of PTB prevention practices in this study. We believe this is possibly explained by misunderstanding, for example the belief that getting a disease is God's will and therefore unavoidable, a kind of fate. This way of thinking follows the external health locus of control (EHLC) concept, in which unopposed external influences, i.e., God or bad fortune, shape the individual's self-efficacy in managing one's own health. People holding this view lose self-confidence in managing their own affairs, including health. Earlier studies found that a high level of belief in the EHLC also negatively influenced an individual's health practices (47-51). Changing one's belief in the EHLC, particularly God- or bad-fortune-attributed control, toward the internal health locus of control (IHLC) and strengthening one's self-efficacy should be done by demonstrating objective evidence of health outcomes after following the provided health advice (52). Importantly, health education programs for specific population groups, such as women, married people, and Muslims, that provide cues to action and form positive health attitudes without conflicting with cultural or religious beliefs are encouraged (40, 42, 53).
We specifically determined the independent predictors, among demographic characteristics, levels of perception of the individual HBM domains, and attitudes, of the regularity of PTB prevention practices among the community dwellers. We also explored the impact of socioeconomic status, culture, and religious belief on these practices. Understanding these factors provides useful insight into their influence on the decisions of community dwellers, and their comprehensive, integrated consideration is essential for program success. We stress the significant impact of the HBM as a powerful driver facilitating the decision of community people to participate in a health program; a detailed assessment of the perception of the individual HBM domains before planning and implementing a health program is therefore crucial.
Strengths and limitations of the study
By focusing on a specific community located near the metropolitan city of Songkhla province, where access to healthcare services is not a problem, we found that strengthening the perceived risk and severity of PTB to individual health and reducing barriers to access, based on the HBM construct, could address the unmet need for PTB screening in the community. We further suggest that building self-efficacy could develop self-care ability among the community people.
This study has some limitations. The small sample size and single study location limit its generalizability. Primary data were obtained from interviews, in which response bias from the participants may have been involved. Moreover, the knowledge about PTB provided before data collection might have modified the participants' true perception, attitudes, and practices. Finally, the requirement for the community people to undergo CXR screening for PTB may also have biased their responses.
Conclusion
There is a gap between national policy-driven PTB control and the response of the people in the community in this study. Multiple factors, including demographics, religious belief, and perception of the disease according to the HBM concept, can affect the decision to accept screening advice. A comprehensive evaluation of these factors is mandatory before careful planning of a community health program can begin. Enhancing the encouraging domains of the HBM and lessening the influence of barriers to accessing healthcare services are essential to the success of such a program; in this regard, suitable health education for the community people is needed. Since different social contexts affect people's health beliefs, decisions, and practices, we suggest that expanding the sample size and study setting to cover people of various socioeconomic and health statuses, cultures, and religious beliefs will benefit the design of policy-driven community health programs in the future.
Three questionnaires were designed to evaluate (a) the levels of perception of the individual health belief model domains, (b) attitudes toward PTB prevention, and (c) the regularity of PTB prevention practices performed by the community members. All the questionnaires were tested for content validity by three experts in PTB prevention and treatment [index of item-objective congruence (IOC) > 0.5]. The reliability values by Cronbach's alpha test were 0.87 and 0.90 for (a) and (b), respectively.
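For readers unfamiliar with the reliability statistic quoted above, the following minimal sketch computes Cronbach's alpha from an item-by-respondent score matrix; the simulated data are purely illustrative and not the study's questionnaire responses.

```python
# Minimal sketch of Cronbach's alpha for an item-by-respondent score matrix.
# Rows are respondents, columns are items; the example data are simulated.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = scores.shape[1]                          # number of items
    item_var = scores.var(axis=0, ddof=1)        # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of summed scores
    return k / (k - 1) * (1 - item_var.sum() / total_var)

# Example with simulated Likert-type responses (30 respondents, 5 items)
rng = np.random.default_rng(0)
demo = rng.integers(1, 6, size=(30, 5)).astype(float)
print(f"alpha = {cronbach_alpha(demo):.2f}")
```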
TABLE General demographic characteristics of the study participants.
TABLE Scores and levels of perception of individual HBM domains.
TABLE Scores and levels of attitudes toward PTB prevention.
TABLE Associations and correlations between general demographic characteristics and regularity of PTB prevention practices.
TABLE Correlations between the perception of individual HBM domains and attitudes toward PTB prevention in the community and regularity of PTB prevention practices.
a Wilcoxon rank sum test, b Spearman's rank correlation, * p < 0.05.
TABLE Independent predictors of practicing PTB prevention practices from demographic characteristic data and perception of individual HBM domains.
Constructional design of echinoid endoskeleton: main structural components and their potential for biomimetic applications
The endoskeleton of echinoderms (Deuterostomia: Echinodermata) is of mesodermal origin and consists of cells, organic components, as well as an inorganic mineral matrix. The echinoderm skeleton forms a complex lattice-system, which represents a model structure for naturally inspired engineering in terms of construction, mechanical behaviour and functional design. The sea urchin (Echinodermata: Echinoidea) endoskeleton consists of three main structural components: test, dental apparatus and accessory appendages. Although all parts of the echinoid skeleton consist of the same basic material, their microstructure displays a great potential in meeting several mechanical needs according to a direct and clear structure–function relationship. This versatility has allowed the echinoid skeleton to adapt to different activities such as structural support, defence, feeding, burrowing and cleaning. Although constrained by energy and resource efficiency, many of the structures found in the echinoid skeleton are optimized in terms of functional performance. Therefore, these structures can be used as role models for bio-inspired solutions in various industrial sectors such as building construction, robotics, biomedical and material engineering. The present review provides an overview of previous mechanical and biomimetic research on the echinoid endoskeleton, describing the current state of knowledge and providing a reference for future studies.
Introduction
Sea urchins (Echinodermata: Echinoidea) are known to have been in existence since the Middle Ordovician, about 460 million years ago [1]. During the Early Jurassic, they underwent an intensive adaptive radiation leading to a variety of specialized forms and lifestyles adapted to different marine habitats [2][3][4][5][6][7][8][9][10][11][12][13]. Echinoids are traditionally subdivided into two groups: regularia and irregularia, mainly identifiable based on test morphology and lifestyle [14,15]. Regular echinoids are typically spherical in shape with the peristome (mouth region) on the central oral side and the periproct (anal region) aborally located. The area spanning from the apical system throughout the peristome is divided in five ambulacral and five interambulacral fields, each one characterized by ten double columns of different skeletal plates with species-specific fine-relief ornaments [16]. Regular echinoids possess a prominent pentaradial symmetry superimposed on the ancestral echinoderm bilateral symmetry. In contrast, irregular echinoids are typically aboral-orally flattened and elongated or heart-shaped. The peristome is orally located, but not necessarily in the centre of the oral surface. The periproct migrated from the central aboral side towards the oral side assuming variable positions in the test [12,17]. The ambulacral fields are often restricted to the aboral side forming the petalodium [18]. Thus, irregular echinoids typically possess a strong bilateral symmetry superimposed on the radial symmetry acquired [4][5][6][7][8][19][20][21][22][23][24][25]. The evolutionary success of echinoids is undoubtedly due to the strategic employment of their endoskeleton, macroscopically consisting of three main functional components: test, dental apparatus (Aristotle's lantern) and accessory appendages (such as spines and pedicellariae) [26,27] (figure 1).
In the course of evolution, the echinoid skeletal parts transformed in morphology and physiology, adapting to novel functions [28]. For example, in some species the main function of the spines shifted from protection to burrowing [29-33]. Also, pedicellariae, the small pincer-like appendages, developed different forms, including venomous types [34]. In addition, the morphology of the dental apparatus differed due to feeding strategies, such as scraping and crushing, or it has been entirely eliminated [35-38]. Complementarily, the modifications during evolution have specialized and adapted these skeletal parts to efficiently fulfil specific mechanical roles. In particular, spines and test protect the animal by withstanding biotic (e.g. predatory attacks) and abiotic (e.g. strong wave motion or substrate impact during burrowing or locomotion) mechanical stresses [39-46]; pedicellariae provide further defence and are used for cleaning the echinoid's epidermis [34,47]. Aristotle's lantern plays a direct role in multiple activities such as gripping, scraping, digging, and even locomotion [48]. In particular, the lantern, which consists of an integrated system of 40 skeletal elements, joined and moved by specific muscles and ligaments, represents one of the most complex and optimized biomechanical models in the animal kingdom [49-53].
Due to its unique features, it is not surprising that the constructional design of the echinoid skeleton has attracted the interest of both biologists and engineers. Accordingly, mechanical engineering and material science principles, methods and tools have been applied in exploring the mechanical performances of sea urchins as an integrated system or as single components [23,45,46,54-63]. This biomechanical approach provided important biological insights on form-function skeletal features, taxa comparisons, ecological and evolutionary trends and adaptive meanings, as well as new functional principles used to design innovative bioinspired technical solutions [27,46,54,64-68]. Echinoid skeletal components are structurally and functionally organized regarding, among others, lightness, stability, strength, flexibility and stress resistance. Presently, due to the availability of novel analytical methods, the underlying principles can be better understood and transferred into building constructions and industrial products; a process known as 'biomimetics' and 'bionics' [69]. Otto Herbert Schmitt, an engineer and physicist, coined the term biomimetics in 1957, and its approach was regulated and certified in 2015 by the International Organization for Standardization (ISO 18458) [70]; the term bionics, a combination of the words 'biology' and 'technics', was coined by the US Air Force Major J E Steele in 1960 [71-75]. Often used as equivalents, both terms identify a design process inspired by nature that generates innovative technological solutions. Over the past decade, other terms have appeared in conjunction with this process, such as biomimicry, biomimesis, bio-inspiration, nature-based solutions, biologically inspired designs and numerous others; although often used as synonyms, each one differs in objectives, principles and approaches [69,74-76].
The present review provides an overview of recent knowledge on echinoid skeletal structures. Its intention is to identify the main morphological features and mechanical aspects, in order to provide a reference for future research on biomimetic applications. Accordingly, the following issues will be discussed in detail: (1) current knowledge of biomineralization and material properties of the echinoid endoskeleton; (2) skeletal microstructure (stereom); (3) the three main skeletal components: i.e. test, Aristotle's lantern and accessory appendages; (4) biomimetic process and echinoid-inspired applications in building constructions, robotics, biomedical and material engineering.
Biomineralization
The biomineralization process in echinoid skeletons has been extensively investigated throughout different developmental stages from larvae to adults [77][78][79][80][81][82][83][84][85]. Detailed mineralogical analyses revealed that its mineral matrix consists of calcite, containing up to 15% magnesium [86,87]. Hence, the echinoid skeleton is generally considered a high-magnesium calcitic structure, although its magnesium content can vary significantly according to species and specific skeletal parts, as well as, environmental factors such as temperature or pH [88][89][90][91][92]. These variations determine different mechanical properties of the skeletal parts [88,93]. The calcite in echinoid skeletons displays the optical behaviour of a monocrystalline structure with definite orientation of the optical axes [87,94]. In terms of mechanical behaviour, the rupture response of the echinoid biocalcite results in conchoidal fractured surfaces, which differ from the welldefined cleavage of pure calcite crystals [95][96][97][98]. For many years, this fracturing behaviour was attributed to the presence of organic components (proteins) within the stereom structure [95]. Seto et al (2012) later found evidence that this behaviour is mostly due to the particular echinoid calcite structure [99]. Indeed, this calcite is a mesocrystal composed by numerous aligned calcite nanocrystals (∼100 nm) embedded in a matrix of amorphous calcium carbonate (ACC) and macromolecules [95][96][97][98][99][100][101][102]. These last two components cause the conchoidal fracture properties. Echinoid biocalcite has often been discussed as representing a composite material because it contains up to 0.2% proteins by weight [84,103,104]. From a material engineering perspective, materials composed of two or more constituents with different physical, chemical and mechanical properties are defined composites. The combination of different constituents produces a material with advantageous emerging properties, strongly different from the properties of the same constituents [45]. This is usually the case when the fraction of each composite reaches a relevant amount of the total volume [105]. When the amount of one of the components is too low, the material is not considered a composite; in fact, the second constituent affects the material properties by interacting with the main component rather than contributing its own advantageous properties to the material composition [105].
Vertebrate bone for example, represents a high-performance composite material consisting primarily of collagen and hydroxyapatite. The mineral component provides bone with mechanical rigidity and load-bearing strength, whereas the organic fibrous component provides elasticity and flexibility [106]. In quantitative terms, the hydroxyapatite fraction should account for 35% of the volume in order to reinforce the skeletal material effectively. Nevertheless, the amount of hydroxyapatite in vertebrate bone reaches roughly 50% and the collagen represents the other 50% providing advantageous tensile properties; for this reason, vertebrate bone represents a true composite material, of which anisotropy provides considerable strength and stiffness in at least one direction [45, 106,107]. Herman et al (1988) demonstrated that the amount of organic matrix in echinoid calcite is not sufficient to form a continuous layer within the stereom and does not represent a considerable amount of the total volume; thus, it cannot be considered as an effective constituent in making the echinoid calcite a composite material [96,108]. However, Seto et al (2012) demonstrated that the mesocrystal structure of the echinoid calcite contains between 8 and 10 wt % ACC in mature spines, consequently revealing that ACC calcite could itself serve as a second component of this composite material [99].
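The volume-fraction argument can be made explicit with the simplest composite bound, the Voigt (rule-of-mixtures) estimate of composite stiffness; the numbers below are illustrative only and are not taken from the cited measurements:

\[
E_c = V_f\,E_f + (1 - V_f)\,E_m,
\qquad
\frac{E_c - E_m}{E_m} = V_f\left(\frac{E_f}{E_m} - 1\right).
\]

With an organic volume fraction of roughly \(V_f \approx 0.002\) and a compliant organic phase (\(E_f \ll E_m\)), the predicted change in stiffness is on the order of 0.2%, whereas in vertebrate bone each phase occupies a comparable share of the volume and both contribute materially to \(E_c\).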
Composite systems usually feature the advantage that cleavage propagation is prevented by a suitable alternating arrangement of stiff, strong materials with less stiff materials, creating a functional interface where the latter, having a reduced elastic modulus, assumes a stress-breaking role by absorbing stresses [105,107,109]. This is the case for nacre, which is composed of 95% aragonite and 5% proteins and polysaccharides, as well as for vertebrate bone [45,107]. The employment of calcified collagenous fibres consequently results in an anisotropic material, which is stiff and tough in one direction but brittle in at least another [45,107].
Recently, Lauer et al (2020) demonstrated that, unlike the mechanical properties of other biogenic ceramic composite materials such as nacre, the combination of high-Mg calcite with ACC and organic phases has little effect on the macromechanical properties of Heterocentrotus mamillatus spines [110]. Thus, although the micromechanical properties of the echinoid skeleton are governed by the interplay of ACC, organic phase and Mg calcite [96,99,111], the macromechanical properties seem mainly governed by the porous stereom structure and architecture, resulting in a remarkable damage tolerance [110].
Interestingly, the crystallographic design and macromolecule distribution make the echinoid biocalcite a more isotropic material [112]. In this regard, it has been demonstrated that the anisotropy is larger in synthetic crystals than in young sea urchin spines, whereas mature spines have an extended anisotropy, ranging between those of synthetic crystals and young spines, suggesting the existence of remarkable differences in the biological crystal composition during spine formation and growth [113]. In contrast, vertebrate bones (such as femurs) display a clear and defined preferential orientation of collagen and apatite inside trabeculae, as well as a highly anisotropic trabecular architecture; thus, they are capable of transferring loads more effectively in only one direction [45,107]. However, apart from the mineral composition, echinoid stereom is similarly characterized by a variably oriented trabecular architecture ensuring a more directional resilience [57,114]. Moreover, due to its trabecular meshwork, the echinoid stereom is a lightweight construction and possesses a high level of robustness, e.g. allowing applied forces to bypass malfunctioning trabeculae and to be transferred to the surrounding functional ones.

[Recovered figure caption fragment: Labyrinthic stereom with variable porous texture is the dominant microstructural pattern. In the wide circular insertion area of the overall catch apparatus it is possible to distinguish specific stereom patterns related to muscle and ligament insertions, with density and size of porosity closer and more regular in the ligament area. Adjacent to the tubercle, where the stereom structure tends to become imperforate, pore size decreases. (E) Vertical section of the plate showing a high diversity of the stereom microarchitecture according to zones and related specific mechanical needs. (F) Details of stereom types detected: (1) imperforate stereom; (2) labyrinthic stereom; (3) galleried stereom; (4) microperforate stereom. Bar = 100 μm. lia = ligaments insertion area, mia = muscles insertion area, tb = tubercle.]
Stereom
Stereom [115] is a 3D mesh of trabeculae, i.e. struts, made of biocalcite [114]. It represents a key element responding to the principles of robustness, lightness and stability, due to three primary factors, including (1) the material composition and related mechanical properties, based on material variations through the strategic substitution of calcium (Ca) with magnesium (Mg) in the calcite crystal and resulting alterations of fracture behaviour [84,114,123,124]. Consequently, this lightweight structure denotes an important adaptive achievement within the entire phylum Echinodermata, contributing to its evolutionary success [125-127]. The complex constructional design of the stereom varies from species to species and within both individuals and skeletal elements. Although recognized long ago [128], this structural variability was described in detail by identifying ten different stereom types in the test: imperforate, microperforate, simple perforate, galleried, rectilinear, retiform, laminar, fascicular, labyrinthic and irregular perforate [114]. All of these can be employed in a number of combinations, creating species-specific 3D structural patterns easily recognizable in scanning electron microscope (SEM) images. Architectural variability and possible modulations based on specific mechanical needs have been described in several studies regarding: (1) the test and its individual plates [46, 55-57, 87, 97, 116, 124, 129, 130]; (2) Aristotle's lantern ossicles [52,131]; and (3), more frequently, spines [27, 30, 52, 54, 58, 61, 123, 132-135]. As a rule, stereom density tends to increase in regions subjected to high mechanical stresses, resulting in imperforate or microperforate types in the most exposed areas.

[Recovered figure caption fragment: ... showing the variability and complexity of its internal microarchitecture in relation to the ambulacral pore arrangement: labyrinthic and microperforate stereom prevailing. Bar = 100 μm. irm = retractor muscle insertion area, c = compass, de = demi-epiphysis, dp = demi-pyramid, ef = epiphyseal fossa, p = ambulacral pore, r = rotula, s = suture, subapical fossa, t = tooth.]
In past decades, the mechanical design of the stereom was extensively studied in a two-dimensional view [16,33,84,114,129,130,134,136,137]; however, with the advent of affordable high-resolution computed tomography (CT) scanning, recent studies have explored the stereom using 3D model reconstruction and 3D topological and structural analysis (e.g. finite element analysis, FEA). These modern methods allow detailed analyses of mechanical properties, lightweight constructions and load-bearing systems [55-57, 59, 124, 135, 138]. Accordingly, different mechanical tests on the skeletal layout demonstrated how these stereom variabilities have diverse structural implications [54, 61, 67, 90, 122, 132, 133, 139].
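As a hedged illustration of the kind of basic descriptor such CT-based studies start from, the sketch below computes the solid volume fraction (and hence porosity) of a binarized micro-CT stack; the file name and array layout are assumptions, and the published analyses cited above add topological and finite-element steps on top of this.

```python
# Hedged sketch of one elementary stereom descriptor from micro-CT data:
# the solid volume fraction (1 - porosity) of a binarized scan.
import numpy as np

volume = np.load("stereom_binary.npy")   # hypothetical 3D boolean array
solid_fraction = volume.mean()           # fraction of voxels occupied by calcite
porosity = 1.0 - solid_fraction

# Coarse profile of how density varies through the plate thickness (z-axis)
profile = volume.mean(axis=(1, 2))       # solid fraction per z-slice
print(f"overall porosity = {porosity:.2f}")
print("densest slice index:", int(profile.argmax()))
```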
Here it is also important to remark that the echinoderm skeleton is a proper mesodermal tissue, and that the living stereom contains an organic stroma consisting of cells and extracellular matrix, including collagen fibres [45,114]. The stroma significantly contributes to the integrity of the skeleton, providing indispensable resistance and flexibility. In general, this organic component: (1) reinforces the endoskeleton, providing greater mechanical resistance to the overall structure and continuity to the related ligaments, thus avoiding the risk of fracture at low applied forces [140]; (2) transforms the test into a flexible, jointed integumental layer, meaningfully reducing the impact of bending stresses [140,141]; (3) acts as an energy-absorbance system and stress-breaker, interrupting the propagation of fractures due to material component discontinuity (stereom + stroma = rigid + elastic components) [52, 142-144]; (4) confers reinforcement, support and potential repair to the mineral structure [95,103,108,145,146].
Test
The echinoid test (figure 4) is a multi-element system consisting of a number of skeletal plates joined by sutures. These sutures can be characterized by the presence of interdigitating articular surfaces (comb-joints) often bound together by short collagenous ligaments [23, 27, 33, 46, 57, 63, 140, 147-149]. This constructional design fulfils several mechanical principles, acting as a resistant, lightweight, load-bearing and load-transferring system, as well as being an attachment point for appendages. Structural strength is achieved by hierarchical constructional adaptations, such as: overall shape, plate layout and arrangement (trivalent vertex arrangement, in which three plates meet in one point), skeletal interlocking and reinforcements (e.g. internal buttressing), material distribution and stereom diversity [27, 46, 55-58, 63, 140, 149-151]. These skeletal features have been described as functional strategies which, suitably combined with adaptations of the connective tissue components, allow the echinoid test to withstand compressive, tensile and bending stresses [46,55,63,109,140,141,149]. In particular, collagenous sutural ligaments play a central role in increasing the structural strengthening of the test by binding rigid calcite plates at sutures [140,141]. By measuring the breaking forces of the Strongylocentrotus purpuratus skeleton with intact or removed soft tissues, Ellers et al (1998) demonstrated that skeletons without ligaments broke at lower apically applied forces with respect to those with ligaments [140]. The case of the minute clypeasteroid Echinocyamus pusillus is different: Grun and Nebelsick (2018) showed that its soft tissues do not possess a significant structural function. However, the overall layout and plate connections of Strongylocentrotus and Echinocyamus are fundamentally different, due to the extensive skeletal plate connections in Echinocyamus, which are responsible for its overall stability [46, 55-58, 149]. Some echinoid morphologies are also optimized with respect to hydrodynamic property adaptations, such as the lunulae of sand dollars, which are considered to reduce lift when sand dollars are on the sea-floor surface and subjected to strong currents [62].
Due to the structural form and architecture of the test, echinoids have been extensively investigated in order to understand their constructional design and mechanical behaviour in detail [23,45,46,55,60,63,68,107,148,150,[152][153][154]. Detailed morphospace analyses were carried out to explain and predict extinct and extant echinoid test shapes by considering possible phylogenetic, physical and mechanical factors [154][155][156][157][158]. Thompson (1917) in particular, carried out a pioneer study on test shape using a liquid drop analogy to describe the shape and growth of regular echinoids [158]. Ellers (1993) supported this hypothesis using the thin shell theory to explain test curvature defining the echinoid morphospace in two parameters: (1) the apical curvature; (2) a proportion of the vertical gradient of pressure to the internal coelomic pressure [156]. Seilacher (1979) proposed that the echinoid test should be analysed as a mineralized pneu-structure that grows when internal pressure exceeds external tension, varying its morphology through plate growth [23,28]. However, Ellers and Telford (1992) measured the internal coelomic pressure in the regular sea urchin S. purpuratus and Lytechinus variegatus [159]. They found that internal pressure fluctuates rhythmically about −8 Pa and was negative for 70% of the time, disempowering the pneu-hypothesis that requires an internal positive pressure [23,28,160]. These rhythmical fluctuations in pressure could be mainly caused by the lantern movements that change the curvature and tension of the peristomial membrane [159]. Telford (1985) analysed the test as a dome structure utilizing both the membrane theory and static analysis to determine its behaviour under different loads; thereby assessing the hypothesis that the test form was constructed to resist external forces [63]. On the whole, taking into account these and other studies, test form and growth were described and explained using different theoretic models, based on a total of nine hypotheses, in addition to different computational models [for review see 152]. The echinoid test growth is mainly based on two combined processes, namely: plate addition, i.e. the insertion of new plates in the apical system [21], and plate growth, based on a peripherally accretion or reabsorption of skeletal material [161]. However, the main distinctive feature of the growth process lies in the mutable collagenous tissue (MCT) present at plate sutures that can undergo rapid changes in mechanical properties (switching reversibly between stiff and compliant states) accommodating little movement and growth [for review see 162]. In particular, sutures allow growth maintaining a space between plate margins ('plate gapping') [152,155,163] in a manner that they do not unite and continuously expand interacting with the adjacent plates. Usually in regular echinoids, sutures remain open up to the adult stage providing the test some degree of flexibility and mechanical advantages in sustaining loads [140,141].
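The membrane-theory reasoning invoked above can be made concrete with the textbook stress state of a thin spherical shell of radius \(R\) and wall thickness \(t\) under a uniform pressure difference \(p\), with bending terms neglected:

\[
\sigma_\theta = \sigma_\varphi = \frac{pR}{2t}.
\]

Taking the measured coelomic pressures of order \(|p| \approx 10\ \mathrm{Pa}\) as an illustrative input, the resulting membrane stresses are orders of magnitude below those imposed by external loads, which is consistent with the rejection of the pneu-structure interpretation discussed above.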
Modern methods such as 3D acquisition (e.g. μCT and photogrammetry), digital modelling and simulation, e.g. FEA are recently being adopted, providing novel answers to questions about test morphology, functional performance and mechanical behaviour (figure 5) [46, 55, 57, 60, 138,141,164,165]. As pioneers in this field, Philippi and Nachtigall (1996) conducted FEA-analysis describing the behaviour of the regular echinoid test (Echinus esculentus) under diverse loads [60]. Their studies highlighted the structural load-bearing efficiency of the test and interpreted its peculiar spherical shape as the most adapted form to sustain the tensile stresses resulting from the tube feet activity [60]. Recently, Grun and co-workers focussed on the clypeasteroid skeleton using x-ray μCT, SEM observations and physical and virtual tests in order to analyse the hierarchical structural design of the E. pusillus test [46, 55, 57]. They displayed in detail the mechanical properties of the test at different hierarchical levels, i.e. from the overall shape-although consisting in a discontinuous structure divided into several polygonal plates, it behaves as a monolithic structure-to the plate micro-architecture, internal supports and stereom variability, all described as specific functional devices for bearing and transferring loads.
Aristotle's lantern
Most regular echinoids, extant or extinct, possess a complex dental apparatus, traditionally called Aristotle's lantern. The apparatus is a biomechanical and dynamic system arranged according to perfectly pentameral symmetry and consisting of an intrinsic part, the lantern itself, and an extrinsic part, the perignathic girdle, i.e. the inner edge of the test [49-53]. These two parts are connected by muscle bundles (five pairs of retractor muscles and five pairs of protractor muscles), and ligamentous structures (peristomial membrane and five pairs of compass depressor bundles) [51] (figure 6). The lantern consists of forty anatomically distinct skeletal ossicles: ten demi-pyramids, ten epiphyses, five rotulae, ten compasses and five teeth (figure 7) [52]. They are all reciprocally joined by specific articulations (movable joints, semi-movable or rigid sutures), interconnected by articular ligaments and moved by anatomically and functionally well-defined muscles consisting in five pairs of retractor and protractor bundles, five massive inter-pyramidal muscles and five compass elevator muscles. The lantern muscular component is also represented by other muscular elements, namely myocytes of the lantern coelomic epithelium, which are involved to a minor extent (such as the thin muscle layer included in the compass depressor ligaments) [49, 53, [166][167][168][169].
Conversely, irregular echinoids generally do not possess a lantern, although it can appear as a vestigial trait in juveniles of Cassiduloida and Spatangoida; adult Holectypoida and Clypeasteroida are exceptions [37,38]. However, these persistent lanterns differ remarkably from the regular lantern model: they are flattened, relatively larger, non-protrusible [38] and provided with teeth that move horizontally with respect to the substrate, designed to crush sediment rather than to grasp [170,171]. Furthermore, in contrast to the lantern of regular echinoids, these flattened types appear to be used only for feeding, whereas the Aristotle's lantern of regular sea urchins is employed in other important activities [33,53,166] such as digging, locomotion, respiration and circulation of coelomic fluid [170,172,173].
Static and dynamic mechanical studies were carried out on the echinoid lantern, specifically on its skeletal ossicles, muscular system and ligaments [50][51][52][53][194][195][196][197][198], as well as, on the peristomial membrane (figure 6(A)), a flexible area consisting mainly of fibrous connective tissue surrounding the mouth and connecting the lantern to the test; with its dynamic mechanical behaviour it contributes to the lantern's stability and motility [168,169,199,200]. Biomechanical models, experimental mechanical tests and computer simulations were elaborated and integrated to determine lantern movements, muscular forces and constraints during different activities in regular echinoid lanterns [49,184,192,193], whereas other mechanical studies were addressed to define the biting forces developed by the dental apparatus in sand dollars [170]. It was assessed that the overall lantern can show resistance to different mechanical stresses directly or indirectly related to motor activities by means of a number of specific macro-and micro-structural adaptations. From a macrostructural perspective, the first mechanical advantage of the lantern lies in its strategic subdivision into complementary parts and correlated pieces, starting with the five multipiece jaws (figure 7), each consisting of distinct elements sutured together (two symmetrical demi-pyramids and two symmetrical demi-epiphyses) providing a perfect alveolus that contains and protects the long internal tooth ensuring its continuous growth (see below) [52] (figures 7(B) and (C)). The second advantage regards the jaws that are joined to each other by means of complex multivalent articulations endowed with specialized articular ossicles, known as rotulae [52] (figure 3(C)). They play a role in the basic opening and closing of the jaw, modulating its reciprocal tilting and swinging, and in the independent movements of the compasses (raising/lowering) on the aboral side of the lantern. These are sophisticated devices enabling the structure to be mechanically versatile, resistant and deformable [51-53, 143]. Nonetheless, the major complex adaptations were found in both skeletal microstructural variations/differentiations (figure 3) and material composition. The micromechanical design of the skeletal parts of the lantern was extensively investigated and described using SEM by Candia Carnevali and co-workers in comparative studies of the cidarid [51] and camarodont [52] lanterns. Detailed SEM studies also focussed on the micro-structure of sea urchin teeth [171,183,[187][188][189][190][191]200]. Subsequently, pyramids and teeth were further analysed employing Micro-CT imaging, which permitted the acquisition of 3D images leading to detailed insights into different speciesspecific geometries and microstructures [120,131,[201][202][203][204][205] (figures 7(D)-(F)). These studies demonstrated that the lantern ossicles tend to have a similar basic organization in terms of adaptive stereom variability in relation to interactions with skeletal elements, ligaments or muscles, as well as, in relation to specific functional/mechanical requirements.
The only exception appears to be the teeth, which display a unique microstructural architecture composed of a magnesium-bearing calcite crystal combination, such as monocrystalline plate-elements, monocrystalline fibrous-elements and polycrystalline matrix, with a variable amount of organic macromolecules (about 0.2-0.25 wt %) [103,118,119,204,[206][207][208][209]. Echinoid teeth are elongated, moderately curved and highly variable in shape, and can be classified in four types (U, T, prism and wedge-shaped teeth) on the basis of their different cross-sectional profile (figure 7(C)) [21,173,191,205,210]. Along the longitudinal axis, each tooth displays three main well-differentiated parts: an aboral growing portion (plumula), a midshaft and a mature portion characterized by a sharp oral tip [191]. In order to cope with the constant tip abrasion due to the interaction with the substratum, the tooth grows continuously at the plumula level and then slowly descends along the jaw following an inner pyramidal furrow [191,211,212]. The mature part consists of three main zones characterized by well differentiated structures and functions: (1) the primary plate zone, organized in lamellar plates and prisms obliquely oriented with respect to the longitudinal tooth axis; (2) the stone part, formed by calcareous needles surrounded by a polycrystalline matrix and connected to the primary plates by lamellae; (3) the keel, consisting mainly of inner prisms and of outer secondary plates with peculiar carinar prolongations [118-120, 131, 202-205, 213]. Echinoid teeth were analysed in detail using various techniques, such as SEM, energy-dispersive x-ray spectroscopy analysis, x-ray micro-tomography and spectromicroscopy, as well as micro-and nanoindentation, in order to identify their microstructure, material distribution, mechanical behaviour, and chemical composition. These analyses allow an interpretation of the tooth's structural architecture and integration in relation to its complex mechanical performance [118][119][120]214]. In terms of structure-function correlation, the lamellar plate components appear to be a structural solution adapted to reinforce the zones subjected to maximum compressive stress (abaxial part), whereas the fibrous elements are employed in the zones of maximum tensile stress (adaxial part: the keel) [52, 120,202,215]. At the tooth tip, plates and fibrous elements split off due to shearing forces consequently creating a fracture at the surrounding organic layer, generating a mechanism for self-sharpening [119,215]. Recently, this mechanism has been further investigated using 3D techniques in-situ SEM experiments and mechanical measurements combined with a nonlinear finite-element analysis [216].
In conclusion, the tooth is adapted to minimize and respond to multiple and combined mechanical stresses such as shear, bending, torsion and buckling produced by gripping, scraping, digging and locomotion [52, 119,120,204,210,215]. The strategic employment of magnesium-calcitic material together with its mechanical properties, in combination with the orientation of a plate-and-prism arrangement (according to the lines of force of the applied loads), result in a remarkable increase in tooth hardness (twice that of inorganic calcite itself) allowing echinoids to dig efficiently and deeply into calcareous rocks [93,120,201,214,215,[217][218][219][220][221][222].
Accessory appendages
Echinoids possess a variety of articulated accessory appendages [18,32,77] including spines, pedicellariae and sphaeridia. Spines and pedicellariae are primarily involved in defence and cleaning and can often show signs of damage and repair or can even be autotomised [223][224][225]. Sphaeridia are minute skeletal spheres attached to the test around the peristomial ambulacral regions (lacking in cidaroids) and are considered to be statoreceptor and proprioceptor organs. However, little is known about their morphology and physiology [39,226,227].
Pedicellariae are minute pincer-like structures distributed on the test surface, particularly around the peristome (figure 8) and periproct [34,228] and are employed in different activities such as gripping, defence, covering and cleaning [34,39,47,77,[229][230][231][232][233][234][235][236]. As most musculo-skeletal organs, each pedicellaria consists of a stalk, neck and two to five valves. Pedicellariae are highly variable in shape, often denticulated and sometimes armed with venom glands [77,233,237,238]. Due to their variable shape, pedicellariae have been extensively used in taxonomy [5-11, 34, 239-241]. The valves show specific stereom structures and are equipped with functionally distinct muscles (abductors, adductors and flexors) and collagenous ligaments [242], which contribute to its gripping force [231,233]. They generally react to chemical and tactile stimuli, in fact most valves are equipped with fields of chemosensitive cells [243][244][245]. As reported by Cavey and Markel [39], and further investigated by Coppard and co-workers [34], there are four main types of pedicellariae: (1) globiferous pedicellariae, which possess venom glands and denticulated valves with large and strong adductor muscles: they are employed as a deterrent against medium and larger predators; (2) ophicephalous pedicellariae, which possess three denticulated valves, provided with a glandular portion involved in releasing anti-fouling substances onto the test surface, and larger processes for muscle attachments enabling them to exert more strength and reduce muscle fatigue during object holding (figure 8(B)); (3) triphyllous pedicellariae, which are the smallest type of pedicellariae, are characterized by three small valves, long muscular neck and stalk: they are not sensitive to touch, have limited holding time and are employed to free test surface of minute particles (figure 8(C)); (4) tridentate pedicellariae, which are the largest and most common type, consist of three denticulated valves: they are activated by tactile stimulation and employed in removing larger particles or preventing test contamination by invertebrate pests. Past studies on pedicellariae generally consisted in descriptions of their morphology, activities and functions [228,236,246,247]. Noteworthy are Campbell's studies that analyse in detail the forms and activities of the pedicellariae, identifying jaw movements, closing and opening responses (occurring after direct reflex-arc stimulation or indirect nerve stimulation), as well as, their latency, speed and duration, receptor distribution and reaction [229][230][231][232][233]244].
Spines are elongated structures consisting of a shaft (neck and tip), milled ring and base [80]. Each spine is joined to a respective tubercle at a ball-and-socket joint [80,114,248] and can be moved or firmly maintained in position due to the combined synergic action of a muscle and a ligament, known as the 'catch apparatus' [249]. The spine base is enclosed by an articular envelope comprising a continuous outer layer of parallel muscle fibres, running from the spine to the test, and an inner layer of parallel ligament fibres with spine-test attachments. The ligaments consist of MCT [for review see 162,250], which can drastically and quickly change its mechanical properties under nervous control. The presence of MCT allows the tensile state of the ligament to change rapidly from a soft and flexible condition, favouring muscle action during movement, to a rigid condition, locking the spine in position without muscle involvement and providing a fatigue- and energy-free holding mechanism [251]. Spine shape and size differ greatly from species to species: they can be long, hollow, thin and needle-like, as in camarodonts; cylindrical or flattened, long or short, streaked or variously decorated, as in cidaroids; or modified and miniaturized, as in irregular echinoids (figure 9) [29-33]. Spines perform different functions, such as locomotion, feeding and burrowing [29,39,42,252]. They also act as protection from physical trauma and predators [40, 253, 254] and as a stress impact reducer [43, 255-258], which is one of their main roles in the prevention of structural test damage. As reported by Tsafnat and co-workers [135], this is achieved by the spine microstructure, which improves resistance to compression. Thus, spines are structurally highly adapted to withstand different mechanical stresses, combining high impact resistance with high energy absorption [43, 54, 59, 61, 65, 123, 132, 133, 135, 239, 258-261]. As for other skeletal components, the mechanical performance of the spines is the result of three hierarchical features, i.e. material properties, microarchitecture and shape. With regard to material properties, even if each spine behaves as a single calcite crystal with the c-axis oriented along its long axis [262]-as shown by polarized light microscopy [89], x-ray diffraction [107], and electron backscatter diffraction [132]-it has a mesocrystalline structure [99] consisting of a highly oriented array of nanocrystals embedded in a matrix of ACC and macromolecules [95-103, 110, 263-265]. The presence of ACC and intracrystalline macromolecules determines a typical conchoidal fracture behaviour, resulting in increased fracture resistance and structural flexibility, as shown for the other skeletal parts [95-99, 110-113, 116, 121]. The material composition within the spine is highly variable (particularly the magnesium concentration, which is significantly higher in the septa than in the spine central core), implying diverse mechanical properties in terms of elastic moduli, hardness and stiffness [61,132,133]. Regarding the microarchitecture, spine stereom types vary greatly from species to species and along the same spine [54, 132, 133]. This leads to a very specific structural behaviour regarding stress pattern distribution and resistance, as shown by mechanical tests such as three-point bending [54, 132, 266] and bulk compression tests [61, 133, 258].
Spine growth lines have also been shown to possess mechanical significance, and their presence could enhance resistance to larger forces [54, 59, 61, 67, 123, 133, 267-269]. Spines can display a peculiar morphology (widely recurrent in nature, e.g. feather shafts and plant stems) consisting mainly of a hollow cylindrical porous structure, well known for its efficient mechanical advantage related to a high strength-to-weight ratio [270]. In addition, many spines are characterized by sets of radial elements such as wedges [54, 59, 123], barbs and bridges, optimizing stress distribution [54, 65, 123, 135], increasing bending stress resistance [259] and preventing fracture propagation [43, 65, 132, 135, 271, 272]. In particular, in Centrostephanus rodgersii, a detailed analysis of spine behaviour under compression, tension and torsion loads by means of micro-CT scanning and FE analysis has led to the identification of stress concentration patterns within spines and their role as mechanical supports [135].
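The strength-to-weight advantage of a hollow cylindrical spine mentioned above follows directly from elementary beam theory; as an illustrative comparison, for an outer radius \(R\) and inner radius \(r\):

\[
I = \frac{\pi}{4}\left(R^{4} - r^{4}\right), \qquad A = \pi\left(R^{2} - r^{2}\right),
\]

so for \(r = 0.8R\) the tube retains about 59% of the solid cylinder's second moment of area (and thus bending stiffness) while using only 36% of the material; radial elements such as wedges, barbs and bridges then help redistribute stress and limit fracture propagation in the thin wall.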
Biomimetic applications
The term biomimetics identifies an interdisciplinary approach that combines the understanding of natural structures, systems and processes with their abstraction and translation into technological applications [69,[71][72][73][74]288]. Biomimetics is neither an imitation of nature nor a mere copy of forms, but rather it is an in-depth comprehension and translation of natural working-principles (e.g. constructional principles of organisms), which can optimize structures in building constructions, industrial products and technical processes [74][75][76].
The biomimetic process is supported by a series of analogies between biological and technical structures enabling the transfer of solutions on functional bases [289][290][291][292]. Indeed, organisms and artefacts are often faced with similar problems, such as the need to increase structural stability and resistance (skeleton/frame), pressure drag reduction (streamline shape and ribbing surfaces of marine animals/hull of boats) and reaction to external conditions (nastic movement of plants/dynamic facades) [45, 103,293]. Hence, by understanding and modelling the adaptive principles of organisms, functional solutions for innovative design inspirations or 'bioinspirations' can be identified stimulating technical implementations [69,71,[74][75][76][289][290][291][292].
Nevertheless, the constructional design of organisms is subject to different factors such as heritage constraints and morpho-functional adaptations to biotic and abiotic factors [28]. Hence, the structural and functional solutions adopted by organisms are often neither the most advantageous nor the best adapted in every situation and context, since they represent a compromise with respect to evolutionary constraints [28]. A specific contextualization and optimization of biomimetic technical solutions is therefore required and can be performed through interdisciplinary collaboration between biology and other scientific fields (e.g. engineering, design, architecture, material science) with the aid of specific tools, such as computer-aided design (CAD), computer-aided optimization, knowledge databases and algorithms [294,295]. Consequently, the abstracted, interpreted and contextualized biological principles can lead to new inspirations for the improvement of structures and/or processes based on analogies of function [28, 69, 71, 74-76, 288-291, 295, 296].
On the other hand, biological structures significantly differ from artificial ones in various important aspects, such as: growth processes generating structures with full functionality and integrity at all stages of life [158,297]; the use of basic autochthonous and sustainable materials, usually characterized by heterogeneity, anisotropy and hierarchy, that determine multiple functions and emerging properties [45, 109, 158, 298]; and integration in the environment and the ability to interact with biotic and abiotic components [299]. On a par with analogies, these differences can also lead to new design perspectives and opportunities [66,76,288,300], e.g. growing structures of material ecology [301,302], responsive dynamic façades for building constructions [288], and hybrid design products [303,304].
The biomimetic procedure is carried out through a series of steps and tools [for review see 74]. Although the methods adoptable in this field are numerous and diverse, they can be assigned to two types of approaches: bottom-up and top-down [305,306]. The bottom-up approach begins by identifying adaptive functional solutions in biological species, followed by the identification of the most suitable design and technological area for their transfer. In the literature, this approach has also been defined in diverse ways: solution-based, solution-driven, biology push, biomimetics by induction and biology to design. The top-down approach proceeds from the analysis of complex technical problems to the search in nature for biological models offering novel solutions. In the literature, this approach is also known as: problem-driven, problem-based, challenge to biology, technology pull and biomimetics by analogy [69, 71-74, 305, 306].
A general bottom-up approach is simplified here into five key steps [74]; in addition, a case study on the test of Paracentrotus lividus is used as an example [141,165]. This process is often not linear due to constraints, context and scaling difficulties [288]. In this regard, the dimensional scale is a crucial factor: organisms have highly different working principles depending on their dimensional realm [109,307]. A direct scaling of the biological solution to the design dimension is not always possible, particularly in building constructions, which concern not only size but also materials, external loading, life cycle, required safety range, etc [288]. For this reason, the abstracted principles usually need to be translated, redesigned and contextualized to be successfully applied as new technical solutions [74,305].
In all these approaches, knowledge integration and interdisciplinary methods and tools are essential for the investigation and design of biologically inspired structures. The study of biomimetics embraces both life and engineering disciplines [72,289,308]. Although the functional characteristics and processes of nature leading to the design of new innovative artefacts are innumerable (e.g. bio-mineralization, growth processes and regeneration), bio-mechanical aspects are the most studied and implemented in the biomimetic field. A series of mechanical principles based on physical-mathematical laws appears to govern the structure-function relationship in organisms, as in artificial structures [45,288]. Hence, the physical-mathematical approach can successfully describe bio-structures and their mechanical problems and performances. As shown by d'Arcy Thompson (1917), this biomechanical approach has been applied for decades [109, 158, 292, 293, 309-311]. Nevertheless, contemporary advances in computational image acquisition, virtual simulation and manufacturing, together with the increased resolution of biological analytical instruments, lead to new developments for interdisciplinary mechanical studies and biomimetics [71,164,295,312]. Both biological structures and principles can be digitally analysed in depth at the micro- and nanoscale and better transferred into a multitude of constructions and industrial products [295, 301-304, 313]. Consequently, biological structures are converted and analysed as 2D/3D models and directly connected to the technical process, becoming archetypes and/or guides for the genesis of the products [66,295]. This creates a supporting process with efficacious tools for designers, engineers and scientists in the transition from real (organism) to digital (2D/3D archetype) and from digital (3D model) to the real entity (physical building, device or product), involving digital manufacturing techniques that reproduce in a rigorous and functional way the analogous strategies and mathematical laws of nature [289].
These biomimetic methods and tools enable not only successful transfer and unique applications, but also a deeper understanding of biological structures, their bauplan and their evolutionary process. This enhanced knowledge of the biological realm gained through biomimetic approaches is referred to as 'reverse biomimetics'. In particular, it can be conceived as an iterative spiral in which the results achieved by the biomimetic approach lead to a more detailed understanding of the biological systems, providing the basis for further investigation and, eventually, for new transfers and developments in biomimetic products [305].
In this complex framework, the skeletal components and mechanical properties of the echinoid constructional design have revealed a high potential in transferring functional bioinspired solutions into new diverse technical applications [27, 55-58, 64, 258]. Recent studies have shown how the echinoid structure can be digitally investigated generating 3D models and applying FE-analyses to identify possible structural and mechanical principles [54-58, 64,138,216]. In addition, based on their primary function, skeletal components have found a major and coherent field of technological application from engineering and architecture to robotics, biomedical and material sciences.
Engineering and architecture
Echinoids have a long history as inspiring models for engineering structures. This interest has recently increased, in particular regarding rotationally symmetrical constructions, defined as echinodomes [314,315]. Detailed analyses of these structures including their mechanical advantages and limits have been technically described and generally well understood. Different load conditions, such as self-weight, snow loads, wind and hydrostatic loads, which can generate over-or under-pressure, can be calculated adapting constructions to specific mechanical needs and functions [315]. Echinodomes have been applied to several constructions including long-term storage containers for gas and liquid fuels such as automobile and aircraft gasoline, mineral oil, and other volatile substances [315]. The advantages of echinodomes are specifically due to their thin-shelled and double-curved architecture that results in mechanical behaviour predominantly following the membrane theory, i.e. in-plane membrane stress, reduced bending stress [315][316][317][318].
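To make the advantage of the membrane state concrete, the textbook thin-shell estimate for a spherical shell can be recalled; this is a generic illustration rather than a result taken from the echinodome analyses cited above, and the radius R, wall thickness t and pressure difference p are simply the obvious geometric and loading parameters:

```latex
% Membrane (in-plane) stress in a thin spherical shell under a uniform
% pressure difference p; bending terms vanish in the ideal membrane state.
\sigma_{\theta} = \sigma_{\varphi} = \frac{p\,R}{2\,t}, \qquad t \ll R .
```

Because the load is carried by in-plane stress alone, the wall can remain thin even for large containers, which is one reason the thin-shelled, double-curved echinodome geometry is attractive for storage structures.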
Additional studies have not only focussed on the overall shape of an echinoid test, but also on specific working principles that have recently been implemented in civil engineering. Grun et al [64,319,320] provided an overview on echinoid skeletal strategies in building constructions, by identifying in the skeleton various structural working principles on different hierarchical levels and their transfer into demonstrators. These are architectural constructions providing a proof-of-concept of specific functional aspects. Transferred structural principles based on echinoid skeleton include: (1) mosaic-arranged plates, where three plates meet in one point in order to avoid straight edges, which may cause kinking; (2) clypeasteroid-type plates, interconnected by skeletal protrusions leading to secure plate interlocking; (3) fibre-connected plates; (4) lightweight constructions; and (5) double-wall constructions as found in Clypeaster rosaceus [27,64].
Both the structural elements and the processes leading to specific echinoid morphologies have been investigated [64]. Plate distribution has been optimized using the echinoid skeleton as a role model [64,321], and high-performance structures, once identified and analysed, have been abstracted and transferred in various ways into demonstrators. For example, the ICD/ITKE Research Pavilion 2011 (figure 11(A)) [64,321,322] demonstrated the application of three of the structural principles cited above: (1) mosaic-arranged modules, where three modules meet in one point; (2) single hollow modules made from multiple elements, reflecting a lightweight construction; (3) modules interconnected by comb-joints. Similarly, a building construction in the form of the Landesgartenschau Exhibition Hall 2014 was realized (figure 11(B)) [64,321,[323][324][325][326]. A second ICD/ITKE Research Pavilion, developed in 2015 (figure 11(C)), focussed on (1) module arrangement; (2) comb-joint refinement; (3) material differentiation using textile connections; (4) lightweight construction; (5) a double-shelled structure; (6) an evolutionarily optimized growth algorithm based on the echinoid growth process by plate addition [64,321]. In 2018, the Rosenstein Timber Pavilion was exhibited, demonstrating further developed high-performance characteristics based on the echinoid skeleton and focussing on improved plate connections and optimized plate distribution [323]. These characteristics also inspired the BUGA Wood Pavilion (2019, ICD/ITKE University of Stuttgart) (figure 11(D)), which combined a new digital design approach for shape-finding structures with automated robotic manufacturing in wood, and received the German Design Award 2020 in the 'Excellent Architecture' category [327]. As a final example, the Rosenstein Pavilion was realized in 2019 as a functionally graded concrete shell structure inspired by the stereom of Heterocentrotus mammillatus spines. In particular, the spine structure was investigated as the main biological model for the design of a new functionally graded porosity of a concrete shell. The abstracted principle led to an improvement in the structural efficiency of the porous pavilion through a functional distribution of material in accordance with the dominant stress state, resulting in a structure 40% lighter [328].
Robotics
Various studies in the robotics sector have moved from the analysis of echinoid biology and structures to the development of new robotic designs [329]. As an example, a sea urchin-like robot was designed as a new exploration platform to improve access to unstructured environments and dangerous places [330]. Based on tube-foot and spine locomotion, a flexible spherical rolling robot was developed with retractable linear actuators and pendulum-driven mechanisms; both strategies were intended to overcome the locomotion difficulties of spherical robots on irregular surfaces [330]. Echinoderm tube feet have been a source of inspiration for a wide range of soft robotic actuators [331][332][333]. For example, studies based on tube-foot models have resulted in a magnetically controlled crawling mechanism [334] and a suction device optimized for grasping rough surfaces, with a rapid release mechanism [335].
An interdisciplinary team of engineers and marine biologists from the Jacobs School of Engineering (University of California San Diego, USA) used the Aristotle's lantern to develop a space-exploration robot with a new gripping device for sediment sample collection (figure 12) [336]. Starting from the analysis of the opening and closing mechanism of the lantern system and the bio-exploration of keeled and non-keeled teeth, a bioinspired model was built and tested via FEA, determining the efficiency of the lantern-like mechanism and confirming the structural importance of the keel in reinforcing the sea urchin's tooth [336].
Biomedical engineering
An optomechanical biopsy device for minimally invasive surgery was realized [337] adopting the lantern's ability to simultaneously scrape and engulf food in alternating and combined movements of opening/protrusion and closing/retraction following Scarpa's pioneering bionic model [338,339]. The prototype was implemented as an extrudable steel tube (0.15 mm thickness and 4.3 mm diameter) provided with a cutting device, i.e. a crown-shaped system characterized by triangular teeth, designed to perform an accurate biopsy in less than a millisecond (figure 13) [337].
In the biomimetic industrial design field, especially in the biomedical sector, a recent study on the mechanical design of the P. lividus test was carried out by an Italian team (Hybrid Design Lab, University of Campania 'Luigi Vanvitelli', and Department of Structures for Engineering and Architecture, University of Naples Federico II) [141,165,340]. The adaptive solutions identified in the test, as a modular system guaranteeing high integrity and structural stability under different stress conditions, were transferred into the design of two biomedical devices: an arm tutor and a cranial harmonizer. The shape and structure of the biological models were abstracted, applied according to principles of functional analogy, and reproduced in parametric 3D CAD models responding to specific innovation needs expressed by users and medical experts, namely: (1) lightness, ensured by a controlled porous arrangement mimicking the stereom structure; (2) resistance and stability, obtained by a discontinuous structure consisting of hexagonal modules connected by semi-flexible material, reflecting the modular plated structure of the P. lividus test and its low flexural stiffness at the sutures; (3) breathability, ensured by the high structural porosity and modular subdivision, which reduce the presence of closed spaces; (4) free customization for different therapeutic needs and personal preferences, provided by the great versatility of shapes, geometries, colours and styles obtainable by parametric design and digital manufacturing [340].
Pedicellariae-like devices have also been developed into new versatile tools in micromanipulation and micro-robotics fields for healthcare. Leigh and co-workers [341] designed bioinspired forceps using micro-stereolithography creating a pneumatic chamber that opens and closes the jaws by changing pressure using a syringe. The device can be used for functional grasping of microparticles and in addition can be activated hydraulically exhibiting a self-healing behaviour (isolating the damaged regions and maintaining the hydraulic mechanism efficiency) [341].
Material science
Echinoid spines revealed an important potential for innovative bio-inspired applications due to their sophisticated lightweight structure and material properties, in combination with strategic failure behaviour, high impact resistance and high-energy absorption [59, 61,65,133,258,342].
In particular, the echinoid microstructure has been studied in depth as a functional model for new prosthetic materials. In the 1970s, Weber et al [343] successfully replicated the skeletal structure of Heterocentrotus spines in epoxy resin and in sodium silicate. They recognized in the arrangement of the echinoid 3D microstructure important characteristics which, once transferred into new functional prosthetic materials, were able to provide structural strength and a suitable surface for tissue growth. In this regard, the stereom was identified as an optimized construction ensuring good permeability and functional porosity, as well as a periodic minimal surface structure in which the interface between calcite and the organic phase offers maximum contact for crystal growth [116]. Subsequent studies involved a direct conversion of echinoderm material into bioimplant materials [345,347]. In particular, through hydrothermal conversion, the spines of the echinoids H. mammillatus and Heterocentrotus trigonarius have been converted into Mg-substituted tricalcium phosphate for bone implants, maintaining the interconnected porous structure with good bioactivity and osteoconductivity. Currently, high-resolution and advanced techniques in tissue engineering are able to produce artificial scaffolds with controlled porosity at the micro- and nanoscale; thus, these bioinspired solutions can be transferred more effectively, creating new opportunities to realize innovative synthetic or hybrid materials [348,349].
In addition, different studies on the cidaroid Phyllacanthus imperialis and H. mammillatus spines were carried out, showing how the specific arrangement of porous material, associated with different densities and architectures, allows these species to have extremely light and resistant structures identified as ideal models for the realization of new aluminium ceramic and concrete materials [59, 65,67,139,261].
Lightweight structural ceramics have also been developed using echinoid skeletal plates as templates for the synthesis of effective porous materials. As an example, porous gold structures with nearly regular 15 μm channels were prepared by coating skeletal plates with gold, then dissolving the plates and leaving the original structural form [350,351]. These materials, with a pore dimension comparable to optical wavelengths, could be exploited for their optical properties or used as catalyst supports. These examples highlight how biological principles can be successfully abstracted and transferred into technical applications [308]. Moreover, from a reverse-biomimetic point of view, these analyses also provide more detailed insight into the morphology, function and integration of an organism in its ecosystem [46,58,269]. In particular, this allows a better understanding of an organism's adaptation to its environment, the evolutionary pathway of its structure, and its ecological and palaeontological implications [352,353]. For example, understanding the structural design, skeletal strengths and weaknesses of the echinoid test makes it possible to interpret taphonomic processes and the preservation potential of echinoid taxa [56]. Such knowledge can help ecologists and palaeontologists to better assess the effect of taphonomic filters and biases on echinoid communities, helping to determine, e.g., whether predatory drill holes or other biotic traces can promote the potential preservation of an echinoid [354] or lead to a loss of information.
Conclusion
Over time, the original constructional design of the echinoid endoskeleton has attracted the attention of researchers from different scientific fields due to its unique morphology, structure and material properties. These features currently reveal a great potential for biomimetic applications, thus motivating further investigation. This review presents a comprehensive synthesis of important studies on the mechanical design and principles of echinoid skeletal structures, emphasising the efficiency of the endoskeleton at different hierarchical levels. Each constructional element of the echinoid skeleton has demonstrated a major application as a biological role model: the test in building construction; Aristotle's lantern and the pedicellariae in grabbing devices; the tube feet in robotic locomotion systems; the spine stereom and biomineral composition in innovative materials. Contemporary technological advances in computational imaging, numerical simulation and fabrication have paved the way to a new era for the study of mechanical principles in organisms and their functional transfer [64, 295, 301-304, 340, 355]. The mechanical strategies and performances of the various components can be highlighted by means of advanced digital techniques, such as high-resolution x-ray microcomputed tomography, image analysis, 3D modelling and FEA. These technologies ensure high fidelity in the acquisition of biological models, great reliability of results and high reproducibility of complex geometries and structures through the new frontiers of digital manufacturing techniques [64, 164, 301-304, 319, 356, 357].
Consequently, a new virtual biology is emerging, capable of providing novel answers to questions concerning the morphology, function and evolution of living and fossil species [164,356,357]. In this regard, studies of mechanical design in organisms are only at an initial phase. Nonetheless, the present literature shows evidence of a significant increase in research [46,55,57,59,124,135,217,218] on future integration between cutting-edge computer science and biology. In conclusion, this review aims to illustrate how the constructional design of echinoids reflects animal adaptations to specific mechanical needs related to different environmental stresses and lifestyles, which, once abstracted and transferred into engineering and industrial design, provide functional solutions that improve structures, processes and human health.
Acknowledgments
The Authors thank Prof. Arch. Mario Buono (University of Campania Luigi Vanvitelli, Aversa, Italy), the Zoological Station Anton Dohrn (Naples, Italy) and the Hybrid Design Lab (Naples, Italy) for their support. They also thank John Slapcinsky (University of Florida, Florida Museum, Invertebrate Zoology) for access to the echinoid spine collection, and Ms. Laurajean Carbonaro-Tota for the English revision.
|
v3-fos-license
|
2017-05-22T22:34:53.111Z
|
2005-12-01T00:00:00.000
|
9927902
|
{
"extfieldsofstudy": [
"Mathematics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2297-8747/10/3/359/pdf?version=1459481525",
"pdf_hash": "dfd9b2e36a2a19161ad12fbba3a6b89a87735be1",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42337",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "dfd9b2e36a2a19161ad12fbba3a6b89a87735be1",
"year": 2005
}
|
pes2o/s2orc
|
AN INVESTIGATION OF THE GAMOW-TELLER 1 + STATES IN 90 Nb ISOTOPES
In this study, based on the Pyatov-Salamov method, the properties of the Gamow-Teller (GT) 1+ states in 90Nb have been investigated, and the agreement of the results calculated by this method for the energy of the Gamow-Teller Resonance (GTR) and the corresponding strengths of the 1+ excitations in 90Nb with the experimental values has been tested. As a result of the calculations, it was seen that the calculated values for the energy and strength of the GTR are in sufficient agreement with the experimental ones. Keywords: Gamow-Teller Resonance, Gamow-Teller strength.
INTRODUCTION
When the historical background of GTR studies is reviewed, it is necessary to go back about 40 years. The theoretical predictions of the existence of these resonances in 1963 and 1965 [1,2] played a pioneering role in initiating studies on this matter. Although detailed experimental investigation of the GTR had already started in the early 1970s [3][4][5], approximately 10 years after the theoretical predictions, the first experimental observation of the GTR was made in 1975 in the 90Zr(p,n)90Nb reaction at an incident proton energy of 35 MeV [6]. In 1980, the giant GTR was found to be preferentially excited in (p,n) reactions at high bombarding energies [7]. The (p,n) reaction has become a powerful tool in the study of the GTR at intermediate energies and has been widely used. Therefore, there have also been many attempts to measure the strength of the GT excitation in the 90Nb isotope via the (p,n) reaction at different energies [6][7][8][9][10][11][12][13][14][15]. The second alternative for measuring this strength experimentally is to use the (3He,t) reaction. Using this reaction, the GT strength in 90Nb has been investigated at various energies [16][17][18][19][20]. Although most charge-exchange studies have used the (p,n) and (3He,t) reactions, the (6Li,6He) reaction was found to be a suitable alternative probe for the investigation of spin-isospin modes and for the determination of the GT strength with high accuracy [21][22][23][24][25][26][27].
In this study, the properties of the GT 1+ states in 90Nb are investigated using the Pyatov-Salamov method. For this purpose, the GTR energy, the contribution of the GT strength to the Ikeda sum rule, and the differential cross sections for the 90Zr(p,n)90Nb and 90Zr(3He,t)90Nb reactions at energies of 120 and 450 MeV are calculated. The results of the calculations have been compared with the corresponding experimental data.
FORMALISM
Our formalism is based on the Pyatov-Salamov method, in which the effective interaction strength is determined self-consistently by relating it to the average field. Let us briefly mention the details of this method. As is known, the central term of the nuclear part of the shell-model single-particle Hamiltonian does not commute with the GT operator; in other words,

$$\left[H_{sp}-V_{c}-V_{ls},\;G_{\mu}^{\pm}\right]\neq 0,\qquad(1)$$

where H_sp is the single-particle Hamiltonian operator. V_c is the Coulomb potential, which acts only on protons (the isospin projection being +1/2 for neutrons and -1/2 for protons); its radial part is determined by ρ_p(r), the proton density distribution in the ground state.

The term V_ls is the spin-orbit part of the average field potential. All the notations in Eq. (5) are taken from Ref. [39], where V_0, R_0, ζ_ls, η and a are the parameters of the average field potential. The GT beta transition operators G_μ^± are built from σ_μ(i), the Pauli operator in the spherical basis (μ = 0, ±1), and t_-(i) (t_+(i)), the isospin lowering (raising) operator.

In the Pyatov-Salamov method, commutativity of the central term of the Hamiltonian with the GT operators is restored by adding the effective interaction h to the commutation relation in Eq. (1), i.e.

$$\left[H_{sp}-V_{c}-V_{ls}+h,\;G_{\mu}^{\pm}\right]=0,$$

where h is defined as in Refs. [37,40]. Using Eq. (8), the effective interaction parameter γ is obtained; the average is taken over the ground state of the parent nucleus. The total Hamiltonian operator can then be written as the sum of the single-particle part and h (Eq. (11)). The basic set of particle-hole operators for the GT 1+ states generated by the spin-dependent charge-exchange force h is built from the nucleon creation (annihilation) operators in states with angular momentum j_τ and projection m_τ (τ = n, p), and the average value of the commutator of these operators is determined over the same ground state. The effective interaction h defined in Eq. (8) can be written in terms of boson operators, from which a set of Hermitian operators is constructed. Without showing the details of the solution of Eq. (18), the resulting system of equations yields the eigenenergies ω_k of the Gamow-Teller 1+ states in the neighbouring odd-odd nucleus; Eq. (19) admits two different solutions, and the analytical expressions for the real amplitudes, which involve the occupation numbers of the neutron and proton states, follow, where the plus and minus signs correspond to the solutions of Eqs. (21a) and (21b), respectively. The amplitudes are related to each other through the Ikeda sum rule. The eigenstates of the total Hamiltonian in Eq. (11) with energies ω_k are the one-phonon excitations of the correlated phonon vacuum |0⟩ of the parent nucleus. The β± transition matrix elements from the 0+ initial even-even nuclear state to the one-phonon 1+ states in the odd-odd final nucleus (for β- transitions, (N,Z) ⇒ (N-1,Z+1)) define the GT beta strength function.
The zero-degree differential cross section for the excitation of the GT 1+ states can be written as [8,9,16]

$$\frac{d\sigma}{d\Omega}(0^{\circ})=\left(\frac{\mu}{\pi\hbar^{2}}\right)^{2}\frac{k_{f}}{k_{i}}\,N_{\sigma\tau}\,|J_{\sigma\tau}|^{2}\,B(\mathrm{GT}),$$

where J_στ is the volume integral of the central part of the effective spin-dependent nucleon-nucleon interaction; μ and k denote the reduced mass and the wave number in the center-of-mass system (k_i and k_f being its values in the initial and final channels), respectively. N_στ is the distortion factor, which may be approximated by the function exp(-xA^(1/3)) [9]; the value of x is taken from Ref. [16].
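As a rough numerical illustration of how these quantities combine, the short sketch below evaluates the distortion factor and the resulting zero-degree cross section, assuming the standard proportionality to B(GT) written above; every input value (x, the reduced mass, the wave-number ratio, J_στ and B(GT)) is a placeholder chosen for illustration and is not a parameter of this work.

```python
import math

HBARC = 197.327  # hbar*c in MeV*fm

def distortion_factor(x, A):
    """Distortion factor N_sigma_tau approximated as exp(-x * A**(1/3))."""
    return math.exp(-x * A ** (1.0 / 3.0))

def cross_section_0deg(mu_mev, kf_over_ki, n_dist, j_mev_fm3, b_gt):
    """Zero-degree cross section (mb/sr) from the proportionality to B(GT).

    mu_mev     : reduced mass of the projectile-target system (MeV/c^2)
    kf_over_ki : ratio of final to initial wave numbers
    n_dist     : distortion factor
    j_mev_fm3  : volume integral of the spin-dependent central interaction (MeV*fm^3)
    b_gt       : Gamow-Teller strength of the excited state
    """
    prefactor = (mu_mev / (math.pi * HBARC ** 2)) ** 2  # units: MeV^-2 * fm^-4
    sigma_fm2_per_sr = prefactor * kf_over_ki * n_dist * j_mev_fm3 ** 2 * b_gt
    return sigma_fm2_per_sr * 10.0  # 1 fm^2/sr = 10 mb/sr

# Placeholder inputs (illustrative only, not the values used in this paper):
n_dist = distortion_factor(x=0.4, A=90)
print(f"N = {n_dist:.3f}, d(sigma)/d(Omega)(0 deg) = "
      f"{cross_section_0deg(900.0, 1.0, n_dist, 160.0, 1.0):.2f} mb/sr")
```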
RESULTS AND DISCUSSIONS
In this section, we have calculated the GTR energy, the contribution of the GT beta transition strength to the Ikeda sum rule, and the differential cross sections for the 90Zr(3He,t)90Nb and 90Zr(p,n)90Nb reactions at energies of 450 MeV and 120 MeV, respectively. In the calculations, the Woods-Saxon potential with the Chepurnov parametrization [39] was used (V0 = 53.3 MeV, η = 0.63, a = 0.63 fm, ξls = 0.263 fm²). The basis used in our calculation contains all neutron-proton transitions that change the radial quantum number n by ∆n = 0, 1, 2, 3. The single-particle Ikeda sum rule is fulfilled with approximately 1% accuracy.
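The quoted sum-rule accuracy can be illustrated with a few lines of bookkeeping; the strength values below are invented placeholders (not the entries of Table I), and only the model-independent total 3(N - Z) = 30 for the 90Zr parent (N = 50, Z = 40) is taken from the physics.

```python
def ikeda_sum_rule_deviation(b_minus, b_plus, N, Z):
    """Relative deviation of S(beta-) - S(beta+) from the model-independent 3(N - Z)."""
    s_minus, s_plus = sum(b_minus), sum(b_plus)
    expected = 3 * (N - Z)
    return abs((s_minus - s_plus) - expected) / expected

# Placeholder GT strengths (illustrative, not the calculated values of Table I):
b_minus = [4.98, 24.68, 0.30]   # beta- strengths of the 1+ states
b_plus = [0.05]                 # beta+ strengths
print(f"{100 * ikeda_sum_rule_deviation(b_minus, b_plus, N=50, Z=40):.1f}% deviation")
```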
The calculation results are given in Table I. The first column of Table I presents the excitation energies of the GT 1+ states in 90Nb. The second column gives the GT strengths corresponding to these excitation energies. The last two columns show the calculated values of the differential cross sections for the 90Zr(3He,t)90Nb and 90Zr(p,n)90Nb reactions at energies of 450 MeV and 120 MeV, respectively. The excitation energies of the GT 1+ states in 90Nb can be categorized into three energy regions: a low-energy region (0 < ωGT < 5 MeV), the GTR region (5 < ωGT < 12 MeV), and a high-energy region (12 < ωGT < 26 MeV). In the low-energy region there exists only one state, at ωGT = 2.02 MeV, which exhausts 16.59% of the Ikeda sum rule. However, A. Krasznahorkay et al. [20] have found eight levels in the low-energy region in 90Nb. The reason for this difference can be attributed to the fact that the pairing correlations between nucleons have not been taken into account in our study.
In Table II, the experimental values for the GTR energy and the GT strengths are presented. As seen from this table, the experimental values of the GTR energy range from 8.5 MeV to 8.9 MeV [7,[20][21][22][23][24],27]. On the other hand, our calculation for this quantity gives a value of 7.61 MeV (see Table I). It can therefore be said that our calculated value for the GTR energy is not far from the experimental value, i.e. ~0.9-1.3 MeV lower than the experimental one. Moreover, the GTR state amounts to 82.26% of the Ikeda sum rule (see Table I). Compared to the values obtained for the GT strengths in different experimental studies [3,23,24,27] given in Table II, our value is within the range of the upper limits given in Refs. [23,24]. We hope that these differences between the calculated and experimental values for the GTR energy and the GT strengths will be partly removed by the consideration of the pairing correlations between nucleons. Finally, we have calculated the differential cross sections for the 90Zr(3He,t)90Nb and 90Zr(p,n)90Nb reactions at the energies of 450 MeV and 120 MeV; they have the values of 163.96 mb/sr and 17.78 mb/sr, respectively.
CONCLUSION
We have applied the Pyatov-Salamov method to the investigation of the GT 1+ states in 90Nb and tested the agreement of the quantities calculated by this method with the experimental values. For this purpose, the excitation energies and GT strengths of the 1+ states in 90Nb, as well as the differential cross sections for the 90Zr(3He,t)90Nb and 90Zr(p,n)90Nb reactions at energies of 450 MeV and 120 MeV, have been calculated. As a result of our calculations, it has been seen that the calculated value for the GTR energy is sufficiently close to the experimental value, i.e. ~0.9-1.3 MeV lower than the experimental one, and our value for the contribution of the GTR to the Ikeda sum rule is within the range of the upper limits given in Refs. [23,24]. We hope that these differences between the calculated and experimental values for the GTR energy and the GT strengths will be partly removed by the consideration of the pairing correlations between nucleons. As the next step, the pairing correlations between nucleons will be included in the investigation of the GT 1+ states in 90Nb in our next study.
Table I :
Calculation results for the GT strengths of the 1+ states in 90Nb and the differential cross sections for the 90Zr(3He,t)90Nb and 90Zr(p,n)90Nb reactions at energies of 450 MeV and 120 MeV, respectively.
Table II :
The experimental values for the GTR energy and the GT strengths
|
v3-fos-license
|
2024-01-12T06:17:38.929Z
|
2024-01-10T00:00:00.000
|
266930926
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "e68e01f6627241a73bc57f85b81d1a20b213cc8c",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42339",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "7a370293e50c6c2d3bddf162b211905679a36eac",
"year": 2024
}
|
pes2o/s2orc
|
Antibacterial properties and urease suppression ability of Lactobacillus inhibit the development of infectious urinary stones caused by Proteus mirabilis
Infectious urolithiasis is a type of urolithiasis, that is caused by infections of the urinary tract by bacteria producing urease such as Proteus mirabilis. Lactobacillus spp. have an antagonistic effect against many pathogens by secreting molecules, including organic acids. The aim of the study was to determine the impact of Lactobacillus strains isolated from human urine on crystallization of urine components caused by P. mirabilis by measuring bacterial viability (CFU/mL), pH, ammonia release, concentration of crystallized salts and by observing crystals by phase contrast microscopy. Moreover, the effect of lactic acid on the activity of urease was examined by the kinetic method and in silico study. In the presence of selected Lactobacillus strains, the crystallization process was inhibited. The results indicate that one of the mechanisms of this action was the antibacterial effect of Lactobacillus, especially in the presence of L. gasseri, where ten times less P. mirabilis bacteria was observed, compared to the control. It was also demonstrated that lactic acid inhibited urease activity by a competitive mechanism and had a higher binding affinity to the enzyme than urea. These results demonstrate that Lactobacillus and lactic acid have a great impact on the urinary stones development, which in the future may help to support the treatment of this health problem.
Dominika Szczerbiec 1, Katarzyna Bednarska-Szczepaniak 2 & Agnieszka Torzewska 1*
Urolithiasis is one of the most common diseases of the urinary system, with an incidence of 1-13% depending on the geographical region. It is reported that the number of cases and deaths from this highly widespread disease is constantly increasing, while the age of people suffering from urolithiasis is decreasing 1,2. Many factors may be responsible for this trend, including diet, climate, physical activity or obesity 3. In general, based on their chemical composition, urinary stones can be divided into calcium oxalate stones (which are the most common type), calcium phosphate stones, uric acid stones, cystine stones and struvite stones. The last type includes stones which are formed as a result of infection in the urinary tract 4. According to the literature data, infectious stones account for up to 15% of all urinary stones [5][6][7] and they are caused by the activity of bacterial urease, a nickel-dependent metalloenzyme 8. Microorganisms responsible for this process belong mainly to the genus Proteus. They are isolated from up to 70% of infectious stones 9 and all Proteus isolates from urinary stones produce urease 7,10. Urease catalyzes the hydrolysis of urea into carbon dioxide (CO2) and ammonia (NH3), which increases urinary pH. The concentrations of ammonium, bicarbonate and phosphate ions increase, which, in the presence of magnesium and calcium ions, leads to the precipitation of carbonate apatite (Ca10(PO4)6CO3) and struvite (MgNH4PO4·6H2O) 5. Precipitation of the mineral components of urine, caused by their excessive concentration in relation to their solubility, leads to crystallization (struvite and apatite crystals) - the initial stage of urinary stone formation. During the next phase, the struvite and apatite crystals aggregate, which results in the formation of a urinary stone and its retention in the urinary tract 11. Treatment of infectious urolithiasis is a long-term and complicated process. It includes antibiotic treatment to eliminate the pathogen, stone removal using shock wave lithotripsy (SWL) or percutaneous nephrolithotomy (PCNL), and recurrence prevention 10,12. Antibiotic therapy is often challenging. An antibiotic is not able to penetrate into the stone, where microorganisms can also live, which allows them to survive and leads to the formation of urinary stones de novo 13. There are several methods
Bacterial strains
P. mirabilis strains (KP; 5628) were isolated at the Department of Microbiology from the urine of patients of the Children's Memorial Health Institute in Warsaw, Poland, who had been diagnosed with infectious urolithiasis. The other two P. mirabilis strains (608/221; K8/MC) were obtained from urinary stones and provided by the Provincial Specialist Hospital M. Pirogow in Lodz. The strains were identified using the API 20E test (Biomerieux, Marcy-l'Etoile, France) and cultured in TSB (tryptic soy broth, BTL, Warsaw, Poland) for 24 h at 37 °C.
Lactobacillus strains were isolated from the urinary tract of healthy people and deposited in the bacterial strain collection at the Department of Biology of Bacteria, University of Lodz. The method of isolation of these strains and their characteristics have been described in our previous study 14. Briefly, the strains were obtained from human urine of both men and women who had not been treated with antibiotics or probiotics in the previous 3 months. Urine samples were obtained from all participants and/or legal guardians with their informed consent, and the research was carried out in accordance with relevant guidelines and regulations. All experimental protocols were approved by the University of Lodz Research Ethics Committee (approval number 4/(I)/KBBN-UŁ/II/2020). Strains were identified by MALDI-TOF mass spectrometry on a Microflex LT instrument (Bruker, Billerica, MA, USA). Lactobacillus spp. were cultured on APT agar (BD Difco, Franklin Lakes, NJ, USA) and incubated in 5% CO2 at 37 °C for 48 h.
Synthetic urine
Synthetic urine, whose composition chemically corresponds to the mean concentrations found in normal human urine over a 24-h period, was prepared as described by Griffith et al. 22 (the listed components include Cl, 1.0; urea, 25.0; creatinine, 1.1; and tryptic soy broth, 10.0; Sigma, St. Louis, MO, USA). The solution was prepared immediately before the experiment and sterilized by passing through a 0.2 µm pore-size filter (Sartorius, Goettingen, Germany).
Crystallization experiment in mixed cultures
The crystallization assay was performed according to the Torzewska et al. method 23 with some modifications. To 20 mL of synthetic urine, 20 µL of bacterial cultures were added at a 1:5 ratio of P. mirabilis (KP, K8/MC, 5628, 608/221) to Lactobacillus (L. crispatus 1.2, L. crispatus 4, L. jensenii 22.2 and L. gasseri 35.3) (it was determined that at this ratio crystallization was most intensively inhibited by Lactobacillus; data not shown). Lactobacillus and Proteus strains were cultured as described in section "Bacterial strains".
First, the induction time of crystallization, i.e., the time between the creation of supersaturation and the appearance of crystals, was assessed. The absorbance of the above samples was measured every half hour at a wavelength of 600 nm using a spectrophotometer (Ultraspec 2000, Pharmacia Biotech, USA). This allowed selection of the crucial hours at which the intensity of crystallization was determined.
At 0, 3, 6, 8 and 24 h, the pH, the number of P. mirabilis, the amount of released ammonia, and the intensity of crystallization were assessed. Pure cultures of P. mirabilis in synthetic urine were the controls in this experiment. Confirmation that crystallization of struvite and apatite occurs under such experimental conditions was provided in our previous studies 24,25. The intensity of crystallization was assessed on the basis of quantitative and qualitative determinations. Quantitative research included the assessment of the Ca2+ and Mg2+ ion concentrations by atomic absorption spectroscopy (SpectAA-300 Varian, Palo Alto, California). For these analyses, 1 mL of each sample was collected, centrifuged (8000 rcf, 5 min), and treated for mineralization with 0.5 mL of 65% HNO3. Struvite and apatite crystals were observed using a phase-contrast microscope (Nikon Eclipse TE-2000-S). The pH was determined using a pH meter (Elmetron Cp-215, Zabrze, Poland) and ammonia release was measured by the phenol-hypochlorite colorimetric method 26. The results were expressed as % inhibition of ammonia release, where 100% was the concentration of ammonia in the control (synthetic urine with P. mirabilis). The number of P. mirabilis was determined by spreading 100 µL of serially diluted suspensions on TSB agar with 0.1% phenol. Grown colonies were counted after 24 h of incubation at 37 °C to determine the number of viable bacteria (CFU/mL).
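For clarity, the arithmetic behind the viable counts and the ammonia-inhibition percentages can be sketched as follows; the plated volume (100 µL) and the control-relative definition of % inhibition follow the description above, while the colony count, dilution and concentrations are hypothetical.

```python
def cfu_per_ml(colony_count, dilution_factor, plated_volume_ml=0.1):
    """Convert a plate count into CFU/mL for a given serial dilution (100 uL plated)."""
    return colony_count * dilution_factor / plated_volume_ml

def ammonia_inhibition_percent(sample_conc, control_conc):
    """% inhibition of ammonia release, with the pure P. mirabilis culture as 100%."""
    return 100.0 * (1.0 - sample_conc / control_conc)

# Hypothetical example: 215 colonies counted on the plate of the 10^-5 dilution,
# and an ammonia concentration roughly half that of the control.
print(f"{cfu_per_ml(215, 1e5):.2e} CFU/mL")
print(f"{ammonia_inhibition_percent(5.5, 10.0):.1f}% inhibition of ammonia release")
```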
Determination of crystallization and urease activity in the presence of Lactobacillus and lactic acid
The crystallization micromethod allowed examination of the impact of Lactobacillus strains, and of the lactic acid produced by them, on the crystallization process and on the activity of the urease enzyme. The degree of crystallization was assessed using phenol red as a pH indicator, because an increase in pH indicates the start of the crystallization process. First, the influence of the four tested Lactobacillus strains (L. crispatus 1.2, L. crispatus 4, L. jensenii 22.2 and L. gasseri 35.3) on crystallization caused by P. mirabilis strains was assessed according to the Torzewska et al. method 23. 150 µL of synthetic urine with 0.001% phenol red was added to 96-well plates with 2 µL of P. mirabilis (2 × 10⁸ CFU/mL) and Lactobacillus cultures in the proportion 1:5 (P. mirabilis:Lactobacillus). Lactobacillus and Proteus strains were cultured as described in section "Bacterial strains". The negative control was 150 µL of synthetic urine with 0.001% phenol red, and the positive control was pure P. mirabilis cultures. All wells were covered with mineral oil to prevent the release of ammonia and an increase in pH in the adjacent wells, and incubated for 24 h at 37 °C. The absorbance was measured using a Multiskan Ex microplate reader (Labsystems, Helsinki, Finland) at a wavelength of 550 nm. The assay of the impact of Lactobacillus strains on the activity of urease from Jack bean (Serva, Heidelberg, Germany) was performed similarly. 190 µL of synthetic urine with 0.001% phenol red was added to 96-well plates with 1 µL of the urease enzyme (at a final concentration of 0.105 U/mg) and 10 µL of Lactobacillus cultures. The positive control was synthetic urine with urease and the negative control was synthetic urine. The plate was incubated at 37 °C, and every hour, up to 6 h of incubation, the absorbance was measured at a wavelength of 550 nm. Ammonia release inhibition was measured as described in section "Crystallization experiment in mixed cultures". The effect of different lactic acid (Sigma, St. Louis, MO, USA) concentrations (1.4 mM, 2.8 mM, 5.5 mM, 11 mM, 22 mM) on urease was assessed using the same method. Briefly, 190 µL of synthetic urine with 0.001% phenol red was added to 96-well plates with 1 µL of urease and 10 µL of dilutions of lactic acid. Positive and negative controls were the same as in the assay described above. The plate was incubated at 37 °C and the absorbance was measured at a wavelength of 550 nm.
Urease inhibition assay
The assay was performed to determine the mechanism of urease inhibition by lactic acid. The method was based on the research conducted by Rashid et al., Tan et al. and Du et al. [27][28][29], with our modifications. The assay was carried out in a phosphate buffer containing 5.9 mM EDTA and 25 mM HEPES (pH 8.0) in 96-well plates. The enzymatic mixture contained: 150 μL of urea solutions (Chempur, Piekary Śląskie, Poland) at different concentrations (5 mM, 10 mM, 15 mM, 30 mM, 45 mM), 40 µL of the Jack bean urease enzyme (Serva, Heidelberg, Germany) at a final concentration of 0.105 U/mg, and 10 µL of lactic acid (final concentrations 11 mM, 38 mM, 55 mM, dissolved in distilled water). Distilled water was added to control samples instead of lactic acid. The plate was incubated at 37 °C, and at the time points 2, 5, 10, 30, 60 and 120 min the amount of released ammonia was determined using the phenol-hypochlorite colorimetric method 26. The absorbance was measured after 30 min using a Multiskan Ex microplate reader (Labsystems, Helsinki, Finland) at a wavelength of 620 nm. The IC50 of lactic acid was determined for a urease concentration of 0.105 U/mg and 10 mM urea, in a lactic acid concentration range from 110 to 1 mM. The IC50 value of lactic acid was calculated from the dose-response curve using GraphPad Prism 8.0 (GraphPad Prism Software Inc., San Diego, CA, USA).
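The IC50 estimation itself was done in GraphPad Prism; a minimal sketch of an equivalent dose-response fit, assuming a four-parameter logistic model and using made-up activity data (not the measured values), could look as follows.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve (residual activity vs. inhibitor)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Synthetic urease-activity data (% of uninhibited control) vs. lactic acid (mM):
conc = np.array([1, 5, 11, 22, 38, 55, 80, 110], dtype=float)
activity = np.array([98, 90, 76, 62, 50, 37, 26, 18], dtype=float)

popt, _ = curve_fit(four_pl, conc, activity, p0=[0.0, 100.0, 30.0, 1.0])
print(f"fitted IC50 = {popt[2]:.1f} mM")
```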
Homology modeling of Proteus mirabilis urease
The sequence of the alpha subunit of P. mirabilis urease was obtained from the UniProtKB database 30 (URE1_PROMH, P17086). Top-ranked templates were selected in the SwissModel homology-modeling server 31 and obtained from the Protein Data Bank (PDB). The crystallographic structure of Jack bean (Canavalia ensiformis) urease was identified as the best-matching template based on HHblits 32 (PDB 4gy7.1.A; resolution 1.49 Å, sequence identity 60.6%, sequence similarity 0.48). In the SwissModel homology modeling, the model of P. mirabilis urease was built by the template alignment method using ProMod3 Version 3.2.1 33. In ProMod3, biologically relevant non-covalently bound ligands are considered in the model if they have at least three coordinating residues in the protein, those residues are conserved in the target-template alignment, and the resulting atomic interactions in the model are within the expected ranges. In our model, the interaction of amino acids at the catalytic site, including His residues, with the Ni ion ligands was found to be conserved between the target and the template. Therefore, the Ni ions in the template structure were identified as relevant ligands and transferred by homology to the model. Finally, the resulting model's geometry was regularized with a force field using ProMod3 tools.
The global and per-residue model quality was assessed using the QMEAN scoring function 34. The global model quality estimate (GMQE) was 0.88, the global QMEANDisCo score was 0.84 ± 0.05, and the QMEAN Z-score was -0.97. Predicted local similarities of the model and target histidine residues at the catalytic site (QMEANDisCo Local) were 0.93 for His346B, 0.93 for His272B, 0.93 for His134B, and 0.86 for His136B. The final model structure was pre-processed and optimized in Maestro Schrodinger 11.7 (Schrödinger, Inc., New York, NY, 2013). The reference structure of the Jack bean urease (.pdb file 4gy7.1) was also prepared in the Maestro Schrodinger 11.7 software for docking comparison.
Statistical analysis
All experiments were carried out at least in triplicate. Statistical analyses were based on the Kruskal-Wallis test and the Mann-Whitney U test, performed using Statistica version 13 (data analysis software system; TIBCO Software Inc., 2017; http://statistica.io). The results were considered statistically significant at p < 0.05.
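A minimal sketch of the corresponding non-parametric tests, using SciPy and purely hypothetical triplicate values, is shown below; it is not the authors' analysis script.

```python
from scipy.stats import kruskal, mannwhitneyu

# Hypothetical Ca2+ concentrations (mM) after 6 h for three culture conditions:
control = [2.1, 2.3, 2.0]
with_gasseri = [0.20, 0.30, 0.25]
with_jensenii = [1.8, 1.9, 2.0]

# Kruskal-Wallis across all groups, then a pairwise Mann-Whitney U comparison.
h_stat, p_kw = kruskal(control, with_gasseri, with_jensenii)
u_stat, p_mw = mannwhitneyu(control, with_gasseri, alternative="two-sided")
print(f"Kruskal-Wallis p = {p_kw:.3f}; Mann-Whitney U (control vs L. gasseri) p = {p_mw:.3f}")
```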
The effect of Lactobacillus strains on the intensity of crystallization caused by P. mirabilis strains in vitro
Changes in urinary pH and in the degree of ammonia release

At the beginning of the experiment, urinary pH in pure and mixed samples averaged 5.7 (data not shown). During the incubation, the pH in those samples constantly increased, but with different patterns. At every hour of the experiment (except 24 h), the lowest pH (compared to the controls) was observed in the sample with L. gasseri 35.3. Significantly lower pH values in the tested samples were noticeable mainly in the sixth hour of incubation, whereas after 24 h the pH values were close to each other and averaged 9.0. The level of released ammonia was closely related to the pH level. The ammonia released by the action of the urease enzyme increased the urinary pH. In Fig. 1 we can observe that in the presence of L. gasseri 35.3 the ammonia release was lower compared to the control. In particular, after 6 h of incubation, it was inhibited by from 10% for P. mirabilis K8/MC to as much as 45% for P. mirabilis 5628.
Viability of P. mirabilis in pure and mixed cultures
At 0, 3, 6, 8, and 24 h of the assay, the number of P. mirabilis bacteria was determined. During the first three hours of the experiment, no significant changes in Proteus viability were observed in mixed cultures compared to the control. As shown in Fig. 2, after 6 h of incubation the number of P. mirabilis bacteria in control samples reached a value of 2-4 × 10⁸ CFU/mL. However, in some tested samples with Lactobacillus strains, growth inhibition of the P. mirabilis strains was observed. All tested lactobacilli significantly inhibited P. mirabilis 5628 growth; in particular, in the presence of L. gasseri 35.3 ten times fewer P. mirabilis 5628 bacteria were observed compared to the control (**p < 0.01). The same effect was noted in the sample with P. mirabilis 608/221. Inhibition of Proteus viability after 6 h was not observed in the remaining samples (P. mirabilis KP and K8/MC). It is worth mentioning that in the sample with P. mirabilis 5628, the inhibition of viability by the L. gasseri strain was still maintained after 8 h of incubation, and the same effect was noted in co-cultures of P. mirabilis KP and 608/221 with L. crispatus or L. gasseri. After 24 h, the viability of the P. mirabilis strains decreased in all samples and averaged 5 × 10⁶ CFU/mL. Only in co-culture with L. gasseri was it observed that some of the tested P. mirabilis strains showed greater viability than in the control at this time point.
Intensity of crystallization
The intensity of crystallization in the tested samples was assessed quantitatively and qualitatively. Quantitative analyses showed that in the first three hours no significant changes in the concentration of the tested ions were observed. Nevertheless, after 6 h of incubation, the most significant differences were observed between the amounts of Mg2+ and Ca2+ ions in the tested samples compared to the controls (pure P. mirabilis cultures in synthetic urine) (Fig. 3). A significant decrease in the amount of these two ions was noted especially in the presence of the L. gasseri strain, in contrast to L. jensenii, which in most cases did not exhibit such properties. This trend was greatest in the co-culture of P. mirabilis 5628 and L. gasseri, where the calcium concentration was up to 10 times lower than in the control, and the amount of magnesium was up to 5 times lower after 6 h. In the presence of L. gasseri, a lower content of the tested ions was still observed after eight hours of incubation in co-culture with P. mirabilis KP and 5628. The low content of Mg2+ was maintained in those samples after 24 h. It is worth noting that from the eighth hour the proportions of calcium and magnesium in the samples started changing, and higher concentrations of Mg2+, the main ion of struvite crystals, were observed. Observation with a phase-contrast microscope enabled a qualitative assessment of the formation of carbonate apatite and struvite crystals (Fig. 4). After 6 h, in all tested samples the crystals formed in the crystallization process were smaller compared to the control, or even absent, mainly in the samples with L. gasseri 35.3. The struvite crystals showed a "coffin-lid"-shaped appearance. The largest struvite crystals were observed in the control samples (up to 90 µm in width). After 8 and 24 h of incubation, crystals were found in all samples, but they were still smaller compared to the controls.
The impact of Lactobacillus strains and lactic acid on crystallization and urease activity
As shown in Fig. 5A, Lactobacillus strains inhibited the process of crystallization caused by P. mirabilis strains. However, the degree of inhibition varied in the presence of different strains. L. jensenii 22.2 exhibited the lowest inhibition ability, while the others were found to be effective in suppressing the crystallization caused by all P. mirabilis strains. On the other hand, the Proteus strains also showed different levels of effectiveness in this process. We distinguished strains KP and K8/MC as those exhibiting the strongest ability to crystallize urine components, as opposed to strains 5628 and 608/221. The assay with the Jack bean urease enzyme provided crucial data about the influence of Lactobacillus strains on the activity of this enzyme. Figure 5B (lines) shows that all Lactobacillus strains inhibited the enzyme activity, but with different intensity. L. gasseri consistently caused the highest level of crystallization inhibition compared to the control, and this strain also inhibited the release of ammonia most intensively (bars). One of the main extracellular substances with antagonistic activity secreted by Lactobacillus spp. is lactic acid. Therefore, we also investigated the impact of this acid on urease activity. The results are shown in Fig. 5C: different concentrations of lactic acid inhibited crystallization, and it was observed that with the increase in acid concentration the inhibition process also intensified. Lactic acid at the concentration of 22 mM suppressed crystallization most effectively, and in its presence no increase in crystal formation was observed even after 24 h. However, the lowest concentrations (2.8 mM, 1.4 mM) inhibited the process only up to about 4 h of the experiment. The obtained results show that lactic acid produced by the tested Lactobacillus strains is able to inhibit the crystallization process and that the level of inhibition is related to the concentration of the acid.
Mechanism of urease inhibition by lactic acid
The IC50 value of lactic acid was 38 mM ± 0.45 mM. Therefore, we used this concentration as well as the IC25 (11 mM) and IC75 (55 mM) in the urease inhibition assay. Lactic acid inhibited urease activity in a dose-dependent manner. The Lineweaver-Burk plot showed that lactic acid interacted with the catalytic site of the enzyme, directly competing with the substrate (Fig. 6A). The type of inhibition was indicated by changes in the enzyme kinetic parameters: the Michaelis-Menten constant (Km) increased with increasing inhibitor concentration, while the maximum reaction rate (Vmax) remained essentially unchanged (Fig. 6A). Specifically, Km without lactic acid was 5.06 ± 1.2 mM and Vmax 0.45 ± 0.06 mM/min; Km was 6.6 ± 1.18 mM and Vmax 0.45 ± 0.07 mM/min for 11 mM lactic acid; Km 8.9 ± 1.3 mM and Vmax 0.44 ± 0.07 mM/min for 38 mM lactic acid; and Km 10.88 ± 0.8 mM and Vmax 0.45 ± 0.06 mM/min for 55 mM lactic acid. The increase in the Km value indicated a decreased affinity of urea for the active site of urease in the presence of lactic acid. However, as shown in Fig. 6B, increased concentrations of the substrate (urea) neutralized the effect of the inhibitor (lactic acid), which clearly indicates a mechanism of competitive inhibition.
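An inhibition constant Ki is not reported in the study, but under the competitive-inhibition model (Km,app = Km(1 + [I]/Ki)) one could back it out from the apparent Km values listed above; the short sketch below does exactly that and is only an illustrative calculation, not part of the original analysis.

```python
def ki_from_apparent_km(km0, km_app, inhibitor_conc):
    """Competitive inhibition: Km_app = Km * (1 + [I]/Ki)  =>  Ki = [I] / (Km_app/Km - 1)."""
    return inhibitor_conc / (km_app / km0 - 1.0)

km0 = 5.06  # mM, Km without lactic acid (value reported above)
for la_mM, km_app in [(11, 6.6), (38, 8.9), (55, 10.88)]:
    ki = ki_from_apparent_km(km0, km_app, la_mM)
    print(f"[lactic acid] = {la_mM} mM  ->  Ki estimate ~ {ki:.0f} mM")
```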
Docking study
The structure of P. mirabilis urease was generated by the homology modeling method using the crystallographic structure of the Jack bean urease as a template (PDB 4gy7), and lactic acid (LA) was docked as a ligand, as described in the Materials and Methods. The model structure of the three subunits of P. mirabilis urease as a homo-trimer, binding six nickel ions in the active centers, is presented in Fig. 7. The two Ni ions were complexed by the linear and trigonal bonds of His residues (His246 and His272, and His134, His136) and Asp360 in the active center, which are stabilized by hydrophobic interactions and hydrogen bonds of N6-carboxylated Lys (KCX) (Table S1 and Fig. S2, Supplementary Material). Flexible docking showed the best docking position of LA in the catalytic center of P. mirabilis urease to be in close proximity to the two Ni ions and in the vicinity of the histidine residues involved in forming salt bridges between the LA carboxylate and the imidazole rings of His residues (Fig. 8A-C), or hydrogen bonds, similar to the Jack bean urease (Table S2). Two of the three residues forming hydrogen bonds with LA also participated in the binding of urea (His219 and Gly277, Table S2 and Table S3). His136, involved in a hydrogen bond with urea (Table S3), formed a salt bridge with LA (Table S3 and Fig. 8B, C). The estimated free energy of binding (ΔG, kcal/mol) was taken as the docking score (AutoDock 4.2.6). Comparison of the docking of lactic acid (−8.22 kcal/mol) versus urea (−5.40 kcal/mol) revealed a remarkably better interaction of LA with the active center of P. mirabilis urease, as in the case of the Jack bean urease.
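As a back-of-the-envelope reading of these scores, the difference in docking energies can be converted into a ratio of predicted binding constants; the temperature chosen and the assumption that the docking scores approximate true binding free energies are simplifications made only for illustration.

```python
import math

R = 1.987e-3  # gas constant in kcal/(mol*K)
T = 310.0     # K, approximately body temperature (an assumption for illustration)

def relative_affinity(dg_ligand, dg_reference, temperature=T):
    """Ratio of predicted binding constants from two docking scores: K1/K2 = exp(-(dG1 - dG2)/RT)."""
    return math.exp(-(dg_ligand - dg_reference) / (R * temperature))

# Docking scores quoted above: lactic acid -8.22 kcal/mol vs. urea -5.40 kcal/mol
ratio = relative_affinity(-8.22, -5.40)
print(f"lactic acid is predicted to bind ~{ratio:.0f}x more tightly than urea (by this rough estimate)")
```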
Discussion
The urinary tracts of healthy women and men are inhabited by a diverse microbiota in which the dominant species are microorganisms belonging to the genus Lactobacillus. They play a significant role in maintaining the homeostasis of this environment, and its disturbance could promote the development of many uropathogens 15,39. Many recent studies focus on the antibacterial and antibiofilm properties of the natural microbiota of the urinary tract against the most common UTI pathogens, such as Escherichia coli, which causes approximately 80% of UTIs 40,41, Proteus mirabilis 14,42 or Klebsiella pneumoniae 43,44. Additionally, many scientific works concentrate on the practical use of selected Lactobacillus strains in UTI treatment. Clinical testing such as that performed by Stapleton et al. showed that a probiotic containing Lactobacillus strains reduces the rate of UTI recurrence by as much as half 45.
While most previous studies focused on Lactobacillus isolated from food, the genital tract or the gut and their influence on UTIs and comorbidities [45][46][47], our findings concern strains isolated from human urine and their impact on the development of urinary stones caused by bacteria. We designed our work so as to imitate the environment of the human urinary tract in order to understand how these strains influence each other, which makes this study unique. Infectious urolithiasis is one of the most common urological diseases, with an increasing morbidity rate and ineffective therapy. Due to the difficulties in treatment and the lack of effective preventive measures for this disease, researchers have recently focused on finding alternative treatment options.
Research conducted by Smanthong et al. 48 on the Sida acuta Burm.f. ethanolic leaf extract indicated that this extract has anti-struvite-crystal properties, which makes it a potential new treatment agent. Other scientific works have focused on the possibility of using compounds such as trisodium citrate 49 or herbal extracts 50. These facts encouraged us to investigate the impact of the natural microbiota of the urinary tract on the development of urinary stones caused by P. mirabilis. On the basis of the results of the present study, it can be concluded that some of the tested Lactobacillus strains are able to inhibit the crystallization of urine components. In the course of the experiments, the parameters that testify to the intensity of crystallization, such as pH, ammonia release, concentrations of Mg2+ and Ca2+ ions, and P. mirabilis viability, were assessed. The crystals were observed using phase-contrast microscopy. Among all the tested strains, L. gasseri stood out the most, showing the greatest inhibitory properties, in opposition to the strain L. jensenii. We observed that the pH, the release of ammonia and the concentrations of Ca2+ and Mg2+ ions were significantly lower, up to 8 h of incubation, in mixed samples with L. gasseri compared to the control samples. In our previous study, we found that Lactobacillus strains had antibacterial properties and the ability to inhibit the growth of Proteus strains at a level of up to 100%, mainly through secreted organic acids. L. gasseri showed the greatest antibacterial activity (72-97%) against many P. mirabilis strains (including those tested in this paper), while L. jensenii 22.2 exhibited the weakest properties 14. Therefore, in the current study, the effect of these strains on the viability of Proteus bacteria in the synthetic urine environment was also tested. It was assumed that this may be one of the mechanisms used by Lactobacillus strains to inhibit the crystallization process. Indeed, inhibition of P. mirabilis viability was observed in some of the tested samples. The strongest properties were noted in the samples with L. gasseri and the P. mirabilis 5628 and 608/221 strains. In contrast to these results, the studies conducted by Torzewska et al. 47 on the effect of Lactobacillus strains isolated from food on crystallization caused by P. mirabilis demonstrated that the tested P. mirabilis strains showed better viability in co-culture with Lactobacillus, which intensified the process of crystallization. This suggests that Lactobacillus strains isolated from the urinary tract may have some specific properties which they developed in this particular environment.
The above results, indicating that some of the tested Lactobacillus strains are able to inhibit the crystallization process, were confirmed using a phase-contrast microscope. After 6 h of incubation, a lower amount or a complete lack of struvite and apatite crystals was observed in the samples with the L. gasseri and L. crispatus strains. Struvite crystals are formed at pH above 7.5 and can take on various morphologies depending on many factors, including pH. There are coffin-shaped crystals (like those observed in our research) and X-shaped dendrite crystals, formed due to a rapid increase in the pH value 51.
Our previous work 14 concerned organic acids secreted by lactic acid bacteria and their antibacterial properties. We found that all of the Lactobacillus strains used in that study produced lactic and succinic acid; however, L. jensenii was distinguished by producing the highest concentration of succinic acid, and L. crispatus 1.2 and L. crispatus 4 the highest concentrations of lactic acid. Because for some P. mirabilis strains no inhibition of viability was observed (although the suppression of crystallization occurred), the influence of lactic acid on urease activity was investigated.
Lactobacillus strains inhibited the crystallization process even in the presence of urease without P. mirabilis (Fig. 5B), and this phenomenon was also observed with different concentrations of lactic acid alone (Fig. 5C). In order to confirm the hypothesis that lactic acid inhibits the crystallization process through a direct interaction with urease, we performed an enzyme activity inhibition test. So far, there are no literature data describing the mechanism of inhibition of urease activity by lactic acid. However, it has been shown that organic acids can bind many trace metals, including nickel, which is part of the active site of urease 52. P. mirabilis urease is a metalloenzyme built of three subunits with a total mass of 200-700 kDa, containing many cysteine and histidine residues and, as already mentioned, nickel atoms in the catalytic center. Many other microorganisms produce this enzyme, and it is an important factor in the pathogenicity of these bacteria 53. In the course of our experiments, to determine the inhibitory activity of lactic acid, urease from Jack bean (Canavalia ensiformis) was used as a reference enzyme. The kinetic parameters of urease activity, Km and Vmax, revealed that lactic acid acts as a competitive inhibitor which binds to the active site of the enzyme. The docking experiments confirmed that lactic acid binds to the catalytic center of the enzyme as competitive inhibitors do, with remarkably higher affinity than urea (scores −8 kcal/mol vs. −5 kcal/mol), which confirmed its inhibitory potency as a good ligand. In this context, lactic acid can be considered a modulator of urease activity with pharmacological potential. Interestingly, the interaction between lactic acid and urease may involve the same histidine residues that bind urea, but in addition to hydrogen bonds (as in the case of urea), salt bridges may also be formed, stabilizing the position of lactic acid in the catalytic center of the enzyme among the histidine residues. In turn, as demonstrated by our in vitro study, the competing effect of high urea concentrations on lactic acid bound in the enzymatic center may lead to displacing the lactic acid molecule from its binding site in urease. Based on in silico studies, we suggest that these long-range salt bridges may be disrupted by high concentrations of urea.
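For reference, the textbook competitive-inhibition form of the Michaelis–Menten rate law is consistent with the behaviour described above: the apparent Km grows with inhibitor concentration while Vmax is unchanged. The expressions below are the generic forms, not kinetic constants fitted in this study.

```latex
% Competitive inhibition: the inhibitor I (here, lactic acid) competes with
% the substrate S (urea) for the active site; V_max is unchanged, while the
% apparent K_m grows by the factor (1 + [I]/K_i).
\[
  v \;=\; \frac{V_{\max}\,[S]}{K_m\!\left(1 + \frac{[I]}{K_i}\right) + [S]}
\]
% In double-reciprocal (Lineweaver--Burk) form the lines share a common
% 1/V_max intercept, the signature of competitive inhibition:
\[
  \frac{1}{v} \;=\; \frac{K_m}{V_{\max}}\left(1 + \frac{[I]}{K_i}\right)\frac{1}{[S]} \;+\; \frac{1}{V_{\max}}
\]
```

As the urea concentration [S] becomes large relative to Km(1 + [I]/Ki), the rate approaches Vmax, which is consistent with the observation that high urea concentrations can displace lactic acid from the active site.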
To date, various types of urease inhibitors have been identified, e.g. hydroxamic acids, phosphoramidates, quinones, polyphenols or heterocyclic compounds 54. Acetohydroxamic acid (AHA) is recommended as a supporting treatment for urolithiasis. However, the therapeutic application of AHA is limited by many side effects, such as the risk of hemolytic anemia or leukopenia 55. Effective and safe urease inhibitors would be a breakthrough in the treatment of urolithiasis caused by urease-producing bacteria. Therefore, many researchers are working on this issue, such as Milo S. et al. 56, who pointed out the effectiveness of another low-molecular-weight organic acid, 2-MA, as a safer alternative to the currently used AHA.
Conclusion
The results of our study indicate that Lactobacillus strains that inhabit the urinary tract are able to suppress the crystallization process, i.e. one of the initial stages of urinary stone formation. In the course of the presented studies, we showed that their antagonistic effect is multidirectional: in addition to their antibacterial properties against P. mirabilis, the lactic acid they produce affects the activity of urease through interaction with the catalytic domain of the enzyme. These studies are preliminary, and their continuation in an in vivo model will confirm the usefulness of lactic acid as a factor supporting the treatment of urolithiasis.
Figure 1. Percentage of ammonia release inhibition (bars) in mixed cultures in synthetic urine, where pure P. mirabilis samples were 100%, and changes in urinary pH (lines) in pure and mixed samples after 3, 6, 8 and 24 h of incubation. (A) Corresponds to P. mirabilis KP; (B) P. mirabilis K8/MC; (C) P. mirabilis 5628 and (D) P. mirabilis 608/221. The results are presented as mean ± standard deviation (SD) of three experiments; *p < 0.05 for comparison of the pH value and ammonia release of P. mirabilis pure culture vs. co-culture with Lactobacillus, Mann-Whitney U test.
Figure 3. Calcium and magnesium concentrations in the tested and control samples after 3, 6, 8 and 24 h of incubation in synthetic urine. (A) Corresponds to P. mirabilis KP; (B) P. mirabilis K8/MC; (C) P. mirabilis 5628 and (D) P. mirabilis 608/221. The results are presented as mean ± standard deviation (SD) of three experiments; **p < 0.01, *p < 0.05 for P. mirabilis viability in mixed cultures vs. pure culture, Mann-Whitney U test.
Figure 6. The Lineweaver-Burk plot showing the competitive inhibition of urease-catalysed hydrolysis of urea by different concentrations of lactic acid (A). Influence of increased concentrations of urea on urease activity in the presence of 38 mM lactic acid; data are presented as the inhibition percentage of ammonia release (B); a zero value means no inhibition of urease activity. The Michaelis-Menten plot of the predicted reaction rate of urea hydrolysis by urease as a function of substrate concentration is shown in Fig. S1 (Supplementary Material).
Figure 7. The three-subunit structure of P. mirabilis urease (homo-trimer) built using homology modeling methods; nickel ions in the active site of the subunits are marked red; the three subunits are shown in different colors.
Figure 8. (A) Lactic acid (LA, in orange) docked to the catalytic center of subunit alpha of P. mirabilis urease, located in the vicinity of two nickel ions (pink spheres). (B) The best docking pose of LA (orange) interacting with amino acids (blue tubes) in the active center of P. mirabilis urease, binding energy ΔG −8.22 kcal/mol (estimated Ki 0.94 μM); hydrogen bonds between LA and His219, Ala167 and Gly277 are visualized as solid lines; long-range interactions (salt bridges) between the lactic acid carboxyl and the imidazole groups of five histidine residues are marked as dashed yellow lines; yellow spheres are charge centers in the imidazole rings and the LA carboxyl; complexation of Ni ions (pink balls) is marked as dark dotted lines; hydrogen atoms are omitted for clarity. Structures were analyzed and visualized in the PLIP 2.2.0 and PyMOL 2.3.4 software. (C) A table listing the amino acids interacting with LA through hydrogen bonds and salt bridges; calculations in PLIP 2.2.0 software.
Preparation of ligands
The 3D structures of L(+)-lactic acid and urea were obtained in the form of .sdf files from the PubChem database (CID: 612 and CID: 1176, respectively) 35, optimized in MarvinSketch 20.14.0, 2020, ChemAxon (http://www.chemaxon.com) and calculated at the DFT/B3LYP/6-31G* level (HyperChem 7.51, HyperCube Inc., Gainesville, FL, USA). Partial charges were preserved in the docking experiments.
Docking experiments
Nickel(2+) ion parameters were implemented into the AutoDock 4.2.6 database and the AutoGrid parameter files: vdW diameter Rii 1.41 Å 36 and vdW well depth epsii 0.013 kcal/mol 37; the remaining parameters were the defaults for metal ions. Kollman charges were assigned to protein atoms, and a charge of +2 to the Ni ions. The docking area was centered at the average position of the intra- and extracellular parts of the protein, broadly covering the catalytic center and external amino acids (80 × 80 × 80 box, grid point spacing 0.375 Å). The Lamarckian genetic algorithm was used for the conformational search of a flexible ligand. Docking parameters were as follows: number of individuals in the population 100, GA-LS runs 50, maximum number of energy evaluations 2.5 × 10^8, maximum number of generations 27,000. The ligand docking poses were analyzed and visualized using the Protein-Ligand Interaction Profiler (PLIP 2.2.0, open-source software) 38 and the PyMOL Molecular Graphics System, Version 2.3.4, Schrödinger, LLC.
Fast and accurate haplotype frequency estimation for large haplotype vectors from pooled DNA data
Background: Typically, the first phase of a genome wide association study (GWAS) includes genotyping across hundreds of individuals and validation of the most significant SNPs. Allelotyping of pooled genomic DNA is a common approach to reduce the overall cost of the study. Knowledge of haplotype structure can provide additional information to single locus analyses. Several methods have been proposed for estimating haplotype frequencies in a population from pooled DNA data. Results: We introduce a technique for haplotype frequency estimation in a population from pooled DNA samples focusing on datasets containing a small number of individuals per pool (2 or 3 individuals) and a large number of markers. We compare our method with the publicly available state-of-the-art algorithms HIPPO and HAPLOPOOL on datasets of varying numbers of pools and marker sizes. We demonstrate that our algorithm provides improvements in terms of accuracy and computational time over competing methods for large numbers of markers while demonstrating comparable performance for smaller marker sizes. Our method is implemented in the "Tree-Based Deterministic Sampling Pool" (TDSPool) package which is available for download at http://www.ee.columbia.edu/~anastas/tdspool. Conclusions: Using a tree-based deterministic sampling technique we present an algorithm for haplotype frequency estimation from pooled data. Our method demonstrates superior performance in datasets with a large number of markers and could be the method of choice for haplotype frequency estimation in such datasets.
Background
In recent years large genetic association studies involving hundreds or thousands of individuals have become increasingly available, providing opportunities for biological and medical discoveries. In these studies, hundreds of thousands of SNPs are genotyped for the cases and the controls, and discrepancies between the haplotype distributions indicate an association between a genetic region and the disease. Typically, the first phase of a GWAS includes genotyping across hundreds of individuals and validation of the most significant SNPs. One possible approach to reducing the overall cost of GWAS is to replace individual genotyping in phase I with allelotyping of pooled genomic DNA [1][2][3][4][5][6]. Here, equimolar amounts of DNA are mixed into one sample prior to the amplification and sequencing steps. After genotyping, the frequency of an allele in each position is given [5].
Rather than examining SNPs independent of each other, simultaneously considering the values of multiple SNPs within haplotypes (combinations of alleles at multiple loci in individual chromosomes) can improve the power of detecting associations with disease and is also of general interest with the pooled data. To facilitate haplotype-based association analysis it is necessary to estimate haplotype frequencies from pooled DNA data.
A variety of algorithms have been suggested to estimate haplotype frequencies from pooled data. Available methods fall into two large categories. The first category consists of methods that focus on accurate solutions for small pool sizes (2 or 3 individuals per pool) and considerably large genotype segments. Many well-known approaches that focus on small pool sizes use an expectation-maximization (EM) algorithm for maximizing the multinomial likelihood [7][8][9]. Pirinen et al. [10] extended the gold-standard PHASE algorithm [11] to the case of pooled data. They introduced a novel step in the Markov Chain Monte Carlo (MCMC) scheme, during which the haplotypes within each pool were shuffled to simulate individuals on which the original PHASE algorithm could be run to estimate the haplotypes. A method based on perfect phylogeny, HAPLOPOOL, was suggested in [12] and was supplemented with the EM algorithm and linear regression in order to combine haplotype segments. HAPLOPOOL has demonstrated superior performance in terms of accuracy and computational time with respect to the competing EM algorithms. The second category consists of methods that focus on large pools (on the order of hundreds of individuals per pool) and considerably smaller genotype segments. For this scenario, Zhang et al. [13] first proposed a method (PoooL) for estimating haplotype frequencies using a normal approximation for the distribution of pooled allele counts. Imposing a set of linear constraints they transformed the EM algorithm into a constrained maximum entropy problem which they solved using the iterative scaling method. Kuk et al. [14] improved the PoooL methodology, using the ratio-of-normal-densities approximation in the EM, which resulted in the AEM method. Gasbarra et al. [15] introduced a Bayesian haplotype frequency estimation method combining the pooled allele frequency data with prior database knowledge about the set of existing haplotypes in the population. Finally, HIPPO [16] used a multinormal approximation of the likelihood and a reversible-jump Markov chain Monte Carlo (RJMCMC) algorithm to estimate the existing haplotypes in the population and their frequencies. The HIPPO framework is also able to accommodate prior database knowledge of the existing haplotypes in the population and has demonstrated improvements in performance over the approximate EM algorithm [16]. In this study we therefore compare our proposed algorithm with the top-performing methods from each category as discussed above, namely HIPPO and HAPLOPOOL.
Naturally, pooling techniques are more prone to errors and offer fewer possibilities for assessing the quality of the data than individual genotyping. As argued and discussed by Kirkpatrick et al. [12], pooling errors have a much greater effect on larger pool sizes as opposed to small pool sizes with respect to the number of incorrect allele calls and the subsequent haplotype estimation. Specifically, if σ is the error standard deviation (SD) in the estimates of allele frequencies, 2σ should be less than the difference between allowable frequency estimates in order for clustering algorithms to be able to correct the error. As more individuals are included in each pool, the difference between allowable allele frequencies decreases, which results in a higher percentage of incorrect calls. For example, in pools of two individuals, where the difference between allowable frequency calls is 0.25 (0, 0.25, 0.5, 0.75, 1), an accuracy of σ < 0.125 will ensure a low rate of incorrect calls (<1%).
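As a small illustrative calculation of the quantities discussed above, the sketch below computes the spacing between allowable allele-frequency calls for a pool of N diploid individuals (1/(2N)) and the probability that Gaussian measurement noise with a given SD pushes a frequency past the nearest rounding boundary. The numbers printed are illustrative only and are not taken from the cited studies.

```python
from math import erf, sqrt

def pooling_error_tolerance(n_individuals: int, sigma: float):
    """Spacing of allowable allele-frequency calls for a pool of n_individuals
    diploids, and the probability that Gaussian noise with SD sigma pushes a
    call past the nearest rounding boundary (half the spacing)."""
    spacing = 1.0 / (2 * n_individuals)        # e.g. 0.25 for pools of 2
    boundary = spacing / 2.0
    # P(|noise| > boundary) for a zero-mean Gaussian with SD sigma.
    p_miscall = 2 * (1 - 0.5 * (1 + erf(boundary / (sigma * sqrt(2)))))
    return spacing, p_miscall

# Pools of two individuals with a small allele-frequency error SD.
print(pooling_error_tolerance(2, 0.05))   # spacing 0.25, miscall probability ~1.2%
print(pooling_error_tolerance(3, 0.05))   # spacing ~0.167, miscall probability rises
```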
In a recent study Kuk et al. [17] examined the efficiency of pooling relative to no pooling using asymptotic statistical theory. They found that under linkage equilibrium (not a typical case!) pooling suffers a loss in efficiency when there are more than three independent loci (2^3 haplotypes) and up to four individuals per pool, whereas accuracy decreases with increasing pool size and number of loci. Rare alleles or linkage disequilibrium (LD) (or both) decrease the number of haplotypes that appear with non-negligible frequencies, and thus pooling could remain efficient for larger haplotype blocks. In general, pooling could still remain more efficient in the case where only a small number of haplotypes can occur with appreciable frequency, as also suggested in Barratt et al. [18], while the pool size is kept considerably small.
In this paper we propose a new tree-based deterministic sampling method (TDSPool) for haplotype frequency estimation from pooled DNA data. Our method specifically focuses on small pool sizes and can handle arbitrarily large block sizes. In our study, we examine real data focusing on dense SNP areas, in which only a small number of haplotypes appear with appreciable frequency, so that our scenarios are within the limits of Kuk et al. [17]. We demonstrate that using our methodology we can achieve improved performance over existing state-of-the-art methods in datasets with large number of markers.
Results
In order to compare the accuracy of frequency estimation between the different methods and under the different scenarios examined, we compared the predicted haplotype frequencies from a given method, f, to the gold-standard frequencies, g, observed in the actual population. The measure we used was the χ² distance between the two distributions, which is simply the χ² statistic with g as the expected distribution, i.e., Σ_{i=1}^{d} (f_i − g_i)² / g_i, where d is the number of gold-standard haplotypes [12].
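As a concrete illustration of this measure, the snippet below computes the χ² distance between an estimated haplotype frequency vector f and a gold-standard vector g over the d gold-standard haplotypes. Summing only over haplotypes with non-zero gold-standard frequency is one common convention and is an assumption here rather than a detail taken from [12]; the example frequencies are made up.

```python
def chi2_distance(f: dict, g: dict) -> float:
    """Chi-square distance between estimated (f) and gold-standard (g)
    haplotype frequency distributions, keyed by haplotype string."""
    # Sum over the d gold-standard haplotypes; g[h] is the expected frequency.
    return sum((f.get(h, 0.0) - g_h) ** 2 / g_h for h, g_h in g.items() if g_h > 0)

gold = {"000": 0.45, "011": 0.30, "110": 0.20, "101": 0.05}
est  = {"000": 0.47, "011": 0.28, "110": 0.19, "101": 0.06}
print(round(chi2_distance(est, gold), 5))   # ~0.00472
```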
Datasets
To examine the performance of our methodology we have considered in our experiments real datasets for which estimates of the haplotype frequencies were already available and which cover a variety of dataset sizes.
We first simulated data using the three-loci haplotypes and their associated frequencies from the dataset of Jain et al. [19] as the true distribution (Table 1). The haplotypes and their frequencies were estimated using the EM algorithm from a set of 135 individuals genotyped on three SNPs, and the estimates were used as the true haplotype distribution. We simulated datasets with a variable number of pools, T = 50, 75, 100 and 150. In each pool, each individual randomly selected a pair of haplotypes according to the haplotype distribution. We created pools with two different pool sizes, 2 and 3 individuals per pool. For each number of pools and each pool size we created 100 datasets that were used as the datasets for our simulation.
Next, we considered two more cases with larger number of loci. In the second case which has L = 10 loci, we generated data according to the haplotype frequencies of the AGT gene considered in Yang et al. [9]. The haplotypes and their respective frequencies are given in Table 2. The procedure for creating datasets and pools was identical to the three loci case.
The third dataset consisted of SNPs from the first 7Mb (742 kb to 7124.8 kb) of the HapMap CEU population (HapMap 3 release 2-Phasing data). This chromosomal region was partitioned based on physical distance into disjoint blocks of 15 kb. The resulting blocks had a varying number of markers ranging from 2-28. For our purposes we have considered only the datasets that had more than 10 SNPs and less than 20 (which was the maximum number of loci so that HAPLOPOOL could produce estimates within a reasonable amount of time) which resulted in selecting a total of 80 blocks. On each block the parental haplotypes and their estimated frequencies were used as the true haplotype distribution. As in the previous cases, in each block two different pool sizes, 2 and 3 individuals per pool, were considered and four different number of pools per dataset.
Frequency estimation
We have examined the accuracy of our method and compared it against HIPPO and HAPLOPOOL on the three datasets described in our previous subsection. In all experiments considered in this subsection the DNA pools were simulated assuming no missing data or measurement error. The performance of the methods is shown in Figure 1.
For the 3 and 10 loci datasets the result presented is the average χ² distance over 100 simulation experiments, whereas for the HapMap dataset the result presented is the average χ² distance over the 80 datasets considered. For the 3 loci dataset it can be seen that TDSPool and HAPLOPOOL produced similar accuracy. For the remaining two datasets with larger numbers of loci TDSPool demonstrated superior performance. For the HapMap dataset only TDSPool and HAPLOPOOL were evaluated, since the maximum number of loci HIPPO can handle without prior knowledge of the major haplotypes in the population is 10. At the same time, even though HAPLOPOOL can in principle handle larger datasets, due to excessive computational time for datasets with 24 and 28 loci we restricted our comparisons to datasets between 10 and 20 loci. We note here as well that since HIPPO is based on a central limit theorem it is likely to be a better approximation in large pools, as opposed to the small ones that we focus on in our study.
From our experiments we can also see that the number of pools also affected accuracy. All algorithms demonstrated improved performance with increasing number of pools in the dataset.
Noise and missing data
In the previous subsection we have evaluated the performance of our method by simulating DNA pools without missing data and measurement errors. However, in allelotyping pooled DNA, allele frequencies may not be estimated properly in some practical situations and the data are consequently missing or have measurement errors.
In order to measure the effect of genotyping error on the accuracy of the haplotype frequency estimation and evaluate the performance of our method under such scenarios, we simulated genotyping error by adding a Gaussian error with SD σ to each called allele frequency. Suppose we denote the correct allele frequency at SNP j in pool i as c_ij. The perturbed allele frequency is given by ĉ_ij = c_ij + x, where x ∼ N(0, σ²). After simulating these perturbed allele frequencies, we discretize the resulting frequencies to produce perturbed allele counts that are consistent with the number of haplotypes in each pool. We considered a variety of values for σ, ranging from 0 to 0.06, similar to Kirkpatrick et al. [12]. The perturbed datasets examined were derived from the unperturbed datasets used in the previous subsection with the procedure described above. The results are shown in Figure 2. Due to space limitations we give the results only when the number of pools is 75, but the shape of the figures is similar for the remaining numbers of pools examined in our previous subsection. For a small number of loci, HAPLOPOOL achieves the best performance. However, for larger datasets TDSPool outperforms all competing methods.
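The perturbation scheme just described can be sketched as follows: Gaussian noise with SD σ is added to each true pool allele frequency, and the result is mapped back to the nearest count consistent with the 2N haplotypes in the pool. The rounding rule used here (nearest allowable count, clipped to [0, 2N]) is our reading of the description and may differ in detail from the authors' implementation.

```python
import numpy as np

def perturb_pool_counts(true_counts: np.ndarray, n_individuals: int,
                        sigma: float, rng: np.random.Generator) -> np.ndarray:
    """true_counts: allele-1 counts per SNP for one pool (values 0..2N).
    Returns perturbed counts consistent with 2N haplotypes per pool."""
    n_hap = 2 * n_individuals
    freqs = true_counts / n_hap                               # c_ij
    noisy = freqs + rng.normal(0.0, sigma, size=freqs.shape)  # c_ij + x
    # Discretize back to an allowable count in {0, 1, ..., 2N}.
    return np.clip(np.rint(noisy * n_hap), 0, n_hap).astype(int)

rng = np.random.default_rng(0)
print(perturb_pool_counts(np.array([0, 1, 3, 4]), n_individuals=2,
                          sigma=0.03, rng=rng))
```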
Furthermore, we have evaluated the performance of our methodology with missing data. We randomly masked 1 and 2% of the SNPs, respectively, on the 10 loci datasets and estimated the accuracy. As shown in Figure 3, missing SNPs result in small losses in accuracy and, as expected, the error decreases with increasing pool number.
Timing results
The computational times for all datasets are displayed in Table 3. All methods were run with their default parameters. Specifically, for HIPPO the default number of iterations was 100000 and for TDSPool the default number of streams (as will be defined in the "Methods" section) used throughout our experiments was chosen to be 50. Based on these results HIPPO was the slowest performing method in all datasets performing more than 20 times slower than the remaining two algorithms in the ten loci dataset. For the three loci dataset all methods were able to estimate the haplotype frequencies within six seconds. For the ten loci dataset HAPLOPOOL and TDSPool were still able to produce the results in less than three seconds whereas HIPPO demanded more than 58 seconds to finish. For the HapMap datasets again both methods TDSPool and HAPLOPOOL were able to finish the procedure within four seconds. In the ten loci and HapMap datasets TDSPool demonstrated better performance compared to HAPLOPOOL when the number of pools in each dataset was more than 75. Therefore, for all practical applications all methods are fast enough and within limits for researchers to use.
Discussion
We have introduced a new algorithm for estimating haplotype frequencies from datasets with pooled DNA samples and we have compared it with existing available packages. We have shown that for datasets with small number of loci our algorithm has comparable performance to state-of-the-art methods in terms of accuracy and computational time but it demonstrates superior performance for datasets with larger number of loci.
Our method specifically focuses on small pool sizes and we have demonstrated the performance on pools of two or three individuals. In our experiments we have partitioned pooled genotype vectors in blocks of 4 SNPs as described in the "Partition-Ligation" subsection. We have chosen to partition the pooled genotypes every 4 SNPs so that computations are performed fast and we avoid cases with huge number of solutions. Partitioning the dataset every 3 SNPs had negligible impact on the accuracy of our results (results not shown) whereas partitioning every 5 SNPs in general can produce block pool genotypes with thousands of solutions, especially when missing data occur.
In the framework developed by Pirinen [16], which had resulted in HIPPO, the algorithm was able to accommodate prior database information on existing haplotypes in a population. Similarly, our methodology offers a framework that can easily incorporate prior knowledge in the form of known haplotypes from the same population as that from which the target pools were created. When such existing haplotypes are known (such as those available from the HapMap), they can be easily introduced in the form of a prior for the counts in the TDSPool algorithm. The presence of the extra information will improve the frequency estimation accuracy in the target population.
Conclusions
We have introduced a new algorithm for estimating haplotype frequencies from pooled DNA samples using a Tree-Based Deterministic sampling scheme. Algorithms for haplotype frequency estimation from pooled data fall into two categories. The first category consists of algorithms that focus on accurate solutions and allow for considerably large genotype segments and the second category of algorithms that focus on small segments but allow for a large number of individuals per pool. We have compared our methodology with state-of-the-art algorithms from each category, namely HAPLOPOOL and HIPPO. We have focused on scenarios and datasets in which the use of pooling data is suggested for haplotype frequency estimation according to the study of Kuk et al. [17]. In specific, our method focuses on scenarios where pools contain 2 or 3 individuals and we have shown that for such scenarios our method demonstrates comparable or better performance compared with competing algorithms for a small number of loci and outperforms these algorithms for a large number of loci. Furthermore, our TDSPool methodology provides a straightforward framework for incorporating prior database knowledge into the haplotype frequency estimation.
Methods
In the beginning of the section we introduce some notation. We then present the prior and posterior distributions given the data and derive the state update equations for the TDSPool estimator. We further present the modified partition-ligation procedure adjusted for the pooled data, so that we are able to handle larger haplotype vectors, and we finally give a summary of the proposed procedure.
Figure 3. Accuracy of haplotype frequency estimates with missing data: χ² distance for the 10 loci dataset with 0, 1 and 2% of missing SNPs.
Definitions and notation
Suppose we are given a set of pooled DNA measurements on L diallelic loci. We denote the two alleles at each locus by 0 and 1, for convenience of representation. Following the common notation, we use the count of allele 1 as the measurement for each locus on each pooled DNA sample; this count can be converted from the estimated allele frequencies and constitutes the pool genotype. Therefore, if the size of a pool is N individuals, the count for each locus can vary between 0 and 2N.
Suppose that we have T such pools, each of size N_t, t = 1, ..., T. We denote α_t = {α_t^1, ..., α_t^L} to be the pool genotype of the t-th pool, where α_t^j ∈ {0, ..., 2N_t}. Suppose also that A_t = {α_1, ..., α_t} is the set of pool genotypes of pools up to and including pool t, and let A denote the full set of pool genotypes. In pool t we denote the haplotypes occurring in that pool as h_t = {h_t,1, ..., h_t,2N_t}, where h_t,i ∈ {0, 1}^L is a binary string of length L and the minor allele is present at position j in haplotype i if h_t,i,j = 0. We further define H_t = {h_1, ..., h_t}, similarly to A_t, as the set of haplotypes for each genotype pool up to and including pool t. A schematic representation of the dataset and the notation used is given in Figure 4.
Let us also define Z = {z_1, ..., z_M}, where z_m ∈ {0, 1}^L is a binary string of length L in which 0 and 1 correspond to the two alleles at each locus, as the set containing all haplotype vectors of length L that are consistent with any pool genotype in the set A. To obtain Z from the given dataset A, we first enumerate for each α_i the subset ψ_i = {h_i^1, ..., h_i^Y}, i = 1, ..., T, that contains all possible haplotype assignments which are consistent with α_i. The set Z is then given simply by Z = ∪_{i=1}^{T} ψ_i. A set of population haplotype frequencies θ = {θ_1, ..., θ_M} is also associated with the set Z of all possible haplotype vectors, where θ_m is the probability with which the haplotype z_m occurs in the total population.
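To make the construction of the sets ψ_i concrete, the brute-force enumeration below lists every multiset of 2N haplotypes over L loci whose per-locus sums of allele 1 reproduce a given pool genotype. Its cost grows very quickly with L, which is one reason the partition step described later keeps blocks to a few SNPs. This is an illustrative sketch, not the enumeration routine used in TDSPool.

```python
from itertools import combinations, product

def consistent_haplotype_sets(pool_genotype, n_individuals):
    """All multisets of 2N haplotypes (as sorted tuples of 0/1 strings)
    whose per-locus allele-1 counts equal the given pool genotype."""
    n_hap = 2 * n_individuals
    L = len(pool_genotype)
    # For each locus, choose which haplotype slots carry allele 1.
    per_locus = [list(combinations(range(n_hap), a)) for a in pool_genotype]
    solutions = set()
    for choice in product(*per_locus):
        haps = [[0] * L for _ in range(n_hap)]
        for locus, slots in enumerate(choice):
            for s in slots:
                haps[s][locus] = 1
        solutions.add(tuple(sorted("".join(map(str, h)) for h in haps)))
    return sorted(solutions)

# Pool of 2 individuals (4 haplotypes), 3 loci, allele-1 counts (2, 1, 0).
for sol in consistent_haplotype_sets((2, 1, 0), n_individuals=2):
    print(sol)
```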
Probabilistic model
Assuming random mating in the population, it is clear that the number of each unique haplotype in H is drawn from a multinomial distribution based on the haplotype frequencies θ [20]. This leads us to the use of the Dirichlet distribution as the prior distribution for θ [21], so that θ ∼ D(ρ_1, ..., ρ_M) with mean E{θ_m} = ρ_m / Σ_{k=1}^{M} ρ_k. We denote by ρ_m(t), m = 1, ..., M, the parameters of the distribution of θ after the t-th pool, and I(z_m − h_t,i), with i = 1, ..., 2N_t, is the indicator function which equals 1 when z_m − h_t,i is a vector of zeros, and 0 otherwise. We have shown that the posterior distribution for θ is also Dirichlet with parameters as given in (1) and depends only on the sufficient statistics T_t = {ρ_m(t), 1 ≤ m ≤ M}, which can be easily updated based on T_{t−1}, h_t, α_t as given by (1), i.e. T_t = T_t(T_{t−1}, h_t, α_t).
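Since equation (1) itself is not reproduced in this excerpt, the sketch below shows the standard conjugate Dirichlet update that matches the description: each sufficient statistic ρ_m is incremented by the number of haplotypes in the newly processed pool that equal z_m, and the posterior mean of θ follows directly. Treat this as the generic conjugate form rather than a verbatim transcription of the TDSPool equations.

```python
import numpy as np

def update_dirichlet(rho: np.ndarray, pool_haplotypes, Z) -> np.ndarray:
    """Conjugate update of Dirichlet parameters rho (one per haplotype in Z)
    after observing the 2N haplotypes assigned to one pool."""
    rho = rho.copy()
    index = {z: m for m, z in enumerate(Z)}
    for h in pool_haplotypes:          # h_t,1 ... h_t,2Nt
        rho[index[h]] += 1             # add the indicator I(z_m = h_t,i)
    return rho

Z = ["00", "01", "10", "11"]
rho = np.ones(len(Z))                  # flat prior
rho = update_dirichlet(rho, ["00", "01", "01", "11"], Z)
theta_mean = rho / rho.sum()           # posterior mean E{theta_m}
print(rho, theta_mean)
```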
Inference problem
Following the notation we used in our previous subsections we can summarize the frequency estimation problem as follows: Given A = {α 1 , . . ., α T } the set of observed pool genotype vectors and Z = {z 1 , . . ., z M } the set of haplotypes compatible to the pool genotypes in A we wish to infer H = {h 1 , . . ., h T } the unknown haplotypes in each pool and θ = {θ 1 , . . ., θ M } the haplotype frequencies of all the haplotypes occurring in the population.
Computational algorithm (TDSPool)
Similar to traditional Sequential Monte Carlo (SMC) methods, we assume that by the time we have processed pool genotype α_{t−1} we have K sets of solution streams (i.e. sets of candidate haplotypes for pools 1, ..., t−1), properly weighted with respect to the posterior distribution p(H_{t−1}|A_{t−1}).
Figure 4. Schematic representation of the notation used in our methodology. For each pool genotype α_t and at each locus j, the value of the pool genotype at that locus, α_t^j, is the sum of the values at that locus across all haplotypes in that pool, i.e. α_t^j = Σ_{i=1}^{2N_t} h_t,i,j.
Partition-Ligation
In the partition phase the dataset is divided into small segments of consecutive loci. Once the blocks are phased, they are ligated together using a modified extension of the Partition-Ligation (PL) method [21] for the case of pooled data. In our current implementation, to be able to derive all possible solution combinations for each pool genotype efficiently, we have decided to keep the maximum block length at 4 SNPs. Clearly, the more SNPs are included in a block, the more information about the LD patterns we can capture, but at the same time the number of possible combinations increases and becomes prohibitive for more than 5 SNPs. For our experiments, in a dataset with L loci we considered L/4 blocks of 4 consecutive loci, and the remaining SNPs were treated as a separate block; a sketch of this partitioning rule is given below.
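The helper below is equivalent to the partition rule described above (consecutive blocks of at most four loci, with any trailing remainder kept as a final shorter block); the exact bookkeeping in the released TDSPool package may differ.

```python
def partition_loci(n_loci: int, block_size: int = 4):
    """Split locus indices 0..n_loci-1 into consecutive blocks of at most
    block_size loci; the trailing remainder forms its own block."""
    return [list(range(start, min(start + block_size, n_loci)))
            for start in range(0, n_loci, block_size)]

print(partition_loci(10))   # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```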
The result of phasing for each block is a set of haplotype solutions for each pool genotype. Two neighbouring blocks are ligated by creating merged solutions for each pool genotype from all combinations of the block solutions, one from each block. When creating a merged solution for a pool genotype from the two separate solutions (one from each block), since we do not know which haplotypes belong to the same chromosome, all different possible assignments are examined. The TDSPool algorithm is then repeated in the same manner as it was for the individual blocks.
Furthermore, the order in which the individual blocks are ligated is not predetermined. We first ligate the blocks that would produce, at each step, the minimum-entropy ligation. This procedure allows us to ligate the most homogeneous blocks first, so that we have more certainty in the solutions that we produce as we move through the ligation procedure.
Genome-Wide Identification and Expression Analysis of the Kinesin Family in Barley (Hordeum vulgare)
Kinesin, as a member of the molecular motor protein superfamily, plays an essential role in various developmental processes of plants, especially at the early stages of growth, influencing growth rate, yield, and quality. In this study, we performed a genome-wide identification and expression profile analysis of the kinesin family in barley. Forty-two HvKINs were identified and screened from the barley genome, and a phylogenetic tree was generated to compare their evolutionary relationships with rice and Arabidopsis. The protein structures, physicochemical properties, and bioinformatic features of the HvKINs were also dissected. Our results reveal the important regulatory roles of HvKIN genes in barley growth. We found many cis-acting elements related to GA3 and ABA in the promoter regions of HvKIN genes and verified their responsiveness by QRT-PCR, indicating their potential role in the barley kinesin family. The current study reveals the biological functions of kinesin genes in barley and will aid in further investigating kinesins in other plant species.
Introduction
Kinesin superfamily proteins are important players in cellular transport in eukaryotic cells and are involved in complex cytological processes, mainly protein transport, mitosis, meiosis, signal transduction, and flagellar motility [1]. Kinesins use microtubules as motor "tracks" and hydrolyse ATP, converting chemical energy into the mechanical work that drives their intracellular activities [2]. Kinesin was first discovered in 1985; kinesin and actin have similar core functions and play a role in processes such as cellular transport and mitosis [3,4]. New functions of kinesins continue to be discovered, and the family is being studied in ever greater depth.
Members of the kinesin family have different traits, but the core structure is essentially the same. The kinesin structure consists of two heavy chains (KHC) and two light chains (KLC) [5,6]. The four structural domains of the heavy chain are the motor domain, the dimerization domain, the neck chain, and the tail chain. A cartoon of the kinesin structure is shown in Figure 1; it has two functional sites, the ATP- and microtubule-binding sites [5,6]. The motor domain is the most conserved of the four structural domains [5,6], and its location within the molecule corresponds to the mode of operation: N-kinesins carry the motor domain at the amino terminus and move toward the plus end of the microtubule, towards the cell's periphery, whereas C-kinesins move toward the minus end of the microtubule, and M-kinesins act to depolymerize microtubules [1]. The dimerization domain forms the stem of the molecule. At present, kinesins have been classified into 14 groups (kinesin-1~kinesin-14) based on their conserved sequences [1]. Kinesin-1 to Kinesin-3 proteins, when not transporting cargo, reduce ATP consumption and microtubule utilization through an autoinhibitory conformation [10]. Kinesin-1 and kinesin-2 mainly form dimers for autoinhibition, whereas kinesin-3 completes autoinhibition by interlocking the neck and motor structures [11]. AtKRP125b belongs to the kinesin-5 family and is mainly involved in mitotic processes in Arabidopsis [12]. Similarly, knockout of the ATK1 gene leads to abnormal mitosis and failure to form normal microspores in Arabidopsis [13]. Kinesin-12E regulates mid-mitotic spindle function and plays an important role in mitosis in Arabidopsis [14].
Changes in the structural domain of kinesin-2 can lead to abnormal pollen wall development in Arabidopsis [15]. In addition, kinesin is also involved in different processes, such as drought stress [2,16], nuclear division [17,18], and vesicle formation in the root periphery [19]. In rice, Kinesin-5 is the primary kinetic motor of the mitotic spindle [20]. Kinesin-14 can enter the nucleus in response to cold [21], and Kinesin-4 can regulate granule width by controlling cell proliferation [22]. SRS3, a Kinesin-13 protein subfamily gene, was discovered to be capable of regulating seed length. SAR1, on the other hand, is a kinesin gene that, like M-kinesin, can depolymerize cellular microtubules to affect seed shape and size [23]. However, the kinesin family's role in barley has received little attention. As a result, the role of kinesins in barley growth and development merits further investigation.
Barley is the fourth largest crop in the world, after rice, maize, and wheat. It is grown on a large scale worldwide because of its high stress resistance and adaptability, and it is widely used for fodder, medicine, and brewing. Barley quality and yield are sensitive to key processes during growth and development, among them seed development, organ development, and cell division, all of which are influenced by kinesins [22,24,25]. There are currently no specific reports on barley kinesin gene families and their functions during the various developmental stages. To predict the role of barley kinesin families in growth and development, we compared the evolutionary features of kinesin families in barley, rice, and Arabidopsis. The biological functions, expression patterns, and protein structures of the forty-two barley kinesins screened were also studied. The roles of barley kinesin genes in barley growth were further revealed, and some theoretical guidance for future studies is provided by analyzing the evolutionary genetic relationships, including gene structures, chromosomal locations, conserved motifs, and cis-regulatory elements.
Materials and Data Sources
Golden Promise barley was planted in the experimental field on campus (31.85° N, 117.26° E) at temperatures of 15-25 °C. The plant genomic database Phytozome (phytozomenext.jgi.doe.gov/, accessed on 9 October 2021) was used to retrieve the AtKIN, OsKIN, and HvKIN protein sequences [26]. Using PF00225 (http://pfam.xfam.org/, accessed on 9 October 2021) as the kinesin signature domain, a Hidden Markov Model (HMM) search was performed [27,28] with an E-value threshold of < 10^-10, which screened out the forty-two HvKIN candidates.
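One common way to run such a screen is HMMER's hmmsearch with the PF00225 profile against the barley protein set, followed by filtering on the full-sequence E-value. The paper does not state the exact tool invocation, so the file name below and the assumption that the full-sequence E-value sits in the fifth whitespace-separated column of a standard HMMER3 --tblout file are ours.

```python
def candidate_kinesins(tblout_path: str, evalue_cutoff: float = 1e-10):
    """Return target sequence IDs whose full-sequence E-value passes the cutoff
    in an HMMER3 --tblout file (comment lines start with '#')."""
    hits = []
    with open(tblout_path) as handle:
        for line in handle:
            if line.startswith("#") or not line.strip():
                continue
            fields = line.split()
            target, full_seq_evalue = fields[0], float(fields[4])
            if full_seq_evalue < evalue_cutoff:
                hits.append(target)
    return hits

# Hypothetical output file from a PF00225 search against barley proteins.
print(len(candidate_kinesins("PF00225_vs_barley.tblout")))
```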
Phylogenetic Analysis of Barley HvKIN Proteins
Multiple sequence alignments of the protein sequences were performed in MEGA-X software (www.megasoftware.net/ accessed on 23 October 2021) using the ClustalW (ClustalW2) tool, with parameters set to Gap Opening Penalty = 10, Gap Extension Penalty = 0.2, and Delay Divergent Cutoff = 30%. The evolutionary tree was then constructed using the maximum likelihood method with Bootstrap = 1000 [30,31].
Chromosomal Localization, Gene Structure, Conserved Structural Domains, Promoter Analysis, and Covariance Analysis of the HvKIN Gene
The Barley Genome Database (phytozome.jgi.doe.gov/pz/portal.html accessed on 17 December 2021) was used to download the CDS and protein sequences. The barley IPK website (apex.ipk-gatersleben.de/apex/f?p=284:10 accessed on 17 December 2021) was used to predict the position of each gene. Gene structure analysis and chromosomal localization of the HvKIN genes were completed using TBtools software (bio.tools/tbtools accessed on 20 December 2021) [32]. The conserved functional and structural domains in the HvKIN protein sequences were identified using MEME (meme-suite.org/meme/tools/meme accessed on 20 December 2021). The 2 kb of genomic DNA sequence upstream of each gene was retrieved from the barley genome, and the cis-regulatory elements were analyzed by the PlantCARE online tool [33,34]. Ka/Ks was predicted using the website tool (bio.tools/kaks_calculator accessed on 12 January 2022).
Expression Modelling Analysis of the HvKINs, QRT-PCR Analysis, and Hormone (ABA and GA3) Treatment
To investigate particular expression profiles of HvKINs in different barley tissues and phases of development, the results of RNA-Seq from various development stages were acquired from the IPK website, and heatmaps were plotted by the online TBtools tool. The expression level of the forty-two genes was verified by QRT-PCR in root seedlings (2 cm stem stage), stems (30 days), leaves (30 days), immature fruit (2-week-old sowings), and mature seeds of barley, using TRNzol (Invitrogen, Waltham, MA, USA) reagent to extract the total RNA. RNA quality and quantity parameters of all the samples were verified by Nano Drop 1000 spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA). DNA contamination was removed by reverse transcription using DNase I (TaKaRa), and first-strand cDNA was generated from one microgram of total RNA using HiScript III Reverse Transcriptase R302 (Nanjing Vazyme Biotech Co., Ltd., Nanjing, China). QRT-PCR procedures and data analysis were performed as previously described [35,36]. HvACTIN (HORVU1Hr1G074350.1) was chosen as an internal reference gene [37]. The primers used for QRT-PCR are listed in Supplementary Table S1.
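The relative-expression calculation behind such QRT-PCR comparisons is typically the 2^-ΔΔCt method with HvACTIN as the internal reference; since the previously described protocol is not reproduced here, the snippet below shows that standard calculation as an assumption rather than the authors' exact pipeline, and the Ct values are invented for illustration.

```python
def relative_expression(ct_target, ct_reference,
                        ct_target_calibrator, ct_reference_calibrator):
    """Fold change of a target gene by the 2^-ddCt method.
    ct_* are mean Ct values; the calibrator is the control sample."""
    d_ct_sample = ct_target - ct_reference                    # normalize to the reference gene
    d_ct_calibrator = ct_target_calibrator - ct_reference_calibrator
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2 ** (-dd_ct)

# Hypothetical Ct values: HvKIN gene vs HvACTIN, treated leaf vs control leaf.
print(relative_expression(24.1, 18.0, 26.0, 18.2))   # ~3.2-fold up-regulation
```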
Barley plants growing at 30 days were treated with 100 uM ABA [38] and 50 mg/L GA3 [35], while the control was sprayed with an equal amount of pure water. Leaves were selected at 0 h and 12 h, and the RNA was extracted and estimated by QRT-PCR to detect HvKIN gene expression.
Subcellular Localization of the HvKINs in Barley
To investigate the transient expression of HvKINs in tobacco leaves, full-length HvKIN CDS was PCR amplified using the primers containing the Bgl II and Spe I restriction enzymes (sequences in Supplementary Table S2) and ligated into the vector pCAMBIA1301-eGFP digested with Bgl II and Spe I to generate 1301-35Spro::HvKINCDS-GFP. The 1301-35Spro::HvKINCDS-GFP fusion protein translation initiates at the start codon in the Nco I restriction enzyme site, which is located before Bgl II and causes no frameshift change of the HvKIN proteins. Constructed vectors were transformed into Agrobacterium tumefaciens GV3101 and infiltrated into leaves of four-week-old tobacco plants, and after 48 h dark incubation, the fluorescent signal was observed using a confocal microscope (Leica TCS SP5) in the tobacco leaf epidermis. The detailed observation by microscope was performed as previously described [39].
Identification of HvKIN Genes in Barley
To fully understand the evolutionary history of the KIN family of barley, BLAST analysis and conserved structural domain analysis were performed using kinesin sequences from Oryza sativa and Arabidopsis thaliana. HMM searches were validated through the Pfam website, and forty-two kinesin genes in total were recognized from the barley Morex genome and designated HvKIN1-HvKIN42 (Supplementary Table S3). In addition, we analyzed the physical properties of each kinesin by bioinformatic prediction (Supplementary Table S3). Briefly, the HvKIN protein lengths ranged from 210 to 3016 amino acids, and the protein molecular weights (MW) ranged from 23.71 to 341.86 kDa. The isoelectric point (pI) is predicted to range from 4.85 to 9.82, with 71% (30/42) having an acidic isoelectric point of less than 7. The hydrophobicity indices of the family proteins are all negative, indicating that the HvKINs are all hydrophilic proteins. All forty-two HvKIN proteins are also predicted to be stable.
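Physicochemical predictions of this kind (molecular weight, isoelectric point, hydrophobicity, stability) can be reproduced in outline with Biopython's ProtParam module. Which specific server the authors used is not stated, so this is only a comparable calculation, and the example sequence is a placeholder rather than an actual HvKIN protein.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

def protein_summary(aa_sequence: str) -> dict:
    """Basic physicochemical descriptors for one protein sequence."""
    analysed = ProteinAnalysis(aa_sequence)
    return {
        "length": len(aa_sequence),
        "molecular_weight_kDa": analysed.molecular_weight() / 1000.0,
        "isoelectric_point": analysed.isoelectric_point(),
        "gravy": analysed.gravy(),                           # negative => hydrophilic
        "instability_index": analysed.instability_index(),   # < 40 is usually read as "stable"
    }

# Placeholder fragment containing the kinesin ATP-binding motif, not a real HvKIN sequence.
print(protein_summary("MASTKVFAYGQTGSGKTYTMEGVRGDPEKQGIIPRIV"))
```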
As we can see from the results of gene distribution, HvKIN genes are localized on seven chromosomes and show an uneven distribution. Among them, 10 of these HvKIN genes are located on chromosome five, chromosome three is the least distributed, with only four genes, and the other chromosomes have 5-7 kinesin genes distributed on them ( Figure 2). of the family proteins are all negative, this indicates that the HvKINs are all hydrophilic proteins. It follows that all forty-two HvKINs proteins are stable.
As we can see from the results of gene distribution, HvKIN genes are localized on seven chromosomes and show an uneven distribution. Among them, 10 of these HvKIN genes are located on chromosome five, chromosome three is the least distributed, with only four genes, and the other chromosomes have 5-7 kinesin genes distributed on them ( Figure 2).
Phylogenetic Analysis of HvKINs Proteins
To investigate the evolutionary history and functional associations of kinesin family proteins, a maximum likelihood phylogenetic tree was constructed by MEGA X (https://www.megasoftware.net/ access on 9 October 2021), using the full-length amino acid sequence of KINs from 42 members in barley, 48 members in rice and 61 members in Arabidopsis ( Figure 3).
The phylogenetic tree shows that the kinesins of the three different species can be separated into ten subgroups, namely subgroups K1, K4, K5, K7, K8, K10, K11, K12, K13 and, K14 ( Figure 3). Of these, subgroup K14 belongs to the largest subfamily, consisting of 43 kinesins. Subgroup K7 is the second largest family with 37 kinesins. Subgroups K4 and K13 are the smallest subfamilies, consisting of only 5 kinesins. In the largest subfamily, K14, there are 9 barley kinesins, and in the smallest subfamily, K4 and K13 there is 1 barley kinesin. Phylogenetic tree analysis also showed that barley, rice, and Arabidopsis were broadly consistent in kinesin evolution. There was no significant variability between monocotyledons and dicots, fully reflecting plants' conserved nature of kinesins.
Analysis of Gene Structure and Conserved Motif Distribution of Barley HvKINs
To investigate the structural features of KINs in barley, the conserved motifs were identified with MEME software. The results show eleven similar, highly conserved motifs (Figure 4a, Supplementary Figure S1, Supplementary Table S4). Some barley kinesins possess the conserved sequence Motif1 (FAYGQTGSGKT) for the ATP-binding site, and Motif2 (HVPYR), Motif3 (SSRSH) and Motif8 (VDLAGSE) for the microtubule-binding site [40][41][42]. SMART analysis of the conserved motifs revealed that Motif1, Motif2, Motif3 and Motif8 are KIS superfamily structural domains, normally microtubule-dependent molecular motors that play an essential role in intracellular transport of organelles and cell division events [43]. To understand the structural features of barley kinesin genes, we analyzed the gene structure and intron/exon arrangement of HvKINs by downloading the barley whole-genome annotation GFF3/GTF files and using TBtools software (Figure 4b, Supplementary Figure S1). The results showed that the number of introns ranged from 7 to 38, with HvKIN28 containing 38 introns and HvKIN11 only 7. Clearly, the gene structures differ considerably within the kinesin family.
Analysis of Cis-Acting Elements of the HvKINs Gene
To understand the evolutionary features of the HvKIN gene family, we performed a predictive analysis of the sequence 2000 bp upstream of each HvKIN gene transcription start site (Figure 5, Supplementary Figure S2). The HvKIN family contains 3049 cis-acting elements, and a high degree of variation in element composition exists between genes. Two essential elements commonly found in eukaryotes, the CAAT-box and TATA-box, were removed, and the remaining elements were plotted (Figure 5). Among all the cis-acting elements, a large number of hormone-response-related regulatory elements are present, mainly for salicylic acid (TCA-element), gibberellin (GARE-motif and TATC-box), abscisic acid (ABRE), methyl jasmonate (CGTCA-motif and TGACG-motif), and the maize alcohol-soluble protein regulator (O2-site). There are also certain abiotic-stress-responsive elements, including anaerobic (ARE), light-responsive (ATC-motif, TCT-motif, G-box, GT1-motif, Sp1, MRE and Box 4), low-temperature (LTR), and drought (MBS) elements (Figure 5). The expression of HvKIN genes is thus influenced by various signals, suggesting that kinesins play an important function in normal plant growth and development and in maintaining the balance of hormone metabolism in vivo.
Evolutionary Analysis of the HvKIN Genes
Duplication events may illuminate the mechanism of expansion of the HvKIN gene family. Gene families formed by tandem duplication are mainly located on the same chromosome and are similar in sequence and function; genes formed by duplication of chromosomal segments are more distant and are usually on different chromosomes. The ratio of non-synonymous substitution (Ka) to synonymous substitution (Ks) rates was used to infer the strength of the selection constraint and determine whether a protein-coding gene was subject to selection pressure on duplicated gene pairs during evolution. If Ka/Ks > 1, a positive selection effect is considered to exist; if Ka/Ks = 1, a neutral selection effect is assumed; if Ka/Ks < 1, a negative (purifying) selection effect is assumed [44].
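The decision rule described above is simple enough to state directly in code; the thresholding is the standard interpretation, and the small tolerance around 1 is our own illustrative choice rather than a parameter from the paper.

```python
def selection_pressure(ka: float, ks: float, tol: float = 1e-6) -> str:
    """Classify the selection acting on a duplicated gene pair from Ka/Ks."""
    if ks == 0:
        return "undefined (no synonymous substitutions)"
    ratio = ka / ks
    if ratio > 1 + tol:
        return "positive selection"
    if ratio < 1 - tol:
        return "purifying (negative) selection"
    return "neutral evolution"

# Example values in the range typically seen for conserved gene families.
print(selection_pressure(0.12, 0.55))   # purifying (negative) selection
```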
To determine the selection influence on the evolution of the HvKINs, seven pairs of homologous genes were screened in a study of the HvKINs gene family in barley (Figure 6, Supplementary Table S5). Our results show the Ka/Ks values were all < 1, which means HvKINs genes were primarily determined by stabilizing selection.
Structural Analysis of the 3D Protein of the HvKINs
According to the protein 3D structure prediction website (Phyre2, http://www.sbg.bio.ic.ac.uk/~phyre2/html/page.cgi?id=index accessed on 9 March 2022), the forty-two HvKIN proteins have a complex secondary structure and are mainly composed of α-helix and random coil (Figure 7). The percentage of α-helix in the family ranges from 27.14% (HvKIN42) to 73.09% (HvKIN22), and the percentage of random coil ranges from 17.31% (HvKIN28) to 52.56% (HvKIN14) (Figure 7). In addition, β-turns and extended strands are present, with β-turn content ranging from 1.96% (HvKIN27) to 7.62% (HvKIN42) and extended strand content ranging from 5.97% (HvKIN22) to 26.67% (HvKIN42). This indicates that the family proteins are mainly composed of α-helices and irregular coils.
Figure 7. 3D protein structures of the HvKINs.
Tissue-Specific Expression of HvKINs
To preliminarily dissect the functional role of HvKINs in barley development, a spatio-temporal expression profile of HvKINs was constructed by hierarchical clustering of barley expression data covering different developmental stages and tissues (seeds, roots, stem, leaves, flowers, and fruits) retrieved from the IPK database (https://www.ipk-gatersleben.de/en/, accessed on 22 January 2022) (Figure 8). As shown in Figure 8, most of the HvKINs genes were highly expressed in younger flowers (INF1, INF2), and HvKINs were generally more highly expressed in immature tissues than in older ones. The high expression of kinesins in young flowers is consistent with the high rate of cell division in young tissues and the role of kinesins in material transport and mitosis [45]. The majority of the HvKINs did not show tissue-specific expression, indicating that they play a role throughout growth and development. However, some members had tissue-specific expression patterns; for instance, HvKIN8 is specifically highly expressed in developing tillers, HvKIN11 in developing grain, HvKIN25 in the lemma, and HvKIN4 in the lodicule.
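A minimal sketch of the kind of hierarchical clustering used to build such an expression profile is given below. The gene-by-tissue matrix here is random placeholder data standing in for the IPK expression values, so only the workflow, not the resulting clusters, is meaningful.

```python
# Sketch of hierarchical clustering of a gene-by-tissue expression matrix.
# Random placeholder data is used instead of the real IPK expression values.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
genes = [f"HvKIN{i}" for i in range(1, 43)]
tissues = ["root", "stem", "leaf", "INF1", "INF2", "developing_grain"]
expression = rng.lognormal(mean=1.0, sigma=0.8, size=(len(genes), len(tissues)))

# Cluster genes on log-transformed expression, as is common for expression heatmaps.
log_expr = np.log2(expression + 1.0)
tree = linkage(log_expr, method="average", metric="correlation")

# Cut the tree into a handful of putative co-expression groups.
groups = fcluster(tree, t=4, criterion="maxclust")
for g in sorted(set(groups)):
    members = [genes[i] for i in np.where(groups == g)[0]]
    print(f"cluster {g}: {len(members)} genes, e.g. {members[:3]}")
```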
To better understand the potential roles of HvKIN genes in different tissues, the expression levels of the forty-two genes were determined by QRT-PCR in barley using seedling roots (2 cm shoot stage), stems, leaves, unripe fruit (two-week-old fruit) and mature seeds (Figure 9). As shown in Figure 9, most HvKINs are highly expressed in leaves, seedling roots and inflorescences, with lower expression in stems and mature seeds. HvKIN2, HvKIN3, HvKIN5, HvKIN7, HvKIN8, HvKIN9, HvKIN13, HvKIN14, HvKIN18, HvKIN35, HvKIN36, HvKIN38 and HvKIN39 were highly expressed in seedling roots. Kinesins may therefore play an important role in young tissues.
Analysis of HvKINs Expression in Response to ABA and GA3 Treatment
The cis-element analysis identified several ABA- and GA3-responsive elements in the promoters of selected HvKIN genes. We therefore analyzed the transcript levels of the forty-two kinesin genes 12 h after spraying ABA or GA3 onto leaves of plants at 30 days of growth. The results show that half of the kinesins responded to ABA and GA3 treatment (Figure 10, Supplementary Figure S3). Twenty-two of these genes are regulated by both ABA and GA3, but in different ways. After ABA treatment, the expression of 14 HvKINs was significantly up-regulated (>2-fold) and 11 HvKINs were significantly down-regulated (<2-fold). However, upon GA3 treatment, 20 HvKINs were significantly down-regulated and three HvKINs (HvKIN7, HvKIN35 and HvKIN40) were significantly up-regulated. Of the twenty-two HvKINs regulated by both ABA and GA3, ten (HvKIN6, HvKIN8, HvKIN10, HvKIN13, HvKIN20, HvKIN25, HvKIN30, HvKIN32, HvKIN33 and HvKIN34) were significantly down-regulated (<2-fold) under both treatments. In addition, HvKIN7 and HvKIN35 were significantly up-regulated (>2-fold) under both treatments. These results suggest that the genes with significantly altered expression may be involved in plant hormone regulation. Furthermore, according to previous studies, kinesins and ATP have binding sites on microtubules and are jointly involved in the complex life activities of cells (Figure 11) [46].
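The two-fold threshold used above can be written as a simple rule on relative expression. The sketch below applies that rule to invented treatment/control ratios (not the measured qRT-PCR values) to show how genes would be labelled as up- or down-regulated.

```python
# Sketch of the two-fold classification applied to relative expression ratios.
# The ratios below are invented examples, not the measured qRT-PCR values.
import math

def regulation_call(fold_change: float, threshold: float = 2.0) -> str:
    """Label a treatment/control expression ratio as up-, down-, or unregulated."""
    if fold_change >= threshold:
        return "up-regulated"
    if fold_change <= 1.0 / threshold:
        return "down-regulated"
    return "not significantly changed"

example_ratios = {"HvKIN7": 3.1, "HvKIN35": 2.6, "HvKIN10": 0.4, "HvKIN40": 1.3}
for gene, ratio in example_ratios.items():
    print(f"{gene}: fold change {ratio:.1f} (log2 = {math.log2(ratio):+.2f}) -> {regulation_call(ratio)}")
```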
Subcellular Localization of Selected HvKINs
The subcellular localization of selected HvKIN proteins (HvKIN6, HvKIN11, HvKIN30 and HvKIN40) was analyzed to assess possible differences between HvKIN proteins in barley. We used heterologous expression of HvKIN fusion proteins in tobacco to analyze their subcellular localization. HvKIN6, HvKIN11, HvKIN30, and HvKIN40 are mainly localized to the cell membrane and nucleus and may be associated with material transport and cell division in plant cells (Figure 12).
Discussion
Kinesins are important microtubule-based motor proteins involved in the intracellular transport of substances as well as in chromosome segregation and biological signaling during cell division [47][48][49]. Research on plant kinesins lags behind that on animals and fungi, not only because plants have evolved unique kinesins, but also because the number of family members far exceeds that in animals. The detailed expression characteristics and functions of kinesin family genes in barley have remained largely uncharacterized. The barley genome has been sequenced, so the barley kinesin family can now be studied and analyzed systematically [50,51]. Through the retrieval, analysis, and organization of the HvKIN gene family in barley, a total of forty-two HvKINs were identified (Figure 2), and all members contain the KIS (tubulin folding cofactor A (KIESEL)) superfamily structural domain (Figure 4). Since rice has a comparable number of kinesin genes overall, the HvKIN family likely played a significant role in plant evolution. Using the maximum likelihood method, a phylogenetic tree was constructed with 42 members from barley, 48 from rice, and 61 from A. thaliana (Figure 3). The 42 HvKINs were split into ten main branches, which shows that kinesins are conserved between monocotyledons and dicotyledons (Figure 3). The sequences and molecular weights of HvKINs vary considerably, but the structural domains and motif composition are conserved (Figure 4, Supplementary Table S3).
Our results suggest that the majority of HvKINs are scattered near the gene-rich ends of the chromosomes (Figure 2), which is similar to previous observations for other gene families in barley [35]. Gene duplication is one of the main factors behind genome complexity and the rapid expansion and evolution of gene families [52,53]. The chromosomal distribution demonstrates that at least seven pairs of HvKINs have undergone gene duplication (Figure 6), and most of these duplication events correspond to segmental duplication. Only one pair of HvKINs (HvKIN28 and HvKIN29) arose from tandem duplication (Figure 6).
Studies have shown that, in different species, different kinesins have specific functions in plant growth. BR HYPERSENSITIVE 1 belongs to the kinesin-13 subfamily and plays a signaling role in rice development [54]. Distinctive kinesin-14 motors can associate with midzone microtubules to construct mitotic spindles with two convergent poles in Arabidopsis [55]. GmMs1 is a soybean fertility-associated kinesin [56]. MoKin5 and MoKin14 encode conserved kinesin motor proteins that are essential to form and maintain the spindle and to properly nucleate the primary hypha, exhibiting canonical functions in Magnaporthe oryzae during rice infection [57]. In this study, the tissue expression profiles showed significantly higher expression in immature barley fruits compared to mature tissues and organs (Figures 8 and 9). Several kinesins have also been reported to play an important role in early fruit development, for example in apples and cucumbers [58][59][60][61]. The role of kinesins in early watermelon fruit development has likewise been investigated, and most kinesins are expressed at high levels in early watermelon fruit [38]. Therefore, based on the functional analysis of kinesins, kinesins may promote cell division during the stages of fruit development when cell division is vigorous. During cell division, chromosome duplication and rearrangement in the nucleus require kinesins to provide the driving force, leading to increased cell numbers [62,63]. However, it is not yet clear in which processes of plant fruit development kinesins act, and future research should focus on this area.
Plant hormones are a cluster of small signal molecules that have been widely reported to play important roles in plant growth and development. Early studies found that the expression of many genes is related to hormones. GDD1/BC12, a kinesin-like protein, plays an important role in regulating KO2 gene expression levels in the GA biosynthesis pathway to modulate microtubule rearrangement and cell elongation in rice [64]. Barley kinesins may likewise be involved in hormone regulation. We analyzed the 2000 bp sequence upstream of the promoter of each HvKINs gene and found significant differences between individual genes (Figure 5). A large number of hormone-responsive elements were present among the cis-acting elements, with gibberellin (TATC-box) and abscisic acid (ABRE) response elements found in most of the family's genes (Figure 5). Therefore, we investigated the expression levels of the kinesins after treatment with ABA and GA3 in barley (Figure 10). The results showed that the expression of most genes was affected to varying degrees (Figure 10). In addition, HvKIN7 and HvKIN35 were significantly up-regulated (>2-fold) under both treatments, indicating that they play a crucial role in the response to phytohormone treatment (Figure 9). Taken together, this provides a further theoretical basis for the relationship between plant hormones and gene expression. In conclusion, our work provides clues for further investigation of the detailed roles of HvKINs in barley reproductive development and in the response to hormones.
Conclusions
Barley kinesin genes have an important role in barley development, yet the precise roles of HvKIN gene family members in barley had not been elucidated. Here, our genome-wide analysis and characterization of HvKIN genes revealed the physicochemical properties, chromosome locations, phylogeny, gene structures, cis-elements, and expression patterns of these genes. Expression profiling of the HvKINs genes was performed to reveal their tissue specificity and to analyze their potential roles in the response to hormonal stimulation. We found that kinesin expression is high in young plant tissues; therefore, we speculate that kinesins have an important role in early plant development and flowering. Finally, four HvKINs genes (HvKIN6, HvKIN11, HvKIN30, and HvKIN40) exhibited high expression and potential functions in plant cells. Prior to this work, there was no detailed genome-wide analysis of the HvKINs gene family in barley. These findings will aid future investigations into the evolutionary origin of HvKINs as well as functional studies of candidate HvKINs genes for molecular breeding in barley.
|
v3-fos-license
|
2017-05-31T23:13:42.072Z
|
2014-02-05T00:00:00.000
|
2180356
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/1471-2458-14-116",
"pdf_hash": "607703d12bb0c8e17195df24569e0c7722e404fc",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42344",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"sha1": "2fde5d03798ddb94a2ced1de92eb18d05664d2e5",
"year": 2014
}
|
pes2o/s2orc
|
Development and feasibility of a home-based education model for families of children with sickle cell disease
Background Children with sickle cell disease (SCD) commonly have cognitive deficits, even among toddlers. Much medical literature emphasizes disease-based factors to account for these deficits. However, the social environment plays a large role in child development. To address the specific needs of early childhood, a monthly hospital-based education program was initiated to educate parents about child development. Education sessions were poorly attended (20-25%) and deemed unsuccessful. This study describes the development and implementation of a home-based education service to teach parents about SCD, developmental milestones and positive parenting techniques. Methods This was a prospective, single-arm intervention to study the feasibility of a home-based caregiver education program for families with infants and toddlers with SCD. Parents of children aged 0-3 years with SCD from one Midwestern hospital were approached to participate in a home-based program. The program followed the Born to Learn™ curriculum provided through the Parents as Teachers™ National Center. Reminder calls or texts were provided the day before each visit. Results of the first twenty-six months of the program are presented. Results A total of 62% (56 of 91) of families approached agreed to participate; all were African American. The majority of caregivers were single mothers with a high school education or less and whose children had Medicaid for health coverage. The phenotypes of SCD represented in this sample were similar to those in the general SCD population. Over 26 months, 39 families received at least one home visit. Parents of infants (younger than 8 months) were more likely to participate in the home-based education program than parents of older children, (Fisher’s exact test, p < .001). Conclusions For participating families, home-based visits were a feasible method for reinforcing clinic education. About 43% of eligible families participated in the education, a two-fold increase in the poor attendance (20%) for a previous hospital-based program. A home visitation program for parents of infants with SCD could offer an effective approach to helping these children overcome adverse environmental conditions that are compounded by the complexities of a chronic health condition.
Background
In the United States (US), approximately 100,000 people live with sickle cell disease (SCD). The majority are African American [1]. SCD is an inherited blood disorder that causes red blood cells to be brittle, sticky and crescent shaped. Sickled cells have a shorter life span than normal red blood cells, and affected persons have chronic anemia. The abnormal cells are more likely to become trapped in blood vessels, causing vaso-occlusion and pain, the most common morbidity associated with the disease [2]. Other complications include cerebrovascular disease (stroke and cerebral infarcts), splenic sequestration (blood pools in the spleen), dactylitis (swelling of the hands and feet), priapism (prolonged erection), acute chest syndrome and necrosis of the hip [3,4].
There are several forms of SCD that vary in prognosis and severity; the most prevalent and severe is hemoglobin SS (HbSS). In the US, an estimated 1 in 500 African American live births have the disease [1]. Additionally, approximately 1 out of 12 African Americans carry S trait. Therefore, SCD is one of the most common genetic disorders affecting people in the US, with approximately 3.4 million carrying the trait.
SCD is associated with an increased risk for cognitive deficits that can impact academic performance [5]. Compared to children with normal hemoglobin, children with SCD are far more likely to have a cerebrovascular accident (CVA) [6]. Approximately 40% of children with HbSS will have a silent cerebral infarct [7,8] or an overt stroke by adulthood [6,7,9]. Compared to children with no brain abnormalities (as confirmed by MRI examination), children with a history of CVA have significantly lower full scale intelligence quotient (IQ), verbal IQ, performance IQ and math achievement [10]. Over half of children who have had a silent infarct will require special services in school or be retained a grade level, indicating poor academic achievement and more subtle cognitive impairment [11]. However, developmental delay cannot be attributed solely to CVAs. Full scale IQ testing has reported that children with SCD and no MRI abnormalities have an IQ between 85 and 90 [10]. Furthermore, over a quarter of children with SCD and no cerebral insult required special services at school or needed to repeat a grade [11,12].
Developmental delay for children with SCD has been observed as young as nine months of age [13,14]. By 24 months, nearly 40% of children with SCD are deemed to be at risk for clinically significant developmental delay [15]. By three to four years of age, up to 50% of children with SCD have delays [16]. Although developmental delay in children with SCD has been documented in several studies, the cause of delay is not clear. SCD alone does not account for poor academic outcomes [17].
Disease severity and environmental risk factors combine to influence the outcomes of children with SCD. A recent model of school-aged children with SCD showed that the educational status of a parent actually contributed more to a child's full scale IQ than the presence of a silent cerebral infarct [18].
Children with SCD face more environmental challenges than most. Many children who suffer the physical effects of SCD also live in dangerous, impoverished neighborhoods and have limited access to educational opportunities [19]. Children living in poverty are at an increased risk for deficits in cognition, language and school readiness [17,20]. By three years of age, children growing up in low-income households have smaller vocabularies than their more advantaged peers [21]. Language delays severely impact children's ability to participate in school and as a result, children in poverty have lower academic achievement [20]. Children growing up in poverty often have limited exposure to materials, experiences, and environments that can influence the achievement of developmental milestones and have a significant positive impact on school readiness [22][23][24][25]. The quality of the home environment, including parenting techniques, has been shown to mediate the influence of the neighborhood and the child's cognitive abilities as early as age three [26,27].
Previous interventions
The local SCD program receives an average of 25-30 newborns each year. We initiated a monthly, Saturday morning hospital-based parent education program to address educational needs of families that were new to the clinic. Families with children under 36 months of age were invited to attend at clinic visits, mailed letters and called to confirm attendance if they had indicated interest. The sessions were held if there was a minimum of three confirmed attendees. The total number of children (newborn to three years) for that period was 100-120. Over a period of 21 months, 25 families attended one education session. Thus, only 20-25% of the families of children in that age group received one educational session. However, nine sessions had no attendees and half had only one family despite reminder phone calls with confirmed attendance. The low rate of attendance demonstrated that the hospital-based, Saturday parent education and developmental screening was not feasible for this population.
Current intervention
Prior to the present intervention, few of the young children with SCD treated at our SCD clinic were receiving early intervention or parent education services such as Parents as Teachers™, despite eligibility. Parents as Teachers™ is a home-based parent education curriculum that aims to provide information, support and encouragement to help children reach developmental milestones during the first few years of life. Parents of children with SCD in our center were unaware of available resources and were exposed to a high number of daily stressors including poverty, highly mobile households, overly crowded homes and community violence. Among pre-school-aged children with SCD, psychosocial factors may have a greater impact on early childhood development than sickle cell disease-related factors [16]. In order to ameliorate these challenges among the families of infants/toddlers with SCD, we proposed a home-based parent education program to reinforce information regarding SCD provided in the clinic as well as address developmental milestones.
We implemented a home-based education model that might eliminate many of the barriers to participation in a hospital-based educational program for parents of children with SCD. A home visitation model would enable the clinic team to better determine factors related to the home environments that could affect development and the ability of the caregivers to respond to the needs of their children with SCD. The purpose of the current study was to determine if a home based parent education program targeting parenting skills and typical developmental milestones was feasible as defined by 50% consent rate for those recruited for the study and at least 50% completion of scheduled home visits.
Methods
The current study was a prospective, single arm intervention. Approval was obtained from the Institutional Review Board of Washington University School of Medicine. Participants were recruited from the local SCD program. At our clinic, newborns are initially seen at about two months of age and return appointments are approximately three months apart. Older children may be seen every four to six months.
Inclusion criteria
All participants had a confirmed diagnosis of SCD and were active patients at the clinic. Children were between the ages of 3-36 months at the time of recruitment, lived within 30 miles of the hospital and caregivers spoke English fluently. The parent/primary caregiver provided consent for participation.
Exclusion criteria
Patient/caregiver dyads were excluded if the primary caregiver did not have stable housing.
Recruitment
Caregivers of all eligible children were approached during regularly scheduled visits to the clinic. Families of newborns were approached for the current study after their second or third clinic visit, typically when the child was between four and six months of age. Older children and their caregivers were approached at their first visit following the initiation of the study. Caregivers were offered the opportunity to participate in an accredited Parents as Teachers™ (PAT) Born to Learn curriculum provided by an occupational therapist who was certified as a PAT provider and educated about risks associated with SCD.
Retention
Upon consent, a date was scheduled for the educator to visit the family's home. Families received reminder phone calls the day before their scheduled visit, and visits were rescheduled as needed. During home visits, the educator addressed caregiver concerns regarding SCD and development. Caregiver education focused on developmental milestones and age-appropriate skill-learning activities during infancy and toddlerhood that might mediate some of these effects.
Caregivers were encouraged to participate in play and reading to their child during the visit and were asked to bring up any concerns. Most visits lasted approximately one hour. Every visit incorporated an age-specific activity to challenge emerging skills, handouts about development and a book for the child to keep. Books were donated to the program.
Parents as Teachers™ Born to Learn
Parents as Teachers (PAT) is an internationally recognized educational curriculum for children 0-36 months and their caregivers that was developed to teach parents skills to help them engage with their child and increase awareness of developmental milestones (www.parentsasteachers.org). The PAT program has previously been shown to increase school readiness [28]. PAT utilizes a home-based visitation method in which a trained parent educator goes to the home at least once a month. The curriculum provides activities and handouts based on the child's age. The parent educator addresses topics relevant to development at the child's specific age and discusses emerging skills for the parent and child to work on in the coming weeks. The parent educator also assists families in getting connected with local community organizations and available resources.
Educational materials
The parent educator selected additional handouts as appropriate for each family's needs. Families reviewed SCD information through handouts, flipcharts and videos. Handouts were created by the team to help families understand how to manage physical activities, changing seasons and cold weather with a child with SCD. Additional support materials were used as needed such as the Act Early program provided from the Centers for Disease Control and Prevention (CDC) [29]. The CDC provides informational brochures, handouts and books about developmental milestones that are available at no cost through their website.
Outcome measures
Demographic information was collected from the primary caregiver and medical records upon enrollment in the current study. Feasibility was determined by the acceptance (families that were approached for participation compared to the number that consented) and the number who actually participated in a home visit. The number of scheduled visits completed was also recorded. Participating families were asked to complete a satisfaction survey after completing a minimum of four home visits. Field notes were taken following each home visit. Notes included documentation of the handouts that were provided, who participated in the visit, topics discussed and the child's current level of functioning in intellectual, language, motor and social-emotional development.
Results
All families were African American. As shown in Table 1, the majority of families were living at or near poverty as indicated by the percent (82%) that received health care coverage via Medicaid. One fifth of families who participated had three or more children under the age of five years living in the home.
Consented vs. Non-consented families
There was no significant difference in sickle cell phenotype between those who participated in PAT and those who chose not to participate, (Hb SS, 50% vs. 58%; Mann-Whitney U, p > .2). There was also no significant difference in the insurance coverage between those who participated in PAT and those who did not, (Medicaid, 77% vs. 71%; Mann-Whitney U, p > .9). Similar distribution of SCD phenotype and economic status (as measured by insurance provider) indicate that non-participants did not vary significantly from families who participated.
Parents of younger children were more likely to schedule a home visit
All children who met inclusion criteria were approached (N = 91). Over a period of 26 months, 56 families with a total of 58 children (64% of those eligible) consented to participate. Of those 58 children, a visit was scheduled for 39 (70%). Table 2 indicates that significantly more families consented if children were 2-7 months of age than if children were 8-36 months of age (77% vs. 62%, respectively; Fisher's exact test p < 0.05). For those who consented, significantly more visits were scheduled if the child was seven months of age or younger than if the child was older than seven months (87% vs. 58%, respectively; Fisher's exact test p < 0.001).
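The age-group comparisons above rely on Fisher's exact test applied to a 2 × 2 table of families who did and did not consent (or schedule a visit). The sketch below shows the computation; the cell counts are hypothetical placeholders, since the exact per-group counts are reported in Table 2 rather than reproduced here.

```python
# Sketch of the Fisher's exact test used for the age-group comparisons.
# The counts below are hypothetical placeholders, not the Table 2 cell counts.
from scipy.stats import fisher_exact

# Rows: child aged 2-7 months vs. 8-36 months at recruitment.
# Columns: consented vs. did not consent.
table = [[27, 8],   # younger group (hypothetical counts)
         [29, 18]]  # older group (hypothetical counts)

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```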
Thirty-nine families participated in at least one home visit. Sixteen families (41%) had between 1-5 visits, thirteen (33%) had between 6-12 and ten (26%) families had over 13 visits to the home. Over this time, nine children aged out of the program, three parents scheduled in person but never answered the phone to confirm, and two have been lost to follow up because they moved. Of those that completed a visit, at least 50% depended on other forms of state or government assistance such as a supplemental nutrition program, food stamps or Social Security Income. For families that were lost, the cause was most often that the phone number had changed and the family could not be contacted. A social worker was contacted to help locate families for medical care. Over the past 26 months, 15-24 families actively participated each month. The age of children of families that did not consent was obtained through retrospective analysis of the patients' appointment records. When the program was initiated, families of older children were called because clinic visits are less frequent.
Evaluation of PAT program
Participating families were asked to complete a satisfaction survey of the home visitation program after participating in the program for at least four visits. The parent educator assured them that evaluations were anonymous and they could mail them in or give them to the nurse practitioner in the clinic. In one circumstance, the parent struggled with low literacy and the parent educator offered to read the statements aloud and write in answers for them. Caregivers were asked to check the box that describes how they feel on a Likert scale of one to five ranging from strongly agree to strongly disagree. Of the 23 families who completed more than four visits, 13 evaluated the program. All reported that they agree or strongly agree that they like PAT visits and that they strongly agree that PAT visits helped the caregiver understand development and engage with their child.
There were two open-ended questions asking what aspect of PAT they liked best and if they could make changes, what would they be. No one recommended changes.
Qualitative answers to evaluation
One parent of a 20 month old stated in her evaluation "I read to her because you kept telling me to. And you know, she brings me books. She likes it". When this child was 8 months old the mom was initially hesitant to read to her infant because she did not like to read and she did not believe that her daughter would enjoy it. Another parent stated, "I like having the visits. She (parent educator) gives me ideas how to play with my child". One mom of a 10 month old said "I feel better now that I understand more about SCD. I'm not as scared anymore".
Recruitment and program retention
Recruitment was continuous throughout the study period; therefore the number of visits per family is not reflective of the number of families that are currently active in the program. For the 36% of families that elected not to participate in this free program, most stated that they did not feel that they had time, did not have consistent housing, or did not feel that they needed the services. During the study period, nine children aged out of the program (> 36 months of age) and could no longer receive visits. Additionally, four families requested to stop services, and three were lost to follow up.
The most common barrier was maintaining contact with families. When the family could not be reached to confirm, visits were not completed. Visits were rescheduled often; the most common reasons were that the child was hospitalized or a change in the caregivers' schedule. During the first six months of the program, only about 50% of scheduled visits were completed. Initially, all calls were made from an office phone affiliated with the hospital or university. Beginning in the seventh month of the program, we incorporated a dedicated cell phone to contact families. In the one-month period prior to acquiring the cell phone, 9 of 18 scheduled visits were completed (50%). That rate was representative of the number of scheduled visits completed when using the university-based landline. A cell phone was obtained under the name "Sickle Cell" with texting capabilities in August 2011. The rate of adherence to scheduled sessions increased from 50% to 79% after inclusion of the cell phone to contact families prior to the home visits (Figure 1). Adherence remained at 77.3% for the remainder of the study (months 8-24).
Home visits
Qualitative observance of parenting practices revealed at least three common needs across many of the families, including lack of appropriate toys, failure to read/talk to the child, and inability to deal with challenging child behaviors during mealtime and bedtime. During home visits, strategies were discussed with caregivers about how they could engage with their child using pictures, books, or common items around the home. Table 3 lists some of the outcomes observed from these discussions. Examples of ways to play with items around the home, such as coffee cans, juice bottles or paper plates were demonstrated. Parents also had opportunities at each visit to discuss concerns they might have and referrals were made to community resources to address any urgent needs the family may have such as food, birth control, health care, lead testing, and employment. These discussions helped build rapport and trust between the provider and the family.
Home visits and relation to sickle cell education
The parent educator was trained and educated on the genetic inheritance of SCD, morbidities associated with the disease and their impact on child development. The parent educator had the hospital version of the parent education program available with her at all times to review if families expressed need. The parent educator was able to reinforce training provided during visits to the sickle cell clinic such as how to palpate for an enlarged spleen, what temperature to monitor for and how to identify dactylitis. Several caregivers had questions regarding medications such as penicillin and folic acid and what they were for. Parents were directed to call the SCD clinic with any medical questions or concerns.
Discussion
This study provides preliminary data indicating that a home-based program can be a feasible method for education of parents of infants with SCD. Given the prevalence of SCD and the risks for significant delay, a reliable method for providing early intervention to families of children with SCD is greatly needed [13,15,[30][31][32][33][34]. Providing education at the hospital regarding parenting techniques and developmental milestones was previously not successful because of barriers concerning transportation and work schedules. A home-based program to provide services to these families may be more successful and improve outcomes for these children.
Recruitment and retention were primary concerns when initiating this pilot program. Since enrollment was continuous, families initiated visits at different times and consequently have varying numbers of visits to date. Parents of younger infants were more likely to commit to the parenting program. Possibly, these parents are more open to suggestions and education because they are eager to maximize their child's health and development in the face of a newly diagnosed chronic disease. Initially, visits were scheduled with families in advance and the parent educator went to the home at the scheduled time. Unfortunately, there was a high incidence of uncompleted visits due to families not being home or forgetting their scheduled appointment. Reminder phone calls the day prior to a visit increased the completion rate substantially, but there was still significant difficulty communicating with some families, particularly younger parents. Consequently, text message reminders were implemented for parents that indicated that texting was a convenient form of communication. Using a combination of reminder phone calls and texting greatly improved retention, particularly for younger caregivers who preferred texting to phone calls or had unlimited texting plans but minimal or no minutes available for phone calls. With this system, the parent educator did not go to the home unless a family confirmed the visit and services were terminated if a family was not home for three scheduled and confirmed visits.
While several studies have documented the developmental delay of young children with SCD, few, if any, interventions have been documented to ameliorate these challenges. Home based interventions enable providers to connect with caregivers and identify aspects of their environment that can be used for learning and describe these benefits individually for the child within their natural environment. A formal parenting program fills a gap in our current education plan for the parents of children with SCD, addressing both the medical and psychosocial needs of the children. Most of the families that agreed to participate in the program scheduled and completed multiple visits, and many of them remained active in the program.
In our observation, families of children with SCD often struggle with many challenges that they do not identify or reveal within a clinic visit. We observed that many caregivers have not had the opportunity to learn parenting strategies and they appreciate the information, encouragement and praise for their actions such as providing support and encouragement when family members stop smoking in the home or acknowledging family members engaging the child in conversation or interactive play. Further, caregivers seemed to appreciate having their challenges recognized and being given tools to advocate for themselves and their children. It is of utmost importance that providers are trained in cultural sensitivity and communication to adequately meet these families' needs.
Caregivers verbalized that they did not understand the purpose of medications or various treatments, and many admitted to not being adherent to suggestions. The Health Belief Model describes the importance of considering one's understanding of a health related issue and adherence with medical advice [35]. This model applies to our population and helps to explain caregiver insecurities or disinterest in a parent education program. Possibly, many parents do want the best for their child, but do not perceive that there is serious risk for their child, or they may not understand that the child may have challenges that are necessary to address. Additionally, caregivers may not fully trust people affiliated with the medical community. Lack of understanding, perception of risk or distrust may affect caregivers' willingness to communicate and participate in a parent education program.
The cost of this program included the salary of the primary provider, which in this case was an occupational therapist. It would be possible for future programs to use alternative providers such as child life specialists, social workers, or those with qualified training in child development and SCD. Associated costs to the implementation of this program included mileage for the provider, materials for home visits and training in the PAT™ curriculum. Additionally, in this sample we identified that families of newborns were more likely to be active participants in this program and it is possible that a more targeted program could be more cost effective. Future directions can include evaluation of the impact of the program on child development, parental knowledge of SCD and health care utilization.
Limitations
This pilot study had several limitations. As a single center, single arm intervention, generalizability is limited. However, for our purpose, we learned that families are interested in early childhood and parenting and are willing to welcome an educator into their homes. The satisfaction surveys were given to families following a home visit, which may have biased caregivers to answer more positively since many completed them immediately. Families were encouraged to keep evaluations anonymous and fold them up when they were completed. Another limitation of this program was that it was not coordinated with the school system. We chose to have a private PAT provider to ensure that each family would be able to receive services regardless of school district staffing or budget restrictions. This method was effective in providing services but required more time to help families get involved with other community organizations. Caregivers who choose not to participate in home-based parenting interventions can be provided information about local community or online resources for education and support. Despite limitations, this pilot study demonstrated that in our location, families are interested in participating in a home-based parent education program.
Conclusions
Children with SCD are a vulnerable population. With a home-based program, we were not only able to achieve a two-fold increase in a single SCD education session but were also able to provide a monthly intervention. The ongoing visits facilitated the development of a trusting relationship that permitted the parent educator to identify barriers to developmental progress previously unrecognized in the clinic. Based on observations and discussions with parents during the study, many of the families who care for a child with SCD struggle with understanding typical developmental milestones and lack knowledge of activities that encourage and challenge the child to meet these goals. Home-based services that address parenting skills and therapeutic activity along with repetition of concerns specific for SCD are a feasible way to reach this population. A dedicated cellular phone increased retention by providing reminder phone calls and text messages. The convenient communication opportunities from text messaging were well received. Providing skilled educational and supportive services in the home is also beneficial by helping parents make modifications to the home environment to increase safety and accessibility to appropriate activities by the child. More research should be conducted to determine the effects and outcomes of children receiving this intervention. A home evaluation of parent interaction, environment, and child development at baseline and following the intervention would objectively demonstrate the outcomes of providing in home services to this population.
|
v3-fos-license
|
2024-04-04T05:08:48.110Z
|
2024-04-02T00:00:00.000
|
268872673
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "8d84745d08f0a275c476899ae946c8852fd21572",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42346",
"s2fieldsofstudy": [
"Economics",
"Mathematics"
],
"sha1": "8d84745d08f0a275c476899ae946c8852fd21572",
"year": 2024
}
|
pes2o/s2orc
|
The Demographic-Wealth model for cliodynamics
Cliodynamics is still a relatively new research area with the purpose of investigating and modelling historical processes. One of its first important mathematical models was proposed by Turchin and called the “Demographic-Fiscal Model” (DFM). This DFM was one of the first and is one of a few models that link population with state dynamics. In this work, we propose a possible alternative to the classical Turchin DFM, which contributes to further model development and comparison essential for the field of cliodynamics. Our “Demographic-Wealth Model” (DWM) also aims to model the link between population and state dynamics but makes different modelling assumptions, particularly about the type of possible taxation. As an important contribution, we employ tools from nonlinear dynamics, e.g., existence theory for periodic orbits as well as analytical and numerical bifurcation analysis, to analyze the DWM. We believe that these tools can also be helpful for many other current and future models in cliodynamics. One particular focus of our analysis is the occurrence of Hopf bifurcations. Therefore, a detailed analysis is developed regarding equilibria and their possible bifurcations. Especially noticeable is the behavior of the so-called coexistence point. While changing different parameters, a variety of Hopf bifurcations occur. In addition, it is indicated what role Hopf bifurcations may play in the interplay between population and state dynamics. There are critical values of different parameters that yield periodic behavior and limit cycles when exceeded, similar to the “paradox of enrichment” known in ecology. This means that the DWM provides one possible setup to explain, in a simple format, the existence of secular cycles, which have been observed in historical data. In summary, our model aims to balance simplicity, links to the underlying processes, and the goal of representing secular cycles.
Introduction and modelling background
Although Cliodynamics [1] is a rather new research area, there are already several interesting theories regarding the interplay of state and population dynamics. To derive a model, we first briefly review and examine basic theories of state and population interplay to illustrate the multitude of possible approaches that have been proposed. In fact, this diversity of approaches justifies trying out different mathematical models, which is the route taken in this work.
We only review theories briefly to indicate their diversity and complementary aspects. Turchin states that empires typically pass through three different stages during their existence [2, p. 133]:
(T1). Polity formation and ethnogenesis, accompanied by initial expansion within an ethnically similar substrate.
(T2).Expansion to peak size, in the process acquiring a multiethnic character.
Although (T1)-(T3) are theoretical assumptions that could be used as the basis for a mathematical model of the rise-and-fall of states, there are certainly many nuanced factors that one may want to take into account for this process. According to Khaldun [3], there are two major causes of state collapse, ideological and economic. The corresponding theory, [3, p. 355], can be roughly summarised by the following statements:
(K1). Recently established states are moderate in expenditure and just in administration, resulting in light taxation.
(K3).Prosperity leads to increased spending to maintain wealth.
(K4).Higher wealth demands higher expenditures for state security and bureaucracy.
(K6).Maintaining prosperity without population growth leads to higher taxation and exploitation of the residents.
(K7).Higher taxation may ruin the economy and eventually leads to famines, rebellion or political unrest.
Observe that the progression of events (K1)-(K7) already gives quite a fine-grained view of secular cycles. Yet, the more economically focused explanation by Khaldun is just one possibility to explain the detailed cycle of rise-and-fall of states and populations. Another, complementary theory is described by Goldstone [4, p. 24]. It centers around the idea that population growth causes social crisis indirectly, affecting social institutions, which in turn affect social stability. This means that if the population grows in excess of the productivity gains of the land, there are multiple effects on social institutions:
(G1). Excessive population growth leads to inflation, which may prevent or reduce tax revenues due to economic uncertainty.
(G2).Increased population leads to expansion of armies and rising real costs.
(G4).Increased population leads to expansion of youth cohorts, often impacted by lack of employment opportunities.
(G5).Rising costs and low revenues lead to high taxation.
The final outcome of this theory is similar to that of Khaldun: state bankruptcy, loss of military control and rebellion. Yet another variant of this process can be based on considerations by Olson [5], which form a social-economic hybrid explanation. The theory states that rapid economic growth means rapid economic change, which entails social dislocation. Both gainers and losers from economic growth can be destabilizing. More precisely, the steps considered by Olson are:
(O1). Economic growth increases the number of "nouveaux riches".
(O2). Economic growth also creates a large number of "nouveaux pauvres".
(O3). The "nouveaux riches" can use the gained power to change the social and political order in their interest.
(O4). The "nouveaux pauvres" will be more resentful of their poverty than those who have known nothing else.
(O5). Individuals gain economic power incompatible with their positions in the previous social and political order.
(O6). That new power can be used in their own interest to change the political or social order.
Based on (O1)-(O6), and observing that the economic, social and political systems are clearly interdependent, a quick change in one part may lead to instability in other parts of society. This can lead to a change in the social and political order that is suited to the new distribution of economic power. But as the growth is very rapid, the path to this new equilibrium may be very unstable [5, p. 533]. Huntington [6] concurred with Olson's theory and asserts that fast economic growth, and the rapid political change that often follows heightened expectations, are destabilizing factors, especially if the inclusion of new political participants into the new political system is too slow. Implicitly, both Olson and Huntington are arguing that fast growth in market dynamics leads to dis-integrative behavior when the political or administrative power grows too slowly or is low. This leads to destabilisation of the existing system.
The last theory we want to mention briefly here is described by Olson [7] and Collins [8]. It deals with the effects of increased population size. Although an increased population can improve the geopolitical situation, because a larger pool of soldiers and labor is available, it also raises the per capita military cost. Therefore, a bigger population is only an advantage if the "competitor's" situation regarding population and technology remains the same. Also, despite the possibility that an increased population can be converted into more state power, the state also has to incorporate and protect the expanded population. A larger population also means that more living space is needed, which makes the state's territory more difficult and more expensive to protect and to rule. This can again lead to destabilisation of the state with a possible breakdown.
In summary, the various theories in cliodynamics that could be used as a basis to explain the rise-and-fall of states are highly complex and often describe complementary potential factors for state rise and state collapse. Once one goes beyond very basic macroscopic (in space and time) principles such as (T1)-(T3), there are various options available that may be plausible and complementary. The examples we have given here regarding (K1)-(K7), (G1)-(G5) and (O1)-(O6) are certainly not exhaustive. Furthermore, a purely data-analytic verification of certain theories can be complicated, as the systems are highly complex and data has to be gathered on extremely long time scales to obtain reliable statements about human behaviour during the rise-and-fall of states. Such a situation is not uncommon; many other sciences, e.g., climate science and ecology, face a similar dilemma. Hence, using mathematical modelling and simulation tools can greatly help to explore the space of possibilities and explanations. This is the viewpoint we take in this paper.
However, this triggers the question: how can we mathematically model the theories recalled above? Suppose we agree on a macroscopic model including only the main observables, which already discards quite substantial issues such as complex network coupling between agents. Then we should start by listing the main time-dependent observables and asking how they depend on the economic, social, and other factors outlined in the theories above. Table 1 shows some possible macroscopic variables that one might consider and how they are influenced by certain events. The table clearly shows that it is already extremely difficult to decide (a) which macro-variables to select and (b) how to even model their qualitative changes for certain events. For example, suppose we have a change in political power. What would this entail? This is evidently unclear and depends very much on the historical and political circumstances. Therefore, many existing mathematical models in cliodynamics have taken the approach of limiting the dynamics to a very small set of macro-observables such as population size and state wealth. Although this is very likely overly simplistic, it at least gives an idea of which effects are possible. Therefore, we have to keep in mind that validating effects against limited historical data will in most cases be insufficient to directly match the model with a particular situation. Nevertheless, starting with simple models can clearly help us to understand certain patterns better and slowly improve our understanding of what the crucial ingredients of a more precise theory should be.
In this work, we start with the "Demographic-Fiscal Model" (DFM) introduced by Turchin and then use the modelling considerations from this introduction to develop a variant of this model, which we call the "Demographic-Wealth Model" (DWM). We emphasize that our approach of proposing another model is simply motivated by the fact that the social, political and economic theories behind the initial models proposed in cliodynamics are extremely diverse. We want to reflect this diversity also on the side of the mathematical models, and this motivated us to introduce the DWM. In the main technical part of this work, we use mathematical tools from nonlinear dynamics to analyze the DWM in more detail. We believe that these tools can also be useful for other models in cliodynamics to explore modelling options and parameter spaces.
Derivation of the DFM
The "Demographic-Fiscal model" (DFM) was derived by Peter Turchin in 2003 [2].Turchin pays attention to a particular possible correlation, a feedback effect of political instability on Table 1.This table lists a possible, yet certainly very much up to debate, attempt to identify some time-dependent macro state variables (leftmost column) and how they might depend upon certain observed processes/events such as population growth, hitting a situation close to carrying capacity, high state costs, rebellions/ wars, changes in political power, etc. Signs indicate, whether the event/process might positively (+) or negatively (-) change the dynamics of the macro-variables.We clearly observe that there could be many many more possible variables and that already the positive/negative influence modelling is difficult, which does not even bring up the matter of possible functional form relationships to express the model precisely.population dynamics.He concludes that political instability can negatively affect both demographic rates and the productive capacity of the society, see also [9].The goal of the DFM is to derive a model in which population dynamics is an endogenous process, not only consisting of the link between population growth and state breakdown.Turchin considers the economic factor of state decline and mainly uses two theories, the ones by Khaldun and Goldstone, presented briefly in Section 1.The mathematical model consists of two variables, the population density N(t) and the accumulated state resources S(t), measured in grain,
where r, ρ_0, β > 0 are parameters. The functional form of k(S) implies that a strong state has a positive effect on population dynamics. More precisely, the "carrying capacity" is an increasing function of S. As k cannot increase without bound, because at some point all available land and space is used and maximum productivity is reached, there has to be a bound k_max. This yields

k(S) = k_0 + c S/(s_0 + S).

Here k_0 is the carrying capacity of a stateless population, c = k_max − k_0 is the maximum possible gain in increasing k, and s_0 indicates how the improvement in k depends on S.
Analysis of the DFM
The analysis provided below mainly deals with the dynamics and the stationary states of the DFM, as well as with important consequences of the derivation and the analysis provided by Turchin. An important condition for the analysis of model (1) is that the state is not allowed to get into debt, leading to the condition S ≥ 0. With all parameters being positive, we first take a look at possible stationary states of the system, i.e., points where dN/dt = 0 and dS/dt = 0. Aside from the trivial state N = 0 where no population is present, it is clear that system (1) in its natural form cannot have any additional equilibrium points. But with the criterion S ≥ 0, which we can enforce by setting the time derivative of S equal to zero once S reaches zero, an additional stationary state can be found, namely (N*, S*) = (k_0, 0), which is locally stable. The equilibrium only exists because it is assumed that as soon as S would become negative after some time t*, the state collapses, so S is set to 0. Setting S = 0 results in the reduced system dN/dt = r N (1 − N/k_0), dS/dt = 0 after the time t*. Looking at the first equation, dN/dt = 0 yields the stationary state (N*, S*) = (k_0, 0), which is obviously stable. The outcome of Turchin's model in general is that once a state has risen it is determined to vanish after a certain time span. The only possibility for another cycle to form is a perturbation of the system, so either N has to be decreased or S has to be increased.
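To make the rise-and-collapse dynamics described above concrete, here is a minimal Python sketch that integrates system (1) with the no-debt constraint S ≥ 0 enforced by clamping. All numerical parameter values are illustrative placeholders chosen for this sketch and are not taken from Turchin's book or from this paper.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative (hypothetical) DFM parameters.
    r, rho0, beta = 0.02, 1.0, 0.25      # population growth, taxation, expenditure rate
    k0, c, s0 = 1.0, 3.0, 10.0           # carrying-capacity parameters of k(S)

    def k(S):
        # carrying capacity: increasing and saturating in the state resources S
        return k0 + c * S / (s0 + S)

    def dfm(t, y):
        N, S = y
        surplus = N * (1.0 - N / k(S))
        dN = r * surplus
        dS = rho0 * surplus - beta * N
        if S <= 0.0 and dS < 0.0:        # no-debt condition: S is not allowed to go negative
            dS = 0.0
        return [dN, dS]

    sol = solve_ivp(dfm, (0.0, 1000.0), [0.2, 0.0], max_step=0.5)
    print("final population N:", sol.y[0, -1])   # relaxes toward k0 once the state has collapsed
    print("final state resources S:", sol.y[1, -1])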
Discussion of the DFM
In the derivation of the dynamics for N it is assumed that the per capita rate of population increase is a linear function of the per capita rate of surplus production, yielding the logistic model for population growth. For the dynamics of the state, the same approach is used with different parameters, which is a reasonable modelling choice.
We take a look at the results of the DFM in Figs 1 and 2. Take β = 0.25 (blue line) as the original expenditure rate. For values greater than 0.25 (red line), which means more expenditures for the state, the outcome is negative for both the state and the population: higher expenditures lead to a shorter and less wealthy lifetime of the state, while the population approaches its equilibrium k_0 sooner. As an alternative, one might want to model the case that a higher expenditure rate should be beneficial for the population. For values smaller than 0.25 (green and black line) the outcome of the system is more beneficial for both the state and the population. In the case of the state this makes sense, as lower expenditures yield a higher surplus, but one might want another possible outcome for the population. The extreme case β = 0 would lead to the situation where the population approaches its maximum k_max and the state never collapses. Hence, the model takes a quite stabilizing view of low state expenditure for the population. Yet, one might also want to consider alternative models, where the feedback between expenditure and population size is different.
Another important effect in the DFM is the modelling of the response from S to N. The feedback from S to N through k(S) is positive, increasing the carrying capacity for the population. This is definitely a valid approach, but alternatively one could aim to model the negative effect of excessive taxation, which might be bad for the population, especially if the taxes are far too high or if they are counter-productive due to their structure. In other words, increasing the tax rate may only be beneficial for the population up to a specific value. Examining system (1) for different values of ρ_0, it can be seen that an increase of the tax rate is beneficial for both the state and the population. Indeed, the population will at some point break down again to its equilibrium and the state will also collapse at some point, but in the meantime both the population and the state reach a higher level and last longer. This illustrates the different modelling choices and interpretations that are put into the DFM as well as the wide variety of possible approaches to modelling taxation [10].
An extensive analysis of the DFM has also been carried out by Maini [11]. To summarize, the main outcome of the DFM is a stateless society, where the population stays at its starting value k_0. In other words, the long-time outcome of the system is the starting point of the system, which is still reasonable if one is interested in the transient dynamics and uses re-initialization of the system. Yet, one could also aim for alternative models, where periodicity is immediately built in to explain the recurring theme of rise-and-fall of states. In particular, one may ask whether there is a simple alternative model with slightly different assumptions that leads to periodicity.
Model derivation
The main difference in the interpretation of the model compared to the DFM is the state's role in it. In the DFM the state's surplus is determined through revenues and expenditures: the taxes collected on the one hand, and the money that has to be spent in order to maintain the state's infrastructure with growing population numbers on the other. In our "Demographic-Wealth model" (DWM) the state's surplus/wealth is measured in wealth gain and wealth loss, and the gains are mainly determined by two aspects: • Taxes that are collected from the population.
• Wealth that is generated with existing surplus, for example land gain through warfare, trade or strategic investments.
In addition, the wealth loss mainly acts on the level of the state's wealth. The more wealth the state has (more money or more land, and therefore attracting more attention), the more expenditures it has to make in order to secure this wealth (against attacks and land loss, and for maintaining infrastructure), similar to the theory of Olson. Starting with the dynamics for N, as in the DFM, a logistic growth for the population is assumed, with r being the intrinsic rate of population growth, so in absence of the state, the population will grow until its "carrying capacity" k. The carrying capacity k is a functional response to the state's wealth, meaning that more wealth and therefore more land and financial possibilities lead to more space and resources to live. The parameter k_0 is the carrying capacity in absence of the state and c determines the dependence of the carrying capacity on the change in the state's wealth. The difference to the functional k(S) in the DFM comes from the interpretation of S as the wealth of the state (land gain). A new part of the dynamics of N is a negative feedback effect from S to N, inspired by Olson. A state that is growing in wealth has at some point a negative influence on population numbers; for example, one may consider the scenarios: • Growing wealth leads to growing expenditures which lead to exploitation of the population.
• Growing wealth leads to more warfare and therefore to a higher death rate or emigration.
In addition, with growing population and only a limited amount of food and living space available and taxes that have to be paid, a growing fraction of the population cannot afford living in the state. So, there will be a growing fraction of the population that leaves the state or dies. On the other hand, the remaining people have more resources and space, so it is assumed that the rate of this decrease saturates (the factor N/(d + N) approaches one for large N). Together with the negative feedback effect from S to N this behavior is described by the term −αX(N, S). In this model the functional X(N, S) = SN/(d + N) is chosen, because of the situation described above, where d > 0 controls the strength of the negative feedback from S to N in the usual way of a Holling type-II response. Overall, the dynamics of N has the form

dN/dt = r N (1 − N/(k_0 + cS)) − α SN/(d + N).        (2)

The dynamics of S are determined by wealth gain and wealth loss of two parties, the population and the state itself. The state collects taxes and can reinvest a portion of the surplus gained through the population for some extra wealth, for example through loans, warfare or land gain. This results in the term gSN, with g = τρ, τ being the tax rate and ρ the fraction of the surplus that is gained through investing/expanding. But there are also expenditures that the state has to make. The larger the country and the more wealth the state has, the more money it has to spend for protection or maintaining the wealth. For example, if the state gains wealth through capturing new land, it has to pay additional attention to protect the new land by paying more soldiers and civil servants. In addition, it needs to provide a suitable living space, so it has to reinvest a growing amount of money into the infrastructure of the country. In summary, we consider that the dynamics of S has the form

dS/dt = g S N − β S,        (3)

with β being the fraction of the wealth that has to be spent. Together with Eqs (2) and (3) the "Demographic-Wealth model" (DWM) is given by

dN/dt = r N (1 − N/(k_0 + cS)) − α SN/(d + N),
dS/dt = g S N − β S.        (4)

Note that different choices for the interaction functions between population and state are definitely possible, e.g., the nonlinearities may also carry different powers for S and N. Here we have effectively applied the principle to start with the simplest non-trivial case of a direct production interaction leading to the terms SN.
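As a quick illustration of the DWM dynamics (4), the following Python sketch integrates the system and prints the coexistence population N* = β/g derived in the analysis below. The parameter values are purely illustrative placeholders and are not the values used for the figures in this paper.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative (hypothetical) DWM parameters.
    r, alpha, d = 0.1, 0.1, 0.5      # intrinsic growth, feedback strength, Holling saturation
    k0, c = 1.2, 0.5                 # stateless carrying capacity and its dependence on S
    g, beta = 0.5, 0.25              # wealth growth from taxation, expenditure fraction

    def dwm(t, y):
        N, S = y
        dN = r * N * (1.0 - N / (k0 + c * S)) - alpha * S * N / (d + N)
        dS = g * S * N - beta * S
        return [dN, dS]

    print("coexistence population N* = beta/g =", beta / g)
    sol = solve_ivp(dwm, (0.0, 400.0), [1.0, 0.05], max_step=0.5)
    # for these values the trajectory spirals (damped oscillations) into (N*, S*)
    print("late-time state (N, S):", sol.y[0, -1], sol.y[1, -1])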
Model analysis
For a theoretical background regarding stability and bifurcations we refer to the introductory textbook [12] and the more advanced monographs [13,14]. Looking for possible stationary states, the following equations have to be satisfied:

r N (1 − N/(k_0 + cS)) − α SN/(d + N) = 0,        S (g N − β) = 0.

One can see that (0, 0) and (k_0, 0) are stationary states. For a possible coexistence point (N*, S*) it follows that N* = β/g and that S* has to satisfy

r (1 − N*/(k_0 + cS*)) = α S*/(d + N*).

With the calculations for the Jacobian and its determinant and trace, the stability behavior of the stationary states can be determined. For (0, 0) one has det(J(0, 0)) = −rβ < 0, which yields a saddle point, and therefore (0, 0) is always unstable. For the stationary state (k_0, 0) the Jacobian has the eigenvalues −r and g k_0 − β. It follows that β/g > k_0 yields a stable equilibrium (k_0, 0), more precisely a sink. If β/g < k_0 the equilibrium is a saddle, therefore unstable. For the analysis of the coexistence point, further auxiliary functions are introduced; they enter the nullcline curves l_1, l_2 and the trace function μ(N*, S*) used below.
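The stability statements above can be checked numerically; the short sketch below evaluates a finite-difference Jacobian of the DWM vector field at the three equilibria. It reuses the illustrative parameter values from the earlier sketch (with c = 0 for simplicity); none of these numbers come from the paper.

    import numpy as np

    # Illustrative parameters (hypothetical), with c = 0 for simplicity.
    r, alpha, d, k0, c, g, beta = 0.1, 0.1, 0.5, 1.2, 0.0, 0.5, 0.25

    def rhs(N, S):
        dN = r * N * (1.0 - N / (k0 + c * S)) - alpha * S * N / (d + N)
        dS = g * S * N - beta * S
        return np.array([dN, dS])

    def jac(N, S, h=1e-6):
        # central finite-difference Jacobian of the DWM vector field
        J = np.zeros((2, 2))
        for j, (dN_, dS_) in enumerate([(h, 0.0), (0.0, h)]):
            J[:, j] = (rhs(N + dN_, S + dS_) - rhs(N - dN_, S - dS_)) / (2.0 * h)
        return J

    # coexistence point for c = 0: N* = beta/g and r(1 - N*/k0) = alpha*S*/(d + N*)
    Nstar = beta / g
    Sstar = r * (1.0 - Nstar / k0) * (d + Nstar) / alpha
    for N, S, label in [(0.0, 0.0, "(0, 0)"), (k0, 0.0, "(k0, 0)"), (Nstar, Sstar, "(N*, S*)")]:
        print(label, "eigenvalues:", np.linalg.eigvals(jac(N, S)))
    # expected here: (0, 0) and (k0, 0) are saddles (since beta/g < k0), and
    # (N*, S*) has a complex pair with negative real part (stable spiral).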
For a proof, we refer to Appendix A. In fact, in Appendix A, we also give a more general description of the phase plane analysis of the DWM within the proof. Unfortunately, Theorem 4.1 is rather implicit and is difficult to apply in practice, as one would prefer a result with more explicit parameter dependence. For a special case, the result becomes more transparent. If we assume that c = 0, the state's wealth no longer influences the carrying capacity. In reality this could for instance happen if there is simply no additional room left (an island, for example). Now the stability behavior of the equilibria (0, 0) and (k_0, 0) does not depend on the parameter c and we get a more explicit result (Theorem 4.2): under suitable assumptions on the parameters, Hopf bifurcations occur in the DWM whenever the coexistence equilibrium satisfies N* = β/g = (k_0 − d)/2, i.e., periodic solutions are generated at the Hopf bifurcation point.
The proof and a more detailed phase plane analysis can be found in Appendix B. We note that a detailed mathematical analysis of the dynamics will likely be very difficult if one considers varying through all possible parameter configurations. Yet, the main conclusion from Theorems 4.1-4.2 is that the DWM can naturally yield periodic behaviour without any resets. Now, we have to check whether our dynamical analysis is consistent with the intended interpretation/modelling from the theories underlying cliodynamics. Yet, we already emphasize that any low-dimensional model of such a high-dimensional complex system as state/population dynamics must necessarily simplify and can only depict certain aspects of the overall dynamics.
Interpretation of different behaviors
Starting with the equilibrium (k_0, 0), the criterion for stability is β/g > k_0. For the simple case k_0 = 1, this means that if the expenditures β exceed the "growth-rate" g of the state, then the state cannot survive and the stateless population approaches its carrying capacity. Another interpretation could be that the carrying capacity k_0 is too small in comparison to a given expenditure/growth-rate ratio for a state to survive.
We have also used numerical continuation of the equilibrium to track its stability using the software MatCont; see [15] for details. The numerical continuation runs for the equilibrium (k_0, 0), reported in more detail below, are consistent with the analysis provided above, meaning that for the parameters g, β, and k_0 they show a loss of stability whenever β/g < k_0. Following the analysis above, the equilibrium (N*, S*) becomes stable as the equilibrium (k_0, 0) becomes unstable. The typical behavior is shown in Fig 3. Both the population numbers and the state's wealth exhibit damped oscillations until the coexistence point is reached.
As described in the analysis of the coexistence point, there are two possible ways for a Hopf bifurcation to occur: either making N* small enough, or changing S* in a way that μ(N*, S*) is equal to zero. In the case of N* there is the possibility to choose g big enough, β small enough, or to change k_0. Checking the numerical continuations shown in Figs 4-6, a Hopf bifurcation indeed occurs for both parameters, at g = 0.992 and β = 0.015, both rounded to 3 decimal digits. By calculating μ(N*, S*) for these sets of parameter values one gets approximately zero in both cases.
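The MatCont continuations reported here used the authors' parameter set, which is not fully reproduced above; as a rough stand-in, the following sketch scans g for a hypothetical parameter combination and locates the Hopf point from the sign change of the largest real part of the Jacobian eigenvalues at the coexistence equilibrium (so the bifurcation value it finds differs from the g = 0.992 quoted in the text).

    import numpy as np

    # Hypothetical parameter set (c = 0); only g is varied.
    r, alpha, d, k0, beta = 0.1, 0.1, 0.5, 1.2, 0.25

    def max_real_eig(g):
        Nstar = beta / g
        Sstar = r * (1.0 - Nstar / k0) * (d + Nstar) / alpha
        # analytic Jacobian of the DWM (c = 0) at the coexistence point
        a11 = r * (1.0 - 2.0 * Nstar / k0) - alpha * Sstar * d / (d + Nstar) ** 2
        a12 = -alpha * Nstar / (d + Nstar)
        a21 = g * Sstar
        J = np.array([[a11, a12], [a21, 0.0]])
        return np.linalg.eigvals(J).real.max()

    for g in np.linspace(0.3, 1.2, 10):
        print(f"g = {g:.2f}  max Re(lambda) = {max_real_eig(g):+.4f}")
    # the sign change of the largest real part marks the Hopf bifurcation in g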
For the remaining parameters of the model, r, α and c, the continuation also shows Hopf bifurcations for certain parameter values. This corresponds to an increase in S* yielding μ(N*, S*) = 0.
Interpretation of the DWM
We want to look at the parameters of the "Demographic-Wealth model" and whether their change induces meaningful dynamics based upon their interpretation, starting with the three most important parameters g, β and k_0. As a starting point, a stable equilibrium (N*, S*) is assumed. For the parameter g one can see in Fig 4 that for values between 0.1 and 0.992 the equilibrium remains stable. A smaller parameter g results in higher population numbers and lower state wealth, which results from lower taxes. If g < 0.1, (N*, S*) and (k_0, 0) swap stability, meaning the state cannot survive. One explanation for this behavior is that the state collects too little in taxes, respectively that the growth is too small, so it cannot survive. On the other hand, a greater parameter g results in greater wealth of the state but lower population numbers, up to a point where (N*, S*) becomes unstable and a limit cycle occurs, meaning a repeated sequence of state growth and state decline. This could happen because, if taxes are too high or growth is too fast, population numbers plummet rapidly and the state cannot compensate the tax loss. The behavior for the parameter β is the other way around: if the state has high expenditures, it benefits the population until the state breaks down, and if the expenditures are low, population numbers shrink to the point where the state cannot compensate the loss of population, and a limit cycle occurs again at β = 0.015.
In the case of the parameter k_0, Fig 6 shows that if the carrying capacity is too low (k_0 < 2/3) the state has no chance of surviving because the starting capacity is too small. For higher k_0 (2/3 < k_0 < 2.095), a greater k_0 results in greater wealth for the state whereas the population number remains unaffected. On the other hand, if k_0 is too big (k_0 > 2.095) it leads again to the occurrence of limit cycles. In contrast to the parameters g and β, the resulting limit cycles approach zero much more closely for N and S, so this can be seen as a sequence of state growth and collapse, and the same holds for the population. Similar limit cycle behavior as for k_0 can only be seen for the parameters r and α. For the other parameters, either the limit cycle is further away from zero or the values for N and S are not representative anymore.
In summary, we argued that the behavior of the model regarding the three parameters is quite consistent with the theory of Olson described in Section 1, as rapid economic growth (similar to growing g) or slow adaptation to new situations (similar to small β) can both destabilize the state. In addition, the limit cycle behavior for the parameter k_0 coincides with Olson's theory that a large territory can also lead to state decline due to feedback. Furthermore, we observe that the occurrence of Hopf bifurcations with limit cycles that approach zero very fast is comparable with a similar behavior in biology, the so-called paradox of enrichment.
3.4.1 Paradox of enrichment for the DWM. The paradox of enrichment is a theory from population ecology first described by Rosenzweig in 1971 [16]. In general, enrichment of the resource in a predator-prey model [17, p. 111ff] leads to destabilization of the system, leading to collapsing predator and prey populations [18, p. 421]. More precisely, Rosenzweig showed that if the carrying capacity of the prey population is increased sufficiently, the coexistence state becomes unstable and the system exhibits limit cycles. Further increase of the carrying capacity leads to growing cycles that approach zero more and more, meaning it can lead to extinction upon a small stochastic fluctuation once the limit cycle is close to the zero population level [18, p. 421].
However, this behavior has rarely been observed in real ecosystems [19], and could not be confirmed in several experiments [20,21]. Because the "Demographic-Wealth model" is quite similar to a predator-prey model, it should be checked whether such a paradox occurs in the DWM too. Indeed, the analysis above shows Hopf bifurcations for the parameters r, α and k_0, which yield the same behavior as for the predator-prey models and therefore the same kind of paradox.
Because in the original predator-prey model it is the carrying capacity that leads to such a paradox, the behavior resulting from k_0 is examined first. Looking at the continuation in Fig 7, one can see that as soon as the Hopf point is reached the limit cycles approach zero very fast. To find a possible explanation for this behavior, it is instructive to look at the situation shortly before the Hopf bifurcation. Although (N*, S*) is stable in both cases, it takes considerably longer with growing k_0 until the equilibrium is reached. In addition, the differences between maximum and minimum become greater. Especially for S, the minimum almost approaches zero despite the fact that the equilibrium value of S increases. One possible explanation for this behavior could be that with greater carrying capacity there is more room to grow fast, but the population cannot grow at the same rate as the state's wealth does. To sustain the growth, the state has to collect more and more resources per capita until the population cannot endure it anymore and collapses. With that being the case, the state loses its most important support and breaks down. In the case of the limit cycle, the carrying capacity becomes too big for the state to establish an enduring society, and it finds itself in a cycle of growth and collapse. Another explanation for the behavior could be the already mentioned theory of Olson.
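A simple way to see the enrichment scenario numerically is to sweep k_0 and record the late-time minima and maxima of S: beyond the Hopf point the oscillation minima approach zero. The sketch below does this for a hypothetical parameter set (not the one behind Fig 7).

    import numpy as np
    from scipy.integrate import solve_ivp

    # Hypothetical parameters; the carrying capacity k0 is swept ("enrichment").
    r, alpha, d, g, beta = 0.1, 0.1, 0.5, 0.5, 0.25

    def dwm(t, y, k0):
        N, S = y
        dN = r * N * (1.0 - N / k0) - alpha * S * N / (d + N)
        dS = g * S * N - beta * S
        return [dN, dS]

    for k0 in [1.2, 1.5, 2.0, 3.0, 5.0]:
        sol = solve_ivp(dwm, (0.0, 4000.0), [0.6, 0.3], args=(k0,), max_step=1.0)
        late = sol.y[:, sol.t > 2000.0]          # discard the transient
        print(f"k0 = {k0:3.1f}:  min S = {late[1].min():.4f},  max S = {late[1].max():.3f}")
    # beyond the Hopf value of k0 the minima of S get ever closer to zero:
    # the hallmark of the paradox of enrichment.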
For the parameter r it is also worthwhile to examine the behavior shortly before the Hopf bifurcation and compare it with the initial parameter combination in Fig 3.
When comparing Figs 9 and 10, we notice that the number of oscillations increases greatly with growing r. This could be a consequence of the faster population growth. In addition, the amplitude of the oscillations grows with greater r. One possible explanation of the Hopf bifurcation and the series of growth and breakdowns could be that excessive population growth leads to fast state growth, but when the population growth starts shrinking again due to lack of space, the state cannot hold up its quickly gained wealth and starts decaying.
The last parameter one should look at is α, which describes the negative influence of S on N and of N on itself. Unlike for the parameter r, the main feature that changes with smaller α is the amplitude, especially for S, as Fig 10 shows. The state can gain more wealth before it declines again. This could be due to the fact that if the negative influence of S on N shrinks, the state can gain wealth for longer before the shrinking population has an effect. If α becomes too small, one could say the state exploits the weak negative feedback and starts acting recklessly with regard to the population, which again leads to a series of breakdowns and growth.
All of the statements above should be viewed as explorative considerations. Carrying out an experiment in order to clarify whether such paradoxes exist in reality is simply not possible on a suitable time scale, let alone for ethical reasons. Also, looking at old records and comparing them with these considerations is difficult, as there are many factors that influence state dynamics which cannot be ignored. But as there can be limit cycles in the model, it should be checked whether these can be found in data, which seems to be indeed the case.
3.4.2 Secular cycles. The pattern of population change is strongly affected by the scale at which it is observed. Population numbers can fluctuate on a span of years, for example through bad harvests caused by bad weather. But when considering a longer time scale, decades or centuries, one tends to observe a dominant pattern called secular cycles. These cycles are determined by long periods of population growth followed by years of decline. More theory can be found in the work of Turchin and Nefedov [22]. A well-known example for these cycles is Western Europe from the thirteenth to the eighteenth century, when there was a growth period in the thirteenth, a decline period in the fourteenth and fifteenth century, and again a period of growth in the sixteenth followed by a decline and stagnation in the seventeenth century [1, p. 175]. An application can be found in the work of Alexander [23]. Another example where comparatively a lot of historical data is available is Chinese history, since a detailed population history was published by Zhao and Xie [24]. The data in [24] and the respective illustration [25, p. 5] show at least four sharper peaks, where each peak was achieved during the great unifying dynasties. Indeed, our DWM can produce oscillating/periodic population numbers with sharper peaks in the population numbers; see Fig 11. Furthermore, our DWM can also produce transient oscillations of varying amplitude in quite large parameter regions, as one may approach a limit cycle or approach a weakly stable spiral equilibrium. Yet, in summary it is very important to point out again that an exact matching to a particular historical situation would require a reduction of the number of parameters via a detailed statistical fitting. Indeed, if we have too many free parameters, then the number of dynamical possibilities is too large and one encounters a classical 'overfitting' phenomenon. Unfortunately, it is in general very difficult to have a good estimator for all relevant parameters, even for relatively simple models, from historical data. Hence, one is currently limited to qualitative explanations and the discovery of general effects.
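To connect with the secular-cycle picture, the sketch below integrates the DWM in a limit-cycle regime and estimates the cycle length from successive population peaks. The parameters are hypothetical (k_0 chosen beyond the Hopf value of the earlier sketches); with a time unit of years the resulting period would correspond to multi-generation oscillations, but no fit to historical data is implied.

    import numpy as np
    from scipy.integrate import solve_ivp

    r, alpha, d, g, beta, k0 = 0.1, 0.1, 0.5, 0.5, 0.25, 2.5   # hypothetical, past the Hopf point

    def dwm(t, y):
        N, S = y
        return [r * N * (1 - N / k0) - alpha * S * N / (d + N), g * S * N - beta * S]

    sol = solve_ivp(dwm, (0.0, 3000.0), [0.6, 0.3], max_step=0.2)
    t, N = sol.t, sol.y[0]
    mask = t > 1500.0                     # keep only the late-time, settled oscillation
    t_l, N_l = t[mask], N[mask]
    # crude peak detection: local maxima of the sampled population trajectory
    peaks = [t_l[i] for i in range(1, len(N_l) - 1) if N_l[i] > N_l[i - 1] and N_l[i] > N_l[i + 1]]
    periods = np.diff(peaks)
    print("estimated cycle period:", periods.mean() if len(periods) else "no full cycle detected")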
Conclusion
One should always keep in mind that the "Demographic-Wealth model" (DWM), like most other models, provides a simplified view of state and population dynamics and only takes into account a few fundamental drivers; in particular, there are certainly more complex models that have been developed [26][27][28]. However, the DWM can be seen as a useful alternative or variant of the "Demographic-Fiscal model" (DFM). The advantage of the DWM is that the fiscal component is not the only and most important one in the model. Other parameters like the carrying capacity, the state expenditures or the wealth gain play a significant role too. Furthermore, the DWM still remains tractable in terms of direct mathematical analysis, which is often not the case for models of high(er) complexity.
In addition, the model is supported by the theories of Khaldun, Goldstone and Olson. These theories can also partly explain the appearance of Hopf bifurcations and the corresponding limit cycle behavior. In particular, we have seen that the DWM can show very reasonable and quite realistic behaviour such as coexistence, bifurcations and periodicity. These can be interpreted within the framework of the theories we outlined initially. Furthermore, we can link the DWM to the paradox of enrichment as well as to secular cycles. Yet, the DWM is still relatively simple algebraically, allowing for analytical results as well as fast numerical continuation runs.
Of course, many further extensions of the DWM could be developed. For example, adding class structure would be a meaningful development of the model, although it would make the model more complicated. Turchin himself, as well as Goldstone, state that class structure, respectively elites, play an important role in state dynamics; that is why Turchin extended the DFM with a class structure to obtain a more realistic view of state dynamics. For the DWM it would be interesting to see how class structure would affect the behavior of the system. The effect on the appearance of Hopf bifurcations could be of particular interest. In addition, studying different nonlinearities might be a very useful next step. For example, this may include additional nonlinear saturation effects for taxation, which we have not considered, as the population saturation together with the coupling structure is already enough to produce interesting and practically relevant bounded orbits such as secular cycles.
Appendix A. Proof of Theorem 4.1

Proof. Throughout the proof we write the system in the form dN/dt = N g(N, S) − S p(N), dS/dt = S (q(N) − d), with q increasing. The nullclines for S are S = 0 and q(N) = d, and due to the conditions on q there is a unique point N̄ with q(N̄) = d; we denote the corresponding vertical line {N = N̄} by l_1. For N, the nullclines are N = 0 and the curve l_2 on which N g(N, S) = S p(N). Due to the fact that the axes are composed of orbits, R²₊ is invariant. Two cases are possible. In the first case, assume that N* < k_0 + cS*; then in R²₊ there are three equilibria, x̄_0 = (0, 0), x̄_1 = (k_0, 0) and x̄_2 = (N*, S*). For the Jacobian one gets the entries

J_11(N, S) = g(N, S) + N ∂g/∂N(N, S) − S p′(N),    J_12(N, S) = N ∂g/∂S(N, S) − p(N),
J_21(N, S) = S q′(N),    J_22(N, S) = q(N) − d.

Therefore, x̄_0 is always a saddle point, with the stable manifold on the S-axis and the unstable manifold on the N-axis.
The equilibrium x̄_1 can either be stable or unstable. Indeed, the Jacobian J(x̄_1) is upper triangular (since S = 0), with diagonal entries k_0 ∂g/∂N(k_0, 0) and q(k_0) − d. Due to the assumptions, ∂g/∂N(k_0, 0) < 0; therefore, if q(k_0) > d then (k_0, 0) is a saddle point, and if q(k_0) < d then (k_0, 0) is a stable node. If (k_0, 0) is a saddle, then, according to the analysis, the coexistence point x̄_2 = (N*, S*) also exists. Here, the Jacobian yields

tr(J(x̄_2)) = μ(N*, S*),    det(J(x̄_2)) = −S* q′(N*) (N* ∂g/∂S(N*, S*) − p(N*)).

For a possible stable equilibrium x̄_2 it is required that p(N*) > N* ∂g/∂S(N*, S*). If this condition holds, the sign of tr(J(x̄_2)) is determined by the slope of the tangent line to l_2 at N*. If this slope is positive, then x̄_2 is an unstable node or focus. This implies that there exists a small neighborhood N(x̄_2) of x̄_2 which is negatively invariant (the orbits leave this neighborhood as t > 0 increases). Next, consider the unstable manifold of x̄_1. It emanates from x̄_1 into the region where dN/dt < 0 and dS/dt > 0, and therefore it has to reach the nullcline l_1 = {N = N*} at a point B_1. After B_1 we have dN/dt < 0 and dS/dt < 0, so the orbit has to cross l_2 at a point B_2. Then the orbit lies in the region where dN/dt > 0 and dS/dt < 0, which means that it has to cross l_1 again at a point B_3. Let B_4 = (N*, 0). Consider the closed path through x̄_1, B_1, B_2, B_3, B_4 and back to x̄_1. Define G as the domain confined by this path without N(x̄_2).
G is positively invariant, as its boundaries are either orbits or lines on which the direction of the phase flow points "inwards" into G. By construction, G does not possess equilibria and hence, by the Poincaré-Bendixson theory (see [29] for details), it must contain at least one closed curve, which has to be an ω-limit set for the orbits entering G. Assume that N*, S* satisfy μ(N*, S*) < 0. Then x̄_2 is stable. If N* is decreased to the point where the slope of l_2 is zero, then μ(N*, S*) = 0, and there is a non-hyperbolic equilibrium x̄_2 with two purely imaginary eigenvalues. As for the DWM μ also depends on S, a similar situation as for N* can happen with S*. This means that there could be a possibility where S* is changed to a point where also μ(N*, S*) = 0. This point corresponds to a Hopf bifurcation of x̄_2.
Appendix B. Proof of Theorem 4.2
Proof. We use the same steps as in the proof provided in Appendix A, with the difference that c = 0 and therefore the function g(N, S) only depends on N. For the equilibria (0, 0) and (k_0, 0) the conditions remain the same: (0, 0) is always a saddle, and (k_0, 0) is a stable node if q(k_0) < d and a saddle if q(k_0) > d. In the case of a saddle, the coexistence equilibrium x̄_2 = (N*, S*) also exists. Here, the Jacobian yields

tr(J(x̄_2)) = μ(N*),    det(J(x̄_2)) = S* q′(N*) p(N*) > 0,

where μ(N*) = p(N*) d/dN [N g(N)/p(N)] evaluated at N = N*. Following again the argumentation of the proof above and assuming that N* satisfies μ(N*) < 0, x̄_2 is stable. If N* is decreased to the point where the slope of l_2 is zero, then μ(N*) = 0, and there is a non-hyperbolic equilibrium x̄_2 with two purely imaginary eigenvalues. This point corresponds to a Hopf bifurcation of x̄_2.
For the stability of the coexistence point it is necessary that tr(J(x̄_2)) < 0; a Hopf bifurcation occurs if the trace equals zero (and a limit cycle if it is greater than zero). As the limit cycle behavior is more interesting, the following has to be satisfied:

μ(N*) = 0,  which for the DWM with c = 0 amounts to  N* = β/g = (k_0 − d)/2.

This yields the equations for the different parameters,

g = 2β/(k_0 − d),    β = g (k_0 − d)/2,    d = k_0 − 2β/g,    k_0 = d + 2β/g.

The assumptions at the beginning of Theorem 4.2 are needed for the existence of the Hopf bifurcations. For β/g < k_0 ≤ 2β/g there cannot be a Hopf bifurcation for the parameter d, because the equation cannot be fulfilled for d > 0. For d ≥ k_0, there cannot be a Hopf bifurcation for the parameters g and β, because the equations cannot be fulfilled for β > 0, g > 0.
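As a consistency check of the computation above, one can verify symbolically that the trace of the Jacobian at the coexistence point vanishes exactly when d = k_0 − 2β/g. A small sympy sketch (using the concrete DWM nonlinearities with c = 0) is given below; it is only an illustration of the algebra, not part of the proof.

    import sympy as sp

    r, alpha, d, k0, g, beta, N, S = sp.symbols('r alpha d k0 g beta N S', positive=True)
    f = r * N * (1 - N / k0) - alpha * S * N / (d + N)   # dN/dt with c = 0
    h = g * S * N - beta * S                             # dS/dt

    Nstar = beta / g
    Sstar = sp.solve(sp.Eq(f.subs(N, Nstar) / Nstar, 0), S)[0]   # prey nullcline evaluated at N*

    trace = (sp.diff(f, N) + sp.diff(h, S)).subs({N: Nstar, S: Sstar})
    # the trace vanishes exactly at the Hopf condition d = k0 - 2*beta/g of Theorem 4.2
    print(sp.simplify(trace.subs(d, k0 - 2 * beta / g)))         # expected output: 0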
Fig 7. Periodic orbit continuation for the parameter k_0. Continuation of the Hopf bifurcation at k_0 = 2.095, as in Fig 6. The amplitude of the periodic orbit grows upon increasing the carrying capacity and eventually comes close to the zero population level. In particular, we have the hallmarks of the paradox of enrichment. https://doi.org/10.1371/journal.pone.0298318.g007
An Adventure in Topological Phase Transitions in 3 + 1-D: Non-abelian Deconfined Quantum Criticalities and a Possible Duality
Continuous quantum phase transitions that are beyond the conventional paradigm of fluctuations of a symmetry breaking order parameter are challenging for theory. These phase transitions often involve emergent deconfined gauge fields at the critical points as demonstrated in 2+1-dimensions. Examples include phase transitions in quantum magnetism as well as those between Symmetry Protected Topological phases. In this paper, we present several examples of Deconfined Quantum Critical Points (DQCP) between Symmetry Protected Topological phases in 3+1-D for both bosonic and fermionic systems. Some of the critical theories can be formulated as non-abelian gauge theories either in their Infra-Red free regime, or in the conformal window when they flow to the Banks-Zaks fixed points. We explicitly demonstrate several interesting quantum critical phenomena. We describe situations in which the same phase transition allows for multiple universality classes controlled by distinct fixed points. We exhibit the possibility, which we dub "unnecessary quantum critical points", of stable generic continuous phase transitions within the same phase. We present examples of interaction-driven band-theory-forbidden continuous phase transitions between two distinct band insulators. The understanding we develop leads us to suggest an interesting possible 3+1-D field theory duality between SU(2) gauge theory coupled to one massless adjoint Dirac fermion and the theory of a single massless Dirac fermion augmented by a decoupled topological field theory.
In this paper we will describe a number of surprising quantum critical phenomena for which there are no (or very few) previous examples as far as we know. Many of our results are obtained by considering the phase diagram of non-abelian gauge theories in space-time dimensions D = 3 + 1. When the matter fields are massless, we interpret the corresponding theory as a quantum critical point in the phase diagram and identify the phases obtained by turning on relevant perturbations. As a bonus of the results on fermionic deconfined quantum critical points, we will discuss a striking possible duality of fermions in 3 + 1-D. Specifically we will show that an SU(2) gauge theory coupled to massless adjoint fermions and massive fundamental bosons may share the same Infra-Red (IR) physics with a theory of a free Dirac fermion supplemented by a gapped topological field theory. Both theories have the same local operators, and the same global symmetries and anomalies. Further they support the same massive phases. These checks lend hope that the massless theories may also be infra-red dual. Closely related work on SU(2) gauge theories with adjoint fermions has recently appeared in Refs. 60 and 61, and we will use some of their results. In 2 + 1-D, dualities of Yang-Mills theories with adjoint fermions have been explored in recent work [62]. There are many famous examples of dualities of supersymmetric field theories in diverse dimensions [63]. Many interesting non-supersymmetric dualities have been found in 2 + 1-D (starting from old work [64][65][66] on charge-vortex duality in bosonic theories), particularly in recent years [56][57][58][59][67][68][69][70][71][72][73][74][75][76][77][78]. However there are no simple dualities of non-supersymmetric theories that are known to us in 3 + 1-D.
A. Free massless Dirac fermion as a quantum critical point
In this section we review how to interpret free massless Dirac fermion theories in space-time dimensions D = 3 + 1 as quantum critical points. This will enable us to introduce many ideas and methods that will be useful to us later on in a simple setting.
Consider a free Dirac fermion described by the Lagrangian

L = ψ̄ (i γ^μ ∂_μ − m) ψ.

Here ψ is a 4-component Dirac fermion. We will regard this as the low energy theory of electrons with global symmetry U(1) × Z_2^T (denoted class AIII [51,52] in the condensed matter literature). With this choice the electric charge of the global U(1) symmetry is odd under time reversal Z_2^T. To probe the physics of the system it will be convenient to introduce a background U(1) gauge field A, more precisely a spin_c connection. (Physically, the spin_c connection is a device that enables keeping track of the fact that all physical fields with odd charge under A are fermionic. Formally, if we try to formulate this theory on an arbitrary compact oriented space-time manifold, a spin_c connection is like a U(1) gauge field but with a modified flux quantization condition: on any closed two-dimensional surface, ∫ F/(2π) = (1/2) ∫ w_2(TY_4) mod 1, where F is the field strength of the U(1) gauge bundle and w_2(TY_4) is the second Stiefel-Whitney class of the tangent bundle of the 4-manifold Y_4.) We will also allow placing the theory on an arbitrary smooth oriented space-time manifold with metric g. Examining the partition function for arbitrary (A, g) will allow us to distinguish phases based on the response to these probes.
Consider the phase diagram as a function of the mass m. So long as |m| ≠ 0 there is a gap in the spectrum. However the phase with m > 0 is distinct from the one with m < 0. Taking the m < 0 phase to be a trivial insulator, the m > 0 phase will be a symmetry protected topological insulator. Thus the massless Dirac theory can be viewed as sitting at a quantum critical point between a trivial and a topological insulator.
The topological distinction between the two phases can be understood physically by studying a domain wall in space where the mass m changes sign. It is well known that at this domain wall there is a single massless Dirac fermion. This reveals that the phase for one sign of the mass is topological when the other is trivial.
It will be extremely useful to us to establish this result in a more formal but powerful way. Consider the ratio of the partition functions for the two signs of the mass,

Z[m > 0; A, g] / Z[m < 0; A, g] = (−1)^J,

where J is the index of the Dirac operator coupled to (A, g). Thus the ratio of the partition functions is a topological invariant. Furthermore it is known [79] (by the Atiyah-Singer index theorem) that

J = (1/(8π²)) ∫ F ∧ F − σ/8,

where F = dA and σ is an integer known as the signature of the space-time manifold. It may be expressed in terms of the Riemann curvature tensor through the Hirzebruch signature theorem, σ = (1/3) ∫ p_1(TY_4), with p_1 the first Pontryagin class of the tangent bundle. This gives exactly the right θ = π response of a topological insulator for one sign of mass if the other sign is chosen to be trivial.
We note that the massless Dirac theory has extra symmetries absent in the massive case. For instance, we can write the Dirac fermion as two flavors of Weyl fermions and do a flavor rotation of the two Weyl fermions. We will regard these symmetries as emergent symmetries of the critical point. These emergent symmetries have 't Hooft anomalies and we will discuss them later as needed.
We can readily generalize the discussion above to N free Dirac fermions, or equivalently 2N Majorana fermions with SO(2N) × Z_2^T symmetry. Taking the m < 0 theory to be trivial, the m > 0 theory will describe an SPT phase of fermions with SO(2N) × Z_2^T symmetry. This is established by calculating the partition function ratio in the presence of a background SO(2N) gauge field A_{SO(2N)} and metric g; the ratio is again determined by a topological invariant, the index J. The index J is related by the Atiyah-Singer theorem to (A_{SO(2N)}, g) through the first Pontryagin class p_1 of the SO(2N) gauge field and the signature σ of the manifold. Therefore, N massless free Dirac fermions can be viewed as the critical theory for the quantum phase transition between the trivial and SPT state of fermions with SO(2N) × Z_2^T symmetry.
In some of the gauge theories we study below, however, there are local operators that are fermions. We can view the theory as emerging from a UV system of these fermions (see later for more detail).
The infrared behavior of 3+1-D quantum chromodynamics with massless matter fields is an extremely important and intensively studied topic in particle physics. The renormalization group (RG) flow equation of the gauge coupling, for SU(N_c) gauge theory with N_f flavors of fermions in the representation R, reads (to two loops)

μ dg²/dμ = − (g⁴/(8π²)) [ β_0 + β_1 g²/(16π²) + ... ],

where β_0 and β_1 are functions that depend on N_c, N_f and the representation R. For instance, if R is the fundamental representation, β_0 and β_1 are

β_0 = (11/3) N_c − (2/3) N_f,    β_1 = (34/3) N_c² − ((13/3) N_c − 1/N_c) N_f.

Based on the RG equation, the IR phases of the gauge theory can be divided into three classes.
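For orientation, the sketch below evaluates the two-loop coefficients quoted above for SU(N_c) with fundamental Dirac flavors (these closed-form expressions are standard textbook results rather than something derived in this paper) and shows where β_0 changes sign for N_c = 2.

    from fractions import Fraction as F

    def beta_coeffs_fundamental(Nc, Nf):
        """Two-loop coefficients beta0, beta1 for SU(Nc) with Nf fundamental Dirac flavors."""
        b0 = F(11, 3) * Nc - F(2, 3) * Nf
        b1 = F(34, 3) * Nc ** 2 - (F(13, 3) * Nc - F(1, Nc)) * Nf
        return b0, b1

    Nc = 2
    for Nf in range(2, 14, 2):
        b0, b1 = beta_coeffs_fundamental(Nc, Nf)
        print(f"Nf = {Nf:2d}: beta0 = {b0},  beta1 = {b1}")
    # beta0 changes sign at Nf = 11*Nc/2 = 11 (the upper edge N1 of the conformal window);
    # just below N1, beta0 > 0 with beta1 < 0, allowing a weakly coupled Banks-Zaks fixed point.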
Firstly, for N f bigger than a critical value N 1 (N c , R), the leading term β 0 is negative (β 1 is usually also negative for such N f ) and gauge coupling g 2 flows towards zero under RG, if we start from a weak initial coupling. In the IR, the theory is free, namely decoupled gluons and free fermions.
Secondly, for N_f slightly smaller than the critical value N_1, β_0 is a small positive parameter. When we take into account the g⁶ term in the RG equation, there is a stable fixed point at finite g²_* ∼ O(β_0/|β_1|) for β_1 < 0. This is the famous Banks-Zaks fixed point [5,6], which is an example of an interacting conformal field theory in 3+1-D. As N_f decreases further from N_1, in general |β_1| decreases and g²_* becomes larger. Eventually, for N_f approaching a certain critical value N_2(N_c, R), |β_1| → 0 and the fixed point goes to infinity, in which case at low energy the gauge theory is believed to be in a confined phase. The RG flows of these three different regimes are summarized in Fig. (2.a). Naively, the critical N_2 can be estimated by solving the equation β_1(N_c, N_f = N_2, R) = 0.
However, at that point, perturbative RG is far from a controlled limit. Therefore, the value of N_2 is usually determined through numerical calculations. The gauge theory is in the conformal window if N_f ∈ (N_2, N_1). The conformal windows are confirmed in numerical studies for SU(2) gauge theories with fundamental fermions and adjoint fermions [81][82][83]. For fundamental fermions, the conformal window of the SU(2) theory is around 8 ∼ 11. For adjoint fermions, the conformal window is around 1 ∼ 2. One can find a plot of the conformal window of SU(N_c) gauge theories in Fig. (2.b). The IR behavior of Sp(N_c) and SO(N_c) gauge theories is similar to that of SU(N_c) gauge theories.
The corresponding conformal windows have also been discussed through various methods [84,85].
C. Summary of results
The IR-free gauge theories and the Banks-Zaks fixed points are interesting examples of 3+1-D conformal field theories. In this paper, we will show how to interpret them as quantum critical points in the phase diagram of the "microscopic" degrees of freedom of the system, similar to what we reviewed for the free massless Dirac fermion theories in the previous section. Remarkably we will find that these theories can be viewed as deconfined quantum critical points for the underlying boson or fermion systems. The gauge theory description emerges as a useful one right at the critical point (and its vicinity) though the phases on either side only have conventional excitations (i.e., those that can be described simply in terms of the underlying bosons/fermions and their composites).

Figure caption (crossover scales near the critical point): At temperature T > m (or length scale l < ξ ∼ 1/m), the physics is controlled by the critical point and the system has deconfined massless fermions with weakly interacting gluons. For temperature m^y < T < m (or length scale 1/m < L < 1/m^y), where y > 1 is a universal exponent, the system has deconfined but massive fermions and weakly interacting gluons. For temperature lower than ∼ m^y (or L > 1/m^y), the gauge theory flows to strong coupling and the system is in a confined phase.
In all cases we study, these massless gauge theories provide valuable examples of quantum critical points associated with phase transitions between trivial and Symmetry Protected Topological (SPT) phases of the underlying boson/fermion system. Section III describes 3+1-D deconfined quantum critical points for bosonic systems. In section III.A, we begin with SU(2) gauge theory with N_f fermions in the fundamental representation. For simplicity we will restrict attention to N_f even in this paper. We will consider the theory in the presence of an arbitrary mass m that preserves the flavor symmetry. When m ≠ 0 the theory flows, in the IR, to massive phases. The m = 0 point will correspond to a critical point. For general m the global symmetry of the theory is PSp(N_f) × Z_2^T. We regard this gauge theory as the IR theory of a system of UV (gauge-invariant) bosons with PSp(N_f) × Z_2^T global symmetry. To begin with consider N_f large enough that the massless point is IR-free. Thus the gauge coupling g² flows to zero at the IR fixed point when m = 0. For any m ≠ 0 however there is an induced effective action for the gauge field at low energies. The resulting pure SU(2) gauge theory flows to strong coupling and will be confined at long length scales. In Fig. 3 we sketch the expected RG flows for this theory in the (g², m) plane for large N_f. For even N_f (the only case we consider) the confinement results in a trivial vacuum. Thus the massless IR-free fixed point separates two strongly coupled confined phases with trivial ground states. However we will see that these phases are potentially distinct Symmetry Protected Topological (SPT) phases of the underlying boson system with PSp(N_f) × Z_2^T global symmetry. Just like in the free Dirac fermion, the massless theory has extra symmetry: we will regard this as an emergent symmetry of the massless fixed point, and not as a fundamental symmetry.
Note that the RG flows show that the Yang-Mills coupling g² is "dangerously irrelevant" in the vicinity of the massless fixed point. Naturally there are then two length scales that emerge in the vicinity of the critical point. There is a first length scale ξ ∼ 1/m associated with the mass of the gauge charged fermions. At this scale g² is still small. Confinement does not set in till a much larger second length scale ξ_conf ∼ ξ^y, where y > 1 is a universal exponent. For SU(N_c) gauge theory with N_f fermion flavors in the fundamental representation, y = 2N_f/(11N_c). (The precise value of y is readily determined by matching the RG flow for the gauge coupling at the m = 0 fixed point with that of the pure gauge theory.) Close to the critical point, at length scales smaller than ξ, the physics is that of the IR-free massless fixed point of the large-N_f SU(2) gauge theory. For length scales between ξ and ξ_conf the physics is that of massive fermions and massless gluons that are weakly interacting. Finally at the longest length scales beyond ξ_conf the physics is that of the trivial ground state of the underlying boson system (but potentially in an SPT phase).
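A back-of-the-envelope illustration of the two crossover scales, using only the one-loop coefficients: the inverse coupling built up while flowing down to the scale 1/ξ is spent again at the pure Yang-Mills rate, which reproduces ξ_conf ∼ ξ^y with y = 2N_f/(11N_c). The values of N_c, N_f and m below are hypothetical.

    import numpy as np

    Nc, Nf = 2, 12                        # IR-free regime: Nf > 11*Nc/2
    b0_full = 11 * Nc / 3 - 2 * Nf / 3    # negative: coupling shrinks toward the IR
    b0_pure = 11 * Nc / 3                 # positive: pure Yang-Mills flows to strong coupling

    y = 2 * Nf / (11 * Nc)
    print("crossover exponent y =", y)

    for m in [1e-2, 1e-3, 1e-4]:          # fermion mass in units of the UV cutoff
        xi = 1.0 / m
        log_xi_conf = (1.0 + abs(b0_full) / b0_pure) * np.log(xi)
        print(f"m = {m:.0e}:  xi = {xi:.0e},  xi_conf ~ {np.exp(log_xi_conf):.1e}  (= xi**{y:.2f})")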
These critical crossovers are also manifested at non-zero temperature as two distinct temperature scales (see Fig. (3)).
From a condensed matter perspective, consider SPT phases of systems of interacting bosons with PSp(N_f) × Z_2^T symmetry. As we tune parameters in such a system we can drive phase transitions between the various SPT phases. From this point of view, the SU(2) gauge theory coupled to N_f flavors of massless Dirac fermions in the fundamental representation emerges as a description of the quantum critical point between trivial and SPT states of bosons. The SU(2) gauge field only appears at the critical point. For N_f < 8, the SU(2) gauge theory is believed to be in a confined phase at low energy. This implies that either the phase transition can be first order or there can exist an intermediate spontaneous symmetry breaking phase separating the two SPT states. For N_f > 10, the gauge theory provides a description of a continuous phase transition between the trivial and SPT state, where the critical point is free SU(2) Yang-Mills theory with decoupled massless Dirac fermions. An interesting situation is that, for N_f = 10 and 8, the phase transition can be described by the Banks-Zaks fixed point, which is an interacting conformal field theory in 3+1-D. (We expect, in this case, that since the fixed point appears at relatively weak coupling, introducing a non-zero bare mass will still drive the system to a confined phase. In other words there is no intermediate phase that appears for small bare mass. This is an assumption which is reasonable for theories in the conformal window which are "close" to the free fixed point. We will see later when we consider the gauge theory with light adjoint fermions that this assumption fails for theories far away from the perturbative regime.)
In section III.B, we find generalizations of the above construction. The phase transitions between PSp(N_f) × Z_2^T bosonic SPT states can also be described by Sp(N_c) gauge theories coupled to N_f fundamental massless Dirac fermions for any N_c = 4Z + 1. The transition is continuous provided that N_f is inside or above the conformal window of Sp(N_c) gauge theories. These theories are weakly dual to the SU(2) gauge theory described above in the sense that they are distinct low energy descriptions of the same underlying UV physical system (in our case bosons with global PSp(N_f) × Z_2^T symmetry). Furthermore they describe the same phases and phase transition of this system. However, clearly the theories with fixed N_f and different N_c are truly distinct conformal field theories. First, they clearly have different numbers of low energy massless fields; this may be formalized by computing their a-coefficients, which are clearly different for these different theories. In section III.C, we discuss an interesting phenomenon which we call unnecessary phase transitions. We define an unnecessary phase transition as a generic continuous phase transition within the same phase. We provide several explicit examples of this phenomenon. The first example is a bosonic system with PSp(N_f) × Z_2^T symmetry at N_f = 4Z. We show that there can be a generic continuous phase transition inside the topologically trivial phase of this bosonic system. The critical theory is an emergent Sp(N_c) gauge theory at N_c = 4Z with N_f = 4Z massless fundamental fermions. As the phases on the two sides of this critical point are identical, the transition can be bypassed by some symmetric path in the whole parameter space. However, the transition is locally stable.
In the topologically trivial phase of this system, there can exist a generic second order transition characterized by 16 gapless free Majorana fermions in 3 + 1-D. The transition can be circumvented by adding strong interaction. In condensed matter physics, it is common that two phases separated by a discontinuous (i.e. first order, as for the liquid-gas transition) phase transition can actually be the same phase. The examples in this section teach us that even a generic continuous phase transition does not necessarily change the nature of the state.
Sections IV and V contain examples of deconfined quantum critical points in fermionic systems, for which there are very few previous examples. We study 3+1-D fermionic deconfined quantum critical points that can be formulated as an SU(2) gauge theory coupled to N_f^A flavors of adjoint Dirac fermions. This theory has local fermion operators (baryons) and we will therefore regard it as a low energy theory of a microscopic system of these local fermions. However, to enable this point of view we need to augment the theory by including a massive spin-1/2 (under the SU(2) gauge transformation) scalar particle in our spectrum. Otherwise the theory has physical loop degrees of freedom corresponding to 'electric' field lines in the spin-1/2 representation. (A formal but very useful description is to say that the SU(2) gauge theory with adjoint matter but no fundamental matter has a global Z_2 1-form symmetry, denoted (Z_2)_1. Of course a microscopic condensed matter system of fermions has no such 1-form symmetry. Therefore we allow for an explicit breaking of the (Z_2)_1 symmetry by including the massive spin-1/2 scalar.) We call this massive spin-1/2 scalar the spectator field. To complete the theory, we need to specify its symmetry quantum numbers under the global symmetry, especially its time reversal properties. (From a formal point of view this corresponds to how to define the theory on non-orientable manifolds.) The adjoint SU(2) theory can actually describe different quantum phase transitions depending on the time reversal symmetry properties of the spectator field.
For N_f^A > 2, the massless theory is free in the infrared limit. By tuning the fermion mass m, this theory describes a quantum phase transition between a trivial and an SPT state protected by the global symmetry. We first discuss the fermion SPT classification for this symmetry. For example, for N_f^A ∈ 2Z + 1, we show that it is Z_8 × Z_2, generalizing the known results [86,87] for SO(2) × Z_2^T symmetry (known in the condensed matter literature as a class AIII topological superconductor). This means that such systems form distinct SPT states labelled by a pair of integers (n, η), where n = 0, 1, ..., 7 mod 8 and η = 0, 1 mod 2. Phases with η = 0
are accessible within free fermion band theory. The IR-free massless gauge theory with N A f > 2 sits at the critical point between two such SPT phases. A subtlety arises with the time reversal properties of the theory. The precise SPT phase is changed depending on the symmetry properties of the massive spectator field. With one choice of spectator field, it describes the phase transition between the n = 0 (trivial) state and the (n = 3, η = 0) SPT state. This is a quantum phase transition that is not generically second order in the free fermion system where n can only jump by 1. Thus this is an example of an interaction-driven band-theory-forbidden quantum critical point between two band insulators. With a different choice of the spectator field, the adjoint SU (2) theory can describe the phase transition between the trivial state and (n = −1, η = 0) SPT state. This transition can also occur within band theory where it is described by a free Dirac theory of physical fermions. The gauge theory however yields a distinct fixed point for the same transition. This is yet another example of multiple universality classes for the same phase transition. For N A f ∈ 2Z, the m > 0 phase does not depend on the choice of spectator field.
If we banish the fundamental scalars from the spectrum, then at the IR-free massless point the 1-form (Z_2)_1 symmetry is spontaneously broken. Turning on a small mass for the fermions leads to confinement and restores the (Z_2)_1 symmetry. In other words, electric loops in the spin-1/2 representation are tensionful in the massive phase. These loops are decoupled from the physical excitations of this phase (which are the local fermions). Now if we re-introduce the fundamental scalars, they will have no effect on the low energy properties at the critical point. However in the massive phase the scalars allow the loops to break. At the same time they also affect the SPT characterization of the phase.
In Sec. V we consider the interesting case N A f = 1 (augmented as above with a spectator fundamental scalar). This describes the familiar system of fermions with SO(2) × Z T 2 symmetry (the class AIII topological superconductor). This is an asymptotically free theory and there is some numerical evidence that it flows to a CFT in the IR [82]. We will therefore first consider the fate of this theory in the presence of a large mass (of either sign) when trivial confined phases will indeed result. The precise SPT identification of these massive phases depends on the symmetry realization on the spectator boson in exactly the same way as for general N A f ∈ 2Z + 1. In contrast to the previous examples, here the gauge theory description of the massless point is strongly coupled.
In Sec. VI we explore the possibility that the low energy theory consists of a free Dirac fermion together with a decoupled topological field theory. This may be viewed as a duality of the SU(2) gauge theory with N_f^A = 1 adjoint Dirac fermions and the theory of a free Dirac fermion augmented with a decoupled topological field theory. The latter is needed to be able to match all the anomalies of the theory (in the absence of the spectator field) identified recently in Ref. 60. We discuss physical properties of this topological order. We will show that the free massless Dirac + topological theory has the same local operators and the same global symmetries (both exact and emergent), and further enable matching all 't Hooft anomalies of the emergent symmetries. While these checks are necessary to claim a duality they are not sufficient as a proof. A small mass in the gauge theory will map to a small mass of the physical Dirac fermions of the IR theory but will not destroy the extra topological order. This leads to a situation where between the two large mass insulators there is an intermediate phase which has an additional topologically ordered sector.
Several details are in the Appendices. In particular we present some simple models -not involving emergent gauge fields -for some of the phenomena depicted in Fig. 1. We also briefly discuss the fate of SU (2) gauge theory coupled to arbitrary N A f flavors of adjoint Dirac fermions.
III. BOSONIC DECONFINED CRITICAL POINTS IN 3 + 1-D
In this section, we study quantum phase transitions between trivial and SPT phases in 3 + 1-D systems of interacting bosons. The critical theories we construct for such transitions resemble the deconfined quantum critical points of 2 + 1-D systems [1,4]. In particular, the critical point has emergent non-abelian deconfined gauge fields and associated 'fractionalized' matter fields. To understand a given phase transition, it is often helpful to first identify the nature of the two nearby phases, which provide crucial information about the critical fluctuations at the transition. Here, however, we pursue the reverse logic and ask the following question: given some deconfined gauge theory in 3 + 1-D, what phase transition can this theory describe? To complete the phase diagram, we will start from the deconfined gauge theory and then identify its nearby gapped phases by perturbing the theory with a relevant perturbation.

A. SU(2) gauge theory with N_f ∈ 2Z fundamental fermions

Consider SU(2) gauge theory with N_f Dirac fermions in the fundamental representation. We will label it the SU(2) + N_f^F theory. A key observation is that in this theory all local (i.e., SU(2) gauge invariant) operators are bosonic, because they are composed of an even number of fundamental fermions. (From a formal point of view, despite the presence of fermionic matter fields, this theory can be defined on non-spin manifolds by an appropriate choice of gauge bundles: on a non-spin manifold we require w_2(SO(3)_g) = w_2(TY^4) mod 2, where the left side is the second Stiefel-Whitney class of the SO(3) gauge bundle and the right side is the second Stiefel-Whitney class of the tangent bundle TY^4 of the 4-manifold Y^4. That the theory can be so defined on a non-spin manifold, without imposing any conditions on bundles for background gauge fields, is an alternate way of seeing that the theory describes a physical system of bosons.) Therefore, the theory describes a phase transition in a purely bosonic system.
A relevant perturbation that can drive the massless theory away from the critical point is the Dirac mass term that is uniform for all flavors.
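For concreteness, this perturbation can be written schematically as follows (the overall normalization and sign convention are our own choice):
\[
\delta\mathcal{L} \;=\; m \sum_{i=1}^{N_f} \bar{\psi}_i \psi_i ,
\]
with a single mass parameter m common to all flavors, so that the full flavor symmetry is preserved.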
We first show that both the m < 0 and m > 0 phases (at least for large |m|) are trivial gapped phases if N_f ∈ 2Z. Let us assume that in the m < 0 phase integrating out the massive fermions generates a trivial Θ-term for the SU(2) gauge theory; this can always be arranged by a suitable choice of UV regularization.
Then on the m > 0 side, the massive fermions contribute a Θ-term for the SU (2) gauge field at Θ = πN f . With the condition N f ∈ 2Z, both phases have trivial SU (2) Θ-terms because of the 2π periodicity of the Θ-angle. Therefore, the SU (2) gauge theory enters a trivial confined phase at low energy and the system has gapped spectrum in both cases. Importantly it is believed that when pure SU (2) gauge theory confines the resulting ground state is also topologically trivial: there is a unique ground state on all spatial manifolds. In condensed matter parlance, we expect a "Short Ranged Entangled" (SRE) ground state [49,50]. In contrast, if N f ∈ 2Z + 1, we have an SU (2) gauge theory with Θ = π for the m > 0 phase. The dynamics of this gauge theory is nontrivial at low energy [88], and the ground state likely has long range entanglement. To keep things simple in this paper we will henceforth focus on the case N f ∈ 2Z.
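The counting quoted here can be summarized as follows; the normalization of the instanton number below is our own convention. Relative to the m < 0 phase, the m > 0 phase carries
\[
S_\Theta \;=\; i\,\Theta\, n[a], \qquad n[a] \;=\; \frac{1}{8\pi^2}\int_{Y^4} \mathrm{tr}\, F\wedge F , \qquad \Theta \;=\; \pi N_f ,
\]
so that Θ is trivial (a multiple of 2π) precisely when N_f ∈ 2Z, while N_f ∈ 2Z + 1 leaves the confining SU(2) theory at Θ = π.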
With N_f ∈ 2Z, by tuning the uniform Dirac mass from negative to positive, the system goes between two gapped phases through a quantum phase transition, which is described by the massless SU(2) + N_f^F theory. For large enough N_f, the IR physics of the SU(2) + N_f^F theory is either free or controlled by the Banks-Zaks fixed point. Therefore, it describes a continuous phase transition. In the following, we will explain that the SU(2) + N_f^F theory with uniform Dirac mass has P Sp(N_f) × Z_2^T symmetry. With this global symmetry, the uniform Dirac mass is the only symmetry-allowed relevant perturbation at the critical point. The m < 0 and m > 0 phases are, respectively, the trivial and the symmetry protected topological phases of this global symmetry.
In order to make the global symmetry explicit, let us construct the SU(2) + N_f^F theory more systematically. First, we consider 4N_f flavors of Majorana fermions in 3 + 1-D (σ_{ij} is shorthand for σ_i ⊗ σ_j). At this stage, the system has an SO(4N_f) flavor symmetry and a time reversal symmetry Z_2^T, whose actions on the Majorana fields are as follows.
It is easy to check that the SO(4N_f) and Z_2^T symmetries commute with each other and that T^2 = (−1)^F. Next, we will gauge a diagonal SU(2) subgroup of the flavor symmetry. To specify the SU(2) subgroup, we reorganize the fermion fields into a matrix form [4]. Let us split the Majorana flavor index into two indices, labeling the Majorana fields as χ_{v,j} with v = 1, 2, ..., N_f and j = 0, 1, 2, 3. The matrix fermion fields X are then defined accordingly, where the σ^µ are Pauli matrices and α, β = 1, 2. This step can be viewed as combining four real fields into one quaternion field. The theory written in terms of X is manifestly invariant under a right SU(2) rotation and a left unitary rotation.
The left rotation L must satisfy the reality condition of the Majorana fermions. As a result, L actually belongs to the Sp(N_f) group. It turns out that Sp(N_f) is the maximal symmetry group that commutes with the SU(2); the two share the same center, the Z_2 generated by −1. We now gauge the SU(2) symmetry and obtain our SU(2) + N_f^F theory, with X̄ = X†γ^0. We can map this formulation back to the complex Dirac fermions in Eq. (15) by ψ_{α,i} = iσ^y_{αβ} X_{1,i;β}, where α is the SU(2) spin index and i is the flavor index. The global symmetry after gauging the SU(2) subgroup is manifestly G = P Sp(N_f) × Z_2^T. One can check that with this global symmetry the uniform Dirac mass is the only allowed mass term.
For example, the imX̄γ^5X mass is not time reversal invariant, and any mass term of the form χ̄_i S_{ij} χ_j with S_{ij} not proportional to the identity breaks the P Sp(N_f) flavor symmetry. In the two gapped phases, on any closed spatial manifold, the system has a non-degenerate ground state and no spontaneous symmetry breaking. The distinction between the two phases can therefore only come from their topological properties: they can be different Symmetry Protected Topological phases of the global symmetry G. Let us assume the m < 0 phase is the trivial disordered phase under this symmetry. We now want to understand the nature of the m > 0 phase. The strategy is to couple background gauge fields of the global symmetry P Sp(N_f) to the system and identify the topological response that is the signature of the SPT state. To achieve this, we will first turn on a background gauge field for the whole SO(4N_f) flavor group and find its topological response. Then we will reduce the response theory down to its SU(2) and P Sp(N_f) subgroups.
Let us start from Eq. (16) and turn on a background SO(4N f ) gauge field A SO(4N f ) . We consider the response to the SO(4N f ) gauge field after integrating out the massive fermions. To cancel dynamical contributions to the partition function, we calculate the ratio between the Euclidean partition functions with m < 0 and m > 0. From Eqns. 9 and 10, we get a purely topological response.
The topological action contains the Θ-terms of SO(4N f ) gauge field in terms of the first Pontryagin class p 1 and the gravitational Θ-term (written in terms of the manifold signature σ -see Eqn. 8).
The Pontryagin class is equal to twice the instanton number of the SO(4N_f) gauge field. More details on the definitions of the Pontryagin class and the instanton number are given in Appendix A.
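Restating this relation in equation form (our notation for the instanton number l):
\[
\frac{1}{2}\int_{Y^4} p_1\!\left(A^{SO(4N_f)}\right) \;=\; l_{SO(4N_f)} \;\in\; \mathbb{Z} .
\]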
We now restrict the SO(4N_f) gauge field to particular configurations which consist of separate Sp(N_f) and SU(2) gauge fields.
In the resulting response theory, Eq. (27), l represents the instanton number for the corresponding gauge bundle, P(a) is the Pontryagin square operation (for a definition see Refs. 4, 89 and references therein), and w_2 and w_4 are the second and fourth Stiefel-Whitney classes [79]. Here we have used the following relations between the instanton numbers and the characteristic classes of the vector bundles [4,90].
Since our fermions transform projectively under both the SO(3) and the P Sp(N_f) bundles, in order for the theory to be consistently defined on any manifold, with or without spin structure, we should impose the following constraint on the gauge bundles.
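The constraint itself is not reproduced in this copy of the text. By analogy with the condition quoted above for the fundamental-fermion theory, a natural guess (our reconstruction, not taken from the source) is
\[
w_2\!\left(SO(3)_g\right) \;+\; w_2\!\left(P Sp(N_f)\right) \;=\; w_2\!\left(TY^4\right) \ \ \mathrm{mod}\ 2 ,
\]
which would guarantee that the projectively transforming matter fields are globally well defined.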
This is the obstruction-free condition for lifting the gauge bundles so that the fermions, which transform projectively, are globally well defined. Based on this relation and a few useful identities (for references, see Wang et al. [4]), we can simplify the response theory in Eq. (27). There are four types of response theories, depending on N_f/2 = k mod 4.
This is the usual Θ−term for the P Sp(N f ) gauge field, and the value of Θ = π is protected by Z T 2 symmetry.
This topological term is robust against Z T 2 breaking because w 4 is a Z 2 class. However, if Z T 2 is broken, theψiγ 5 ψ mass is also allowed at the critical point. Therefore, the Z T 2 symmetry must be preserved in order to have a generic second order transition.
The first term is the Θ-term for the P Sp(N f ) gauge fields, which requires Z T 2 symmetry to be stable. The second term is an independent topological term that can be non-trivial on a non-spin manifold. The second term is a Z 2 class and hence is stable against Z T 2 breaking.
Both terms are stable against Z T 2 symmetry breaking.
Numerically, the conformal window for SU (2) + N F f theory is N f ∼ 6 − 11. For N f > 11, the theory is free. Therefore, we have many examples of 3 + 1-D deconfined quantum phase transitions, which are described by free SU (2) + N F f theory, between the trivial and the P Sp(N f ) × Z T 2 SPT state, for even N f > 11. Assuming further that for N f = 8, 10, a small mass drives the Banks-Zaks theories to the large mass fixed points, we have two explicit examples of 3 + 1-D DQCP, which are described by strongly interacting CFTs. They separate trivial and the P Sp(N f ) × Z T 2 bosonic SPT states.
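As a rough cross-check of the quoted window (a standard one-loop result, not taken from the source), the one-loop beta function coefficient for SU(2) with N_f fundamental Dirac flavors is
\[
b_0 \;=\; \frac{11}{3}\,N_c \;-\; \frac{2}{3}\,N_f \;\Big|_{N_c=2} \;=\; \frac{22 - 2N_f}{3} ,
\]
which changes sign at N_f = 11, consistent with the statement that the theory is IR free for N_f > 11 and (plausibly) conformal just below.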
B. Multiple universality classes
In this section, we demonstrate that the phase transition between the trivial and the SPT state discussed above can be described by several distinct critical theories that are not dual to each other. A schematic renormalization group flow diagram is shown in Fig. (4). In practice such a situation, although not forbidden by any physical principle, is not commonly observed in critical phenomena. It is interesting that here we can exhibit such an example explicitly and in a controlled way.
To introduce these different transition theories, we consider a generalization of our previous construction of 3 + 1-D bosonic DQCP. We start with 4N c N f flavors of Majorana fermion in 3 + 1-D. The total flavor symmetry is SO(4N c N f ).
There is a well known group decomposition for the SO(4N c N f ) group.
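The decomposition referred to here is (our reconstruction of the missing display, based on the quaternionic structure described below):
\[
SO(4N_cN_f) \;\supset\; \frac{Sp(N_f)\times Sp(N_c)}{\mathbb{Z}_2} ,
\]
where the common Z_2 is generated by the simultaneous action of −1 in both factors.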
We can understand the general group decomposition intuitively as follows. First, we use 4 real fermions to form a quaternion fermion. Then we arrange the N f N c quaternion fermions into a N f × N c quaternion matrix fermion field X . The Sp(N f ) transformation can be packed into a N f × N f quaternion matrix L and it has a natural action on a N f dimensional quaternion vector.
So the Sp(N f ) action on the X field is the left multiplication on the X matrix, namely X → LX .
Similarly, Sp(N c ) action is the right multiplication on X by a N c × N c quaternion matrix R, namely X → X R. 14 The group decomposition we used in the previous section is a special case with N c = 1 and N f = 2.
Let us gauge the Sp(N c ) part of the flavor symmetry. The result is an Sp(N c ) gauge theory with N f fundamental fermions, which we label as Sp(N c ) + N F f theory.
The global symmetry of this theory is again G = P Sp(N_f) × Z_2^T. Notice that the global symmetry depends only on N_f and not on N_c. Next, we need to identify the nature of the m < 0 and m > 0 phases through their topological response to the background field of the global P Sp(N_f) symmetry. After integrating out the fermions, we get the following topological action for the m > 0 phase.
The instanton numbers have the following algebraic relations with the Stiefel-Whitney classes [90].
Let us consider the case in which N_f = 2p, p ∈ Z, and N_c = 4q + 1, q ∈ Z. With the above relations, we can simplify the topological action.
If p ∈ 4Z + 1, namely N_f ∈ 8Z + 2, the action simplifies further. (For more details, see, for example, the appendix of Ref. [91].)
There is the following consistency relation for the gauge and tangent bundles, which is the analog of Eq. (30).
We can prove the second term in Eq. (50) vanishes mod 4.
In the derivation, we have again used the relations in Eqs. (31)-(34) to simplify the result. In the end, the topological response for the background P Sp(N_f) gauge field is quite simple and familiar. One interesting observation is that the topological action does not depend on N_c as long as N_c ∈ 4Z + 1. For a fixed but very large N_f ∈ 8Z + 2 and small enough N_c ∈ 4Z + 1, the Sp(N_c) + N_f^F theory is free in the infrared limit. By increasing N_c ∈ 4Z + 1 before it hits some critical value, we obtain different free Sp(N_c) gauge theories (labeled by the red dots in Fig. (5)). Most importantly, these theories all describe a phase transition between the trivial state and the same SPT state protected by P Sp(N_f) × Z_2^T symmetry. These free theories are truly distinct conformal field theories; for instance, they have different numbers of emergent low energy degrees of freedom. This may be formalized using the a-theorem. The quantity a is a universal number characterizing a 4D CFT, the 4D analog of the central charge of 2D conformal field theories: it appears in the trace of the stress energy tensor of the 4D CFT, together with the Euler density E_4 and the square of the Weyl tensor W^2. It was conjectured, and subsequently proven, that a is a monotonic function under RG flow [92], the so-called a-theorem. Since the Sp(N_c) + N_f^F theories of interest are IR-free, we know their a-values explicitly [92]. For fixed N_f, different N_c's give different a-values, indicating that they are distinct 4D CFTs. Furthermore, if N_c is in an appropriate range, the Sp(N_c) + N_f^F theory can fall into the conformal window of the Sp(N_c) gauge theory (labeled by the green dots in Fig. (5)), where it is described by the Banks-Zaks fixed point. This is a strongly interacting deconfined gauge theory, clearly distinct from the free theories; for instance, its gauge invariant operators have different scaling dimensions from those of the free theories [81].
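For orientation, the quantities referred to in this paragraph can be written as follows; the normalizations below are the standard ones, stated here as our convention rather than quoted from the source. The trace anomaly of a 4D CFT reads, schematically,
\[
\langle T^{\mu}{}_{\mu} \rangle \;=\; \frac{1}{16\pi^2}\left( c\, W^2 \;-\; a\, E_4 \right) .
\]
For a free theory, a is obtained by counting fields. In the normalization used later in the text (where a Dirac fermion contributes 11 and a gauge boson 62), the IR-free Sp(N_c) + N_f^F fixed point would have a ∝ 62 N_c(2N_c + 1) + 11 · 2N_cN_f, using dim Sp(N_c) = N_c(2N_c + 1) and a 2N_c-dimensional fundamental. This makes explicit that, at fixed N_f, different N_c give different a and hence genuinely different CFTs.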
The Sp(N c ) generalization provides an explicit example for the phenomenon that there can exist multiple distinct critical theories that describe the transition between the same two nearby phases.
In this controlled example, we are certain that these critical points are not dual to each other. We call them Multiversality classes. In later sections we will provide more examples of such phenomena for fermionic deconfined critical points.
C. Unnecessary continuous phase transitions
In this section, we introduce a phenomenon which we call an unnecessary phase transition.
Unnecessary phase transitions are generic, stable, continuous phase transitions between two identical phases. We will show examples of this phenomenon within the Sp(N_c) + N_f^F theory. We will also discuss examples that do not involve gauge fields.
The first example we consider is the Sp(N_c) + N_f^F theory with different N_c and N_f from the previous sections. An interesting situation is N_c = 4q ∈ 4Z and N_f = 4p ∈ 4Z. With this condition, the two phases with m < 0 and m > 0 are actually the same phase: one can show that the topological response for the m > 0 phase is S_topo = i2πZ, i.e., trivial. For an example that does not involve gauge fields, consider 16 copies of the ³He-B topological superconductor (class DIII), and the phase transition from the trivial state to the topological state for all copies of the system. Certainly, 16 copies of ³He-B are adiabatically connected to a trivial phase, because the surface has no time-reversal anomaly [86]. However, the transition is not guaranteed to be a single generic transition: different copies of the system can go through the phase transition successively. In order to have a single transition, there must be some flavor rotation symmetry. The most naive choice is an SO(16) symmetry which rotates the 16 copies of the TSC. This symmetry, together with Z_2^T, allows only one Majorana mass term, so the low energy theory is that of 16 Majorana fermions with a single common mass. Therefore, there is a generic continuous phase transition when we tune the mass from negative to positive. However, there is a problem in this situation. The two sides of the phase transition are different topological phases protected by the SO(16) symmetry. In particular, on one side, m < 0, we can regularize the system to be in the trivial phase, where we have a trivial response theory for the SO(16) background gauge field. On the other side, m > 0, the response theory of the background SO(16) gauge field has a Θ-term with Θ = π, which indicates that the system is an SPT state protected by the SO(16) symmetry. Since the two sides are distinct topological phases of the SO(16) symmetry, there will always be a phase transition separating them. This seems to be a disappointing case. However, a slight modification of the symmetry gives us an example of another unnecessary continuous phase transition.
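For reference, a minimal sketch of the SO(16)-symmetric low energy theory invoked above (our notation; m is the single common Majorana mass):
\[
\mathcal{L} \;=\; \sum_{i=1}^{16} \bar{\chi}_i\left( i\gamma^{\mu}\partial_{\mu} \;-\; m \right)\chi_i ,
\]
with the χ_i Majorana fermions transforming as a vector of SO(16); tuning m through zero is the single generic transition discussed here.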
Consider breaking the flavor symmetry from SO(16) down to SO(2) × SO(7). The symmetry action on the fermions can be understood in the following way. Let us pack the 16 Majorana fields into a 2 × 8 matrix. The SO(2) and SO(7) symmetries are implemented by left and right multiplication by orthogonal matrices; the right multiplications act in the 8-dimensional spinor representation of SO(7). This symmetry only allows the χ̄χ mass. To see this, we can write down the general form of a Lorentz and time reversal symmetric mass term χ̄_i S_{ij} χ_j, where S_{ij} is a real symmetric matrix in flavor space, and decompose S into representations of SO(2) × SO(7); the only matrix that commutes with all the SO(2) and SO(7) generators is I_2 ⊗ I_8, which is the identity. Therefore, the χ̄χ term is the only allowed mass term. This means that, with SO(2) × SO(7) symmetry, there is still a generic phase transition as we tune the mass from negative to positive. Since the phase transition is described by free fermions, it is stable against small interactions.
Next we show that in the SO(2) × SO(7) case the m < 0 and m > 0 phases are actually the same phase. We argue this through the surface state of the system. At the free fermion level, the natural 2 + 1-D surface state of the m > 0 phase has 16 gapless Majorana fermions. We will argue that the surface state can be symmetrically gapped out by interactions, which indicates that the bulk state is in the same class as the trivial state. First, let us organize the 16 Majorana fermions into 8 Dirac fermions.
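Concretely, with the Majorana fields packed into the 2 × 8 matrix χ_{a,j} (a = 1, 2; j = 1, ..., 8) introduced above, one natural choice (ours) is
\[
\psi_j \;=\; \frac{1}{\sqrt{2}}\left( \chi_{1,j} \;+\; i\,\chi_{2,j} \right), \qquad j = 1,\dots,8 ,
\]
so that the SO(2) left rotation becomes the U(1) phase rotation ψ_j → e^{iθ}ψ_j, while the ψ_j transform in the 8-dimensional spinor representation of SO(7).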
The SO(2), or U(1), symmetry now acts as the phase rotation of the ψ fermions, and the time reversal action on the ψ's follows from that on the Majorana fields. The ψ_i's also form the spinor representation of the SO(7) symmetry. We then introduce a spin singlet pairing in the theory which completely gaps out the surface state.
This pairing obviously preserves the SO(7) symmetry. It breaks both U(1) and the time reversal T; however, it preserves another anti-unitary symmetry T̃ = T U(π/2) [86]. The next step is to quantum-fluctuate the pairing order parameter to restore the symmetries. This can be done by condensing the 2π vortices of the pairing order parameter. There are two key requirements for obtaining a symmetric gapped surface state after the condensation. First, in order to restore both U(1) and T, the condensation has to preserve the T̃ symmetry. Second, the vortices must have a gapped spectrum. These conditions need special care because the vortex cores of the system carry Majorana zero modes [96]. For our system, a 2π vortex (π flux for the fermions) carries 8 Majorana zero modes, χ_i, i = 1, ..., 8. Their T̃ transformation is T̃ : χ_i → χ_i, because T̃ does not change the vortex background. This time reversal symmetry forbids gapping out the zero modes with any fermion bilinear term. However, it is well known that an SO(7) invariant four-fermion interaction, the so-called Fidkowski-Kitaev interaction [97], can give rise to a gapped spectrum for 8 Majorana modes. With this interaction, we can condense the 2π vortices and obtain a symmetric gapped surface state. This indicates that the bulk state is topologically trivial.
The phase diagram of the system is shown in Fig. (6). The m term corresponds precisely to the free fermion mass and U_int to the Fidkowski-Kitaev interaction. The free fermion phase transition in 3 + 1-D is stable against small interactions. In the limit of large interaction, we can first diagonalize the interaction and treat the kinetic term as a perturbation; the system is then essentially a trivial insulator with a tensor-product wavefunction. Therefore, the phase transition can be avoided by going through the strongly interacting part of the phase diagram.
IV. FERMIONIC DECONFINED CRITICAL POINTS IN 3 + 1-D
In this and the following sections we study quantum critical points that can be formulated as a 3 + 1-D SU(2) gauge theory coupled to N_f flavors of massless adjoint Dirac fermions, denoted SU(2) + N_f^A, and some generalizations of it. Based on perturbative calculations, for N_f^A > 2 the theory is free in the infrared limit. Numerically, the N_f^A = 2 theory is inside the conformal window [83], and there are also numerical indications that the N_f^A = 1 theory is conformal in the IR [82]. In this section, we study in detail the IR-free SU(2) gauge theory with N_f^A = 3 massless adjoint Dirac fermions and interpret it as a quantum critical point between fermionic SPT states. Since the gauge theory is free in the IR, we can make many precise statements.
A. SU(2) gauge theory with N_f = 3 adjoint fermions

We consider a quantum critical point that can be described by a 3 + 1-D SU(2) gauge theory with 3 flavors of adjoint Dirac fermions. The story is very similar for all odd N_f^A > 3. We label the fermions by ψ^a_i, where a = 1, 2, 3 is the SU(2) index and i = 1, 2, 3 is the flavor index. A key difference from the fundamental fermion case is that there are gauge singlet fermion operators (the baryons), such as ε_{abc} ψ^a ψ^b ψ^c and ε_{abc} ψ^{a†} ψ^b ψ^c. Indeed, all local operators of the theory carry quantum numbers that can be built up as composites of these baryons. Therefore, the SU(2) gauge theory with adjoint fermion fields describes a critical theory in intrinsically fermionic systems.
The Lagrangian for the N_f^A = 3 theory takes the standard minimally coupled form, with the gauge field entering through the SU(2) generators in the spin-1 (adjoint) representation. The theory has a Z_2^T symmetry whose transformation on the fermion fields is as follows.
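A minimal sketch of this Lagrangian, in our own conventions for normalizations and index placement:
\[
\mathcal{L} \;=\; \sum_{i=1}^{3} \bar{\psi}_i\, i\gamma^{\mu}\!\left( \partial_{\mu} \;-\; i\, a^{\alpha}_{\mu} T^{\alpha} \right)\psi_i \;-\; \frac{1}{4g^2}\, F^{\alpha}_{\mu\nu} F^{\alpha\,\mu\nu} ,
\]
where the T^α (α = 1, 2, 3) are the SU(2) generators in the spin-1 representation acting on the color index of ψ^a_i, and a^α_μ is the dynamical SU(2) gauge field.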
Following the method in previous sections, we can construct the adjoint SU (2) theory from 18 Majorana fermions, and then gauge the diagonal SO(3) part of the total SO(18) flavor symmetry.
Since SO(18) ⊃ SO(3) × SO (6), the global symmetry after gauging is G = SO(6) × Z T 2 . 17 The Dirac mass in Eq. (62) is the only mass term allowed by the global symmetry.
As written, the theory in Eq. (62) also has a global 1-form Z_2 center symmetry [98], because we did not include any matter field in the SU(2) fundamental representation. The physical manifestation of the 1-form symmetry is that the Hilbert space of the system contains unbreakable spin-1/2 electric flux loops. However, if we are to view the gauge theory as emerging from a UV system of gauge invariant fermions (defined perhaps on a lattice), the 1-form symmetry can only be an infrared emergent symmetry. Therefore, we should allow for explicit breaking of the 1-form symmetry in the UV. To that end we introduce a massive spin-1/2 particle into our theory, which we call the spectator field. The spectator field allows the spin-1/2 electric flux loops to break. We emphasize that, from the point of view adopted in this paper, the theory in Eq. (62) is not yet complete, because we have not specified the properties of the spin-1/2 spectator field under the global symmetry G. To have a complete theory, we need to specify the symmetry charges of the spectator field under the 0-form global symmetry G. (This is in some sense equivalent to defining the symmetry properties of the spin-1/2 electric flux lines.) Perhaps surprisingly, the symmetry charges of the massive spectator field crucially determine the nature of the massive (m ≠ 0) phases of this theory, even though the spectator does not participate in the low energy physics at all. We will explain this phenomenon in detail later. For now, let us restrict our attention to the 0-form global symmetry of the system, which is G = SO(6) × Z_2^T. The theory in Eq. (62) at the massless point is a free theory in the infrared. The fermion mass is a relevant perturbation which will drive the system to the infinite mass fixed point. Thus the massless theory describes a continuous quantum phase transition between the m < 0 and m > 0 phases.
The schematic renormalization group flow of the fermion mass and gauge coupling is in Fig. (3).
Let us identify the phases with large negative or positive fermion mass. For large fermion mass, we can integrate out the fermions first. We choose a UV regularization such that in the m < 0 phase the SU(2) Θ-term generated by integrating out the massive fermions is zero. The SU(2) gauge theory confines at low energy and the resulting state is a trivial gapped state. For the large m > 0 phase, one can show that the Θ-angle is 12π for the SU(2) gauge fields (integrating out the fermions generates an SO(3) Θ-angle of 6π, which becomes 12π once we restrict to SU(2) gauge bundles). This is also trivial because of the 2π periodicity of the Θ-angle, and the SU(2) gauge theory is again in a confined phase. In particular, both confined phases are believed to have Short-Range-Entangled ground states. For both signs of the mass, in the large mass limit we expect a gapped and non-degenerate ground state with no symmetry breaking. These phases must fall into the classification of fermionic SPT states with SO(6) × Z_2^T symmetry. Since this symmetry class is not usually considered in the literature, let us first discuss the interacting classification of such SPT phases.
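The Θ-angle counting above, and in the other examples in this paper, is consistent with the standard rule for integrating out a Dirac fermion in representation R (stated in our conventions, with Dynkin index T(fund) = 1/2 and T(adj) = 2 for SU(2)):
\[
\Delta\Theta \;=\; 2\pi\, T(R)\, N_f \quad\Longrightarrow\quad \Delta\Theta_{\text{fund}} = \pi N_f ,\qquad \Delta\Theta_{\text{adj}} = 4\pi N^A_f ,
\]
giving 12π for N_f^A = 3 here and 4π for the N_f^A = 1 case discussed later, in agreement with the values quoted in the text.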
The classification of fermionic SPTs for this symmetry in 3 + 1-D is Z_8 × Z_2, which can be labeled by two indices n ∈ Z_8 and η ∈ Z_2. The Z_2 part comes from the pure Z_2^T SPT labeled [99] by ef mf. The Z_8 part is the reduced classification descending from the free fermion SPT with the same symmetry. Note that at the free fermion level, SPTs with this symmetry have a Z classification, which we will label by the same index n. The root n = 1 state of the free fermion SPT with SO(6) × Z_2^T symmetry can be viewed as 6 copies of the topological superconductor with Z_2^T symmetry, namely the DIII class. The 6 copies form a vector representation under SO(6). At the surface, the n = 1 state has (within free fermion theory) 6 massless Majorana fermions; for general n there are correspondingly 6n massless Majorana fermions at the surface. With interactions, we need to consider whether for some special n the surface is anomaly free. The anomaly on the surface has two parts: (1) a pure time reversal anomaly; (2) a mixed anomaly between SO(6) and Z_2^T, which is sometimes called a (generalized) parity anomaly in the literature. The pure time reversal anomaly is Z_16-fold.
Physically, this means that 16 copies of Majorana fermions in 2 + 1-D are time reversal anomaly free. Therefore, at least 8 copies of the root state are needed to cancel the surface time reversal anomaly. The mixed anomaly between SO(6) and Z_2^T, i.e., the parity anomaly, is 4-fold [100]. The physical diagnostic for this mixed anomaly is the quantum number of the background SO(6) monopole: one can show that for 4 copies of the root state the monopole of the background SO(6) gauge field is a trivial boson. Therefore, the surface of the n = 8 state is totally anomaly free, and with interactions the free fermion SPT classification collapses to Z_8. In addition, the n = 4 state corresponds to the eT mT state [44,99]. For n = 4 there is no parity anomaly involving SO(6) and Z_2^T; the surface anomaly comes purely from the time reversal anomaly. For n = 4, the surface theory has 4 × 6 = 24 Majorana fermions. Since the time reversal anomaly is Z_16 periodic, the surface corresponds to the surface of the ν = 24 ∼ 8 state in the DIII class, which is precisely equivalent to the eT mT anomalous surface.
Let us always assume the m < 0 phase is the trivial state (n = 0, η = 0), which can be achieved by a suitable UV regularization. The question is which (n, η) the m > 0 phase falls into. To answer this, we derive the topological response to a background SO(6) gauge field through the same method used before, namely gauging the total SO(18) group and restricting the gauge configurations to its subgroups. In the topological action for the m > 0 phase (on an arbitrary closed oriented spacetime manifold), S^{SO(6)}_θ is the usual Θ-term for the SO(6) background gauge field, and the combination (S^{SO(6)}_θ − 3σ/8) is always an integer. The response theory so far indicates that the m > 0 phase is a non-trivial topological state. However, it is not enough to determine the topological index of the state exactly. In particular, we cannot tell whether the system belongs to the n = 3 state or the n = 7 ∼ −1 state, since the difference between the two is the n = 4 state, i.e., the eT mT state, whose partition function is always trivial on an orientable manifold. It turns out that to settle this we have to consider the symmetry properties of the spectator field. We shall see that different symmetry properties of the spectator field lead to different topological phases on the m > 0 side.
To demonstrate the importance of the spectator field, we consider the following two different choices of spectators. There are other ways to choose spectator fields. We will leave them to future studies. From the discussions below, we shall see that the symmetry properties of the spectator field crucially determine the nature of the m > 0 phase.
B. Band-theory-forbidden phase transition between band-theory-allowed insulators
The simplest choice of the spectator is a bosonic particle which is neutral under all global symmetries, namely an SU (2) spin-1 2 boson which is a scalar under SO(6) and has T 2 = 1. We will see that this choice of spectator field leads to an interesting type of band-theory-forbidden phase transition between two band theory allowed states.
To consistently define this spectator field, we must impose a constraint on the gauge connections, and this relation must be satisfied on any base manifold Y^4. The topological response can then be simplified, and it suggests n = 3 in the Z_8 classification.
To confirm the nature of the topological phase, let us investigate the surface state of the system.
The natural surface state of the system is a QCD_3 theory with an SU(2) gauge field coupled to 3 flavors of massless adjoint Dirac fermions [4]. The action for the 2 + 1-D surface theory, Eq. (68), explicitly includes the massive spectator field, labeled by z, together with its time reversal and gauge transformations. The surface theory in Eq. (68) is not very illuminating by itself because it involves gauge fields.
We want to deform the surface theory in a symmetry preserving manner into a more familiar surface state. Notice that the spectator boson z is charged only under the SU(2) gauge field and is neutral under all global symmetries, so it can be condensed without breaking any symmetry; this completely Higgses the gauge field and connects the surface theory to a free fermion description. The resulting picture is summarized in Fig. (7): the phase transition between the trivial state and the n = 3 state in the SO(6) × Z_2^T class can happen via two different routes. In the weakly interacting limit, a trivial superconductor can only become the n = 3 TSC through three successive topological phase transitions. At each step, the topological index can only jump by 1, and the low energy theory is described by 6 massless Majorana fermions with SO(6) symmetry. However, the SU(2) + N_f^A = 3 formulation suggests another, very striking possibility: in the strongly interacting region it is possible to go between the trivial state and the n = 3 state through a single generic second order transition. This is a quantum phase transition between two band insulators which is forbidden by band theory. In Appendix D we give a very simple example of this phenomenon that does not involve emergent gauge fields.
These two possibilities for the phase transition may merge at a multi-critical point somewhere in the phase diagram. One possible theory for the multi-critical point is the Higgs transition of the bosonic spin-1/2 spectator in the 3 + 1-D bulk. Once the spectator is condensed in the bulk, the SU(2) gauge fields are completely Higgsed out, and each flavor of the adjoint fermions becomes three physical fermions with a topological band structure. Let us label the physical fermions by c^a_j, with a, j = 1, 2, 3; they can be expressed as gauge invariant combinations of the gauged fermions ψ and the spectator field z.
The c fermions are gauge invariant operators, and one can easily check that they share the same symmetry transformations as the ψ fermions. The three successive phase transitions can then be viewed as mass inversion transitions, one for each flavor of the c fermions.
It is interesting to ask what happens if we first take the mass of the fundamental spectator scalar to infinity. Then the gauge theory has the Z 2 1-form symmetry associated with the spin-1/2 electric flux loops. This symmetry is spontaneously broken in the free theory that emerges at the massless point. Upon perturbing with a fermion mass the gauge theory enters a confined phase. Then the electric flux loops acquire a line tension, and the (Z 2 ) 1 symmetry is restored. The spin-1/2 electric loops are however decoupled from other excitations. If now we re-introduce the fundamental scalars to explicitly break the (Z 2 ) 1 symmetry, in either phase the loops can break but the sole effect on the phase is to determine the SPT character. At the massless critical point the explicit breaking of the (Z 2 ) 1 has no effect on low energy critical properties of the fermions. The spectator scalars will be deconfined at the critical point and gapped away from it.
C. Multiple universality classes in fermionic phase transitions
Another choice of the massive matter content is a spin-1 2 bosonic particle which is an SO(6) scalar but a Kramers doublet under time reversal, namely a spin-1 2 boson with Q SO(6) = 0 and T 2 = −1. This choice of spectator field implies the following constraint on the gauge connection.
It appears that this state also corresponds to the n = 3 state. However, it is known that the eT mT state, which corresponds to n = 4 state [86,101], is only visible in the partition function on a nonorientable manifold [100,[102][103][104]. Therefore, the topological response on an orientable manifold cannot tell us precisely what topological phase the m > 0 state belongs to. In the following, we will instead use physical surface arguments to determine the topological index of this system.
To determine the nature of the m > 0 phase, we again look at the boundary state. The surface theory has the same form as the QCD_3 theory written in Eq. (68); the only difference is the time reversal transformation on z. In this situation, it might appear that condensing the bosonic spectator field would break time reversal. The key point, however, is that the time reversal transformation on z can always be combined with an SU(2) gauge rotation.
Physical time reversal symmetry is preserved so long as such a combination of the time reversal action in Eq. (76) and an SU(2) gauge rotation is preserved. To be explicit, we consider a gauge equivalent time reversal transformation Z̃_2^T. (This Z̃_2^T transformation does not commute with SU(2) gauge transformations; it does, however, commute with the SO(6) global symmetry.) The boson is a Kramers singlet under this time reversal transformation. (One might think that, because of the gauge rotation involved, the value of T^2 for the spectator is meaningless. This would be true if the spin-1/2 boson were the only matter field in our theory; however, we also have adjoint fermion matter with a fixed time reversal transformation, so T^2 for the spectator does have physical implications.) Notice that this time reversal transformation also acts differently on the gauged fermion fields, by an additional gauge rotation (here we suppress the flavor index because the operations are identical for the three flavors). Now let us condense the spectator boson with Im(z) = 0 and Re(z) ≠ 0. This condensate completely Higgses the SU(2) gauge theory while preserving the SO(6) × Z̃_2^T symmetry. The three adjoint fermions become 9 physical Dirac fermions, but we need to be careful about their time reversal transformations in order to determine the topological index. In particular, the relative signs of the time reversal transformations of the surface Dirac fermions play an important role here. In our convention, a Dirac fermion with the "+" transformation, namely ψ → +iγ^0 ψ^†, contributes n = +1 to the topological index of the bulk; correspondingly, the "−" transformation contributes n = −1 [80]. Based on the transformation in Eq. (79), the surface state corresponds to the n = −1 + 1 − 1 = −1 state in the Z_8 classification.
From the above physical arguments, we see that the spectator field plays an important role in defining the global structure of the gauge fields and in determining the nearby topological phase, although it is massive and never appears at low energy near the critical point. To our knowledge, this is not a widely appreciated phenomenon. However, it is not uncommon; in Appendix E we include an example of this phenomenon in the transition from a 2 + 1-D bosonic Mott insulator to a time reversal symmetry enriched Z_2 spin liquid. Returning to the present system, this provides a clear example of multiple universality classes in fermionic systems. The transition between the n = 0 state and the n = −1 state can happen within band theory, where the critical theory is described by 3 massless Dirac fermions in the bulk with SO(6) × Z_2^T symmetry. The SU(2) + N_f^A = 3 theory gives another phase transition theory between the n = 0 and n = −1 states. We know that in the IR this theory contains just free SU(2) gluons and 9 Dirac fermions, which is clearly different from the critical theory in the free fermion limit. These two theories differ not only in their matter content but also in their emergent symmetries at the critical point. In particular, the gauge theory has an emergent Z_2 1-form symmetry which is spontaneously broken in the IR.
The theory discussed in this section is readily generalizable to all odd N_f^A > 3. With general N_f^A, the global symmetry of the system is SO(2N_f^A) × Z_2^T, and the interacting fermionic SPT classification for this class is again Z_8 × Z_2. With a Kramers singlet bosonic spectator field (an SU(2) gauge spin-1/2 that is a scalar under the global symmetry), the analysis parallels the N_f^A = 3 case above.

V. SU(2) GAUGE THEORY WITH A SINGLE ADJOINT DIRAC FERMION

The SU(2) + N_f^A = 1 theory is a special case of the odd N_f^A series. The global symmetry in this case is SO(2) × Z_2^T ∼ U(1) × Z_2^T, which is the symmetry of the topological superconductor in the AIII class. Since this theory is a strongly interacting gauge theory in the IR, its low energy fate is more subtle than in the previous examples. We will discuss this theory in detail in the following sections.
There is some numerical evidence that this theory is conformal in the IR [82]. We will explore its interpretation as a quantum critical point.
Note that the fermion mass is a relevant perturbation for the massless SU(2) + N_f^A = 1 theory [82]. However, the massless SU(2) + N_f^A = 1 theory is strongly coupled in the gauge theory description, so a priori we do not know whether an infinitesimal mass perturbation m will flow to the infinite mass fixed point. If a small mass does lead to a flow to the infinite mass limit, we will have a direct second order phase transition between the two gapped phases. If this is not the case, there may be an intermediate phase in the small mass limit. In this section, we only discuss the properties of the system at large fermion mass and determine the distinct gapped phases. Inspired by this understanding, in Sec. VI we describe a possible IR theory of the massless SU(2) + N_f^A = 1 theory. We will see that within this proposed IR theory there are indeed intermediate phases for small m which differ from the large m phases by the presence of an extra topologically ordered sector.
A. Global symmetry and topological response
As mentioned in the previous section, the SU(2) gauge theory with adjoint fermion fields describes a critical theory in intrinsically fermionic systems. The Lagrangian for the N_f^A = 1 theory, Eq. (80), takes the same minimally coupled form as before. The theory has a U(1) × Z_2^T global symmetry acting on the fermion fields; in condensed matter language, U(1) × Z_2^T is the symmetry of the topological superconductor in the AIII class. The Dirac mass in Eq. (80) is the only mass term allowed by the symmetry. As written, the theory in Eq. (80) also has a global 1-form Z_2 center symmetry, because of the absence of matter fields in the SU(2) fundamental representation. However, as we emphasized before, this gauge theory is to be viewed as an emergent theory from a UV system of gauge invariant fermions where there is no 1-form symmetry. Therefore, we will impose explicit breaking of the 1-form symmetry in the UV by introducing a massive spin-1/2 spectator field into our theory. In this section, we will only consider the 0-form global symmetry of the system, which is G = U(1) × Z_2^T. We want to explore the theory in the large fermion mass limit, where we can analyze it by integrating out the fermions first. We choose a UV regularization such that in the m < 0 phase the SU(2) Θ-term is zero. The SU(2) gauge theory is then confined at low energy and the resulting state is a trivial gapped state. For the large m > 0 phase, one can show that the Θ-angle is 4π for the SU(2) gauge fields. This is also trivial because of the 2π periodicity, and the SU(2) gauge theory is again in a confined phase. In particular, both confined phases are believed to have Short-Range-Entangled ground states.
For both signs of the mass, in the large mass limit we expect a gapped and non-degenerate ground state with no symmetry breaking. These phases must fall into the classification of the AIII topological superconductor (TSC) in 3 + 1-D, which, as we mentioned before, is Z_8 × Z_2 once interaction effects are included [86]. We can denote the different AIII TSC states by two labels, n ∈ Z_8 and η ∈ Z_2. The n ≠ 0 states are descendants of the free fermion AIII TSC; the typical 2 + 1-D surface state is n flavors of massless Dirac fermions. The n = 4 state is in the same phase as a bosonic SPT protected by Z_2^T symmetry, which is usually signified by its surface Z_2 topological order, the so-called eT mT state [44,53,99]. The η = 1 state is another Z_2^T bosonic SPT state, whose surface Z_2 topological order is the so-called ef mf state [44]. Let us always assume the m < 0 phase is the trivial state (n = 0, η = 0). We want to determine which (n, η) the m > 0 phase falls into.
We derive the topological response to a background U(1) gauge field through the same method used before, and obtain the topological action for the m > 0 phase on an arbitrary closed oriented spacetime manifold.
The response theory implies that the m > 0 phase is a non-trivial topological state. However, as before, we cannot tell precisely to which class the system belongs: there may be an n = 4 state, i.e., the eT mT state, attached to the system, whose partition function is always trivial on an orientable manifold. This can be settled by considering the symmetry properties of the spectator field. Just as in the previous section, we will demonstrate that different symmetry properties of the spectator field lead to different topological phases on the m > 0 side.
B. An alternate argument to identify the massive phases
It is straightforward to use the argument of the previous section to show that (1) with a T^2 = 1, charge neutral, spin-1/2 spectator boson, the m > 0 phase is the n = 3 state in the AIII class, and (2) with a T^2 = −1 spectator boson, the m > 0 phase is the n = −1 state. We will not repeat that argument here; instead, in this section we provide a different argument supporting the same result.
We can justify the nature of the gapped phases from another point of view. Let us first consider the structure of the massive phases in the infinitely massive spectator limit. Later we will reinstate the finite mass of the spectator. We will particularly be interested in understanding the anomaly of the surface theory as a window into which SPT phase the bulk system is in. The way to identify the anomaly of the surface state is through the method of anomaly inflow.
Deep in the confined phases, all the SU (2) electric flux lines have line tension. In the infinitely massive spectator limit, the spin-1 2 electric flux lines cannot end in the bulk. In other words the system has an exact 1-form Z 2 symmetry. The physical difference between the two spectator choices in this case lies in the properties of the spin-1 2 electric flux lines. While for the T 2 = 1 case the spin-1 2 line has nothing special associated with it, the T 2 = −1 case physically corresponds to the situation that each spin-1 2 line is decorated with a Haldane chain protected by the time reversal symmetry [44,105]. For our system, the surface anomaly contributions come from both the bulk massive adjoint fermions and the unbreakable spin-1 2 electric flux loop sector. Here we want to do a comparison between the T 2 = 1 and T 2 = −1 spectator cases. Notice that the only physical difference between the two cases is whether we decorate the spin-1 2 loops with a Haldane chain protected by Z T 2 . Since the adjoint fermions are topologically decoupled from the spin-1 2 loops, changing the symmetry properties of these loops should not change the surface anomaly contributed by the bulk adjoint fermions. Therefore, we will be focusing on the differences in the surface anomalies contributed by the loop sector for the T 2 = 1 and T 2 = −1 spectator cases.
A useful formal approach to identifying the surface Hilbert space and anomalies is to couple the system to background gauge fields of the global symmetry. We can study the statistical and symmetry properties of the background symmetry fluxes in the bulk and then use an anomaly inflow argument to identify the surface excitations [106]. Since the surface anomaly is a renormalization group invariant property of the SPT phase, we can evaluate the anomalies in the weak coupling or UV limit, where reliable calculations are possible.
The symmetry we are interested in here is the 1-form Z_2 symmetry. Let us couple the system to a background 2-form gauge field for the 1-form Z_2 symmetry and consider a background SO(3) monopole configuration. It is sufficient to calculate the symmetry and statistical properties of the SO(3) monopole in the weak coupling limit; the answers will be unmodified in the strong coupling limit. Let us write down the surface action in the presence of a background SO(3) gauge flux along the z direction in color space.
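A schematic form of this surface action, in our own notation and showing only the coupling to the background flux a^z_μ:
\[
\mathcal{L}_{\text{surf}} \;=\; \bar{\psi}^{a}\, i\gamma^{\mu}\!\left( \partial_{\mu}\,\delta^{ab} \;-\; i\, a^{z}_{\mu}\, (T^{z})^{ab} \right)\psi^{b} ,
\]
with a, b = 1, 2, 3 the SO(3) color indices of the adjoint fermion.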
Here T^z is the SO(3) generator along the z direction. We can diagonalize the T^z matrix by a unitary rotation of the fermions; it has eigenvalues ±1 and 0. Let us label the three components of the fermion as ψ_+, ψ_−, ψ_0 (ψ_+ ∼ ψ_x + iψ_y, ψ_− ∼ ψ_x − iψ_y, ψ_0 ∼ ψ_z). Only ψ_+ and ψ_− couple to a^z_μ, with charge +1 and −1 respectively; hence ψ_+ feels 2π flux and ψ_− feels −2π flux. With rotational symmetry in color space, every monopole can always be viewed this way. The gauge fluxes in our case are time reversal invariant, since time reversal symmetry flips the gauge charges rather than the fluxes. From the surface theory in Eq. (85), we know that a 2π flux of a^z traps two complex fermion zero modes, guaranteed by the index theorem. One zero mode is associated with the ψ_+ fermion, which we label f_+; the other is associated with the ψ_− fermion, which we label f_−. Let us denote the flux background with both zero modes empty by |0⟩. There are in total four states: |0⟩, f_+^†|0⟩, f_−^†|0⟩, and f_+^†f_−^†|0⟩. The modes f_+ and f_− carry opposite gauge charges but the same global U(1) charge. The gauge neutral states among the four are |0⟩ and f_+^†f_−^†|0⟩, but they carry opposite global U(1) charges of ±1. We can attach a ψ_0 fermion to the monopole state to compensate the U(1) charge; however, this makes the monopole a fermionic object. Let us label the two resulting states accordingly. The time reversal transformations of f_+ and f_− are a bit subtle. After carefully solving the zero mode wavefunctions in Appendix C, we find the transformations of f_+^† and f_−^†, where a relative minus sign appears because the fluxes are opposite. With these, we can work out the time reversal transformation on the flux.
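For orientation, the charges of the four flux states described above can be tabulated as follows; this bookkeeping is ours, with the convention that f_± carry gauge charge ±1 and each carries global U(1) charge +1, which is consistent with the statements in the text:
\[
\begin{aligned}
|0\rangle &: \ (q_{\text{gauge}},\, q_{U(1)}) = (0,\,-1), & \qquad f_+^{\dagger}|0\rangle &: \ (+1,\,0), \\
f_-^{\dagger}|0\rangle &: \ (-1,\,0), & \qquad f_+^{\dagger} f_-^{\dagger}|0\rangle &: \ (0,\,+1).
\end{aligned}
\]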
Since (γ^0γ^5)^2 = −1, the SO(3) monopole is a Kramers singlet fermion [107]. Note that this result cannot be altered by redefining the Z_2^T transformation through a combination with a U(1) phase rotation, because the two states are gauge and global charge neutral.
Let us consider an interface between the vacuum and our system, and imagine a process in which we take a background SO(3) monopole in the vacuum and drag it into our system. This process can be viewed as an instanton event for the 2 + 1-D interface, where the background SO(3) flux changes from 0 to 2π. The SO(3) monopole is a neutral boson in the vacuum; however, it becomes a neutral fermion in the bulk system. As a result, the instanton event, besides creating a 2π background flux on the surface, must also nucleate a neutral fermion excitation, labeled f, in order to conserve the fermion parity of the whole system. Therefore, the surface must have a neutral fermion excitation. Now let us introduce a finite mass spin-1/2 spectator boson on the surface, which can be viewed as the end point of a spin-1/2 electric flux line on the boundary; we label it e. In the weak coupling limit, e and f are deconfined particles on the surface, and we need to determine their mutual braiding statistics. The instanton event described above is a local process on the surface. Locality implies that, if we adiabatically drag the spectator boson e around the location of the instanton event, there should be no difference in the accumulated Berry phase before and after the instanton event. As a result, the braiding phase between the spectator and the neutral fermion f must cancel that between the spectator and the 2π background flux. Since the spin-1/2 spectator can be viewed as carrying half a charge under SO(3), the braiding phase between the spectator and the 2π flux is π. Therefore, e and f have a mutual π braiding phase, and they form a Z_2 topological order on the surface. Now let us consider the time reversal properties of this Z_2 topological order. For the first case, with a T^2 = 1 spectator, we have a vanilla Z_2 topological order which is not anomalous.
For the other case with T 2 = −1 spectator, since f is a Kramers singlet, the bound state m ∼ ef is also a Kramers doublet boson. The Z 2 topological order is the so-called eT mT state which carries time reversal anomaly of the n = 4 state in the AIII class.
We can also include the spin-1/2 matter and break the 1-form Z_2 symmetry in the bulk. Dynamically, the SU(2) gauge theory will be in a confined phase for large fermion mass, which means all electric flux lines have a finite line tension. With the 1-form Z_2 symmetry, the confined phase has unbreakable tensionful spin-1/2 electric flux loops. With finite mass spectators, these loops break dynamically in the bulk and the system is in an ordinary confined phase. However, since the time reversal anomaly on the surface does not involve the 1-form symmetry, it survives even with a finite spectator mass.
VI. A POSSIBLE 3 + 1-D DUALITY
The SU(2) + N_f^A = 1 theory with the T^2 = −1 spectator field potentially provides a continuous phase transition theory between the n = 0 and n = −1 states in the AIII class. The same phase transition can also happen in a free fermion setting, where it is described by a free massless Dirac fermion. There are several possible scenarios for the relationship between the strongly coupled gauge theory and the single Dirac fermion theory. For example: (1) a simple possibility is that the low energy theory of the SU(2)* + N_f^A = 1 theory is a completely different critical theory from the single Dirac fermion; (2) perhaps the most exciting scenario is that the SU(2)* + N_f^A = 1 theory in the IR is strictly dual to a single Dirac fermion. Unfortunately, we will argue that the latter scenario is very unlikely. Instead, a candidate low energy theory of the SU(2)* + N_f^A = 1 theory can be very close to a single Dirac fermion. In particular, we will suggest a possible IR theory which contains a single free Dirac fermion plus a decoupled gapped topological sector. For energies lower than the gap of the topological order, the theory is described purely by a free Dirac fermion.
An important consistency check on any proposed IR theory is anomaly matching with the UV theory. Our UV theory in the m = 0 limit has emergent global symmetries which are anomalous.

FIG. 9: The transition between the n = 0 state and the n = −1 state can happen in two ways. One is a free fermion transition with a single gapless Dirac fermion as the critical theory. The other is through a strongly coupled non-abelian gauge theory, which we label SU(2)* + N_f^A = 1. A very exciting possibility would be that the two 3 + 1-D conformal field theories are dual to each other in the infrared limit. Unfortunately, this is not likely the case; we will argue that a possible IR theory is a single Dirac fermion plus a topological field theory.
Matching the emergent symmetries and their anomalies between the IR and the UV provides nontrivial constraints. In particular, our theory in the infinite spectator mass limit is closely related to the celebrated Seiberg-Witten theory [60,108], whose global symmetry and anomaly structure are well understood in the high energy literature. Exploiting this, Ref. 60 recently provided a very nice discussion of the various anomalies of the SU(2) gauge theory with a single massless adjoint Dirac fermion. The exact 1-form (Z_2)_1 symmetry of this theory was shown to have mixed anomalies with the emergent global symmetry and with geometry [60], which puts further constraints on the possible low energy theories. Therefore, we will start our discussion from the infinitely massive spectator limit and later reinstate a finite spectator mass. We first identify the emergent 0-form global symmetries and their anomalies in the SU(2) + N_f^A = 1 theory. We will see that the 0-form emergent symmetries and anomalies can indeed be matched by a single Dirac fermion theory. However, the single Dirac fermion does not have the Z_2 1-form symmetry and hence cannot match the UV anomalies associated with it. This indicates that the low energy theory must contain additional degrees of freedom, either gapless or gapped and topological, which can compensate for the anomalies associated with the 1-form symmetry. Ref. 60 obtains such a candidate IR theory, consisting of a single Dirac fermion plus a decoupled U(1) gauge theory in the Coulomb phase, through supersymmetry breaking deformations of the Seiberg-Witten theory. We will propose a different candidate theory which has a single Dirac fermion plus a decoupled topological order. The possibility of a topologically ordered state was also mentioned in Ref. 60.
A. The IR Dirac fermion
Let us label the proposed gauge invariant Dirac fermion of the IR theory by Ψ. (The notation for the UV degrees of freedom in the SU(2) gauge theory is defined in Eq. (62).) The massless Ψ theory describes a phase transition from the n = 0 to the n = −1 state in the AIII class. Therefore, the Ψ fermion should carry the following quantum numbers under the global symmetry U(1) × Z_2^T.
The "−" sign in the Z T 2 transformation has physical consequence [80]. (Notice that no linear transformation of the fermion field can change this sign.) The convention is that a gapless Dirac fermion with the "+" Z T 2 transformation describe a phase transition from the n = 0 to the n = 1 state in AIII class. Correspondingly, a Dirac fermion with the "−" Z T 2 transformation describe a transition from the n = 0 to the n = −1 state.
By matching symmetry quantum numbers, the IR Dirac fermion operator Ψ can be written in terms of the UV degrees of freedom; the right hand side of this dictionary, Eq. (89), is an SU (2) gauge singlet operator. The global U (1) quantum number obviously matches. The ψ̄_a ψ_b is a Lorentz scalar and ψ̄_a iγ_5 ψ_b is a Lorentz pseudo-scalar. The reason for the choice of this specific combination of scalar and pseudo-scalar in the mapping is two-fold. Firstly, it is chosen to match the time reversal transformation of the Ψ fermion. Secondly, as we discuss later, with such a combination, the single Dirac fermion theory matches the 't Hooft anomalies of the emergent symmetries in the SU (2) gauge theory. Let us see how the time reversal symmetry works out first. We can check explicitly that the Ψ in Eq. (89) satisfies the transformation in Eq. (88). First of all, let us write down Ψ†.

25 An easy consistency check is the a-theorem. As we introduced in Eq. (54) and (55), the quantity a is a universal property of every 4D CFT. It is known that a is a monotonically decreasing function under renormalization group flow, namely a_UV > a_IR [92]. The UV theory for the SU (2) + N A f = 1 theory is free SU (2) Yang-Mills theory together with three decoupled free Dirac fermions. For free theories, we know the simple formula for the a value. Therefore, the UV value of a for the adjoint SU (2) theory is a_UV = 3 × 11 + 62 × (2^2 − 1) = 219, which is indeed larger than the a value of a single Dirac fermion, a_Dirac = 11. Hence, our proposed IR theory is consistent with the a-theorem conjecture.
Recall that the time reversal action on the ψ fermions is ψ → γ_0 γ_5 ψ†. Also notice that the scalar ψ̄ψ is invariant under time reversal while the pseudo-scalar ψ̄iγ_5ψ is odd under time reversal. Therefore, the transformation of Ψ is as in Eq. (88), which is indeed what we want. We partially list the gauge invariant Lorentz scalar and spinor operators in Appendix J. Since the operator Ψ̄Ψ and ∑_{a=1}^{3} ψ̄_a ψ_a share the same quantum numbers under all the global symmetries, they will have finite overlap in the IR. The conjecture is that Ψ is free in the IR.
Therefore, the anomalous dimension of the ∑_{a=1}^{3} ψ̄_a ψ_a operator should be zero. This could be checked in future numerical calculations.
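As a concrete way to phrase this check (our rewording of the statement above, not a new result): if Ψ is free in the IR then Ψ̄Ψ has the free-field scaling dimension 3 in 3 + 1-D, so at the critical point

⟨O(x) O(0)⟩ ∼ 1/|x|^6 ,   with O = ∑_{a=1}^{3} ψ̄_a ψ_a ,

which is a sharp signature that lattice simulations could look for.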
B. The emergent symmetries and anomalies
For both the SU (2) * + N A f = 1 theory and the Dirac theory, the global G = U (1) × Z T 2 symmetry is a non-anomalous symmetry of the system for all values of the mass m. When the system is tuned to the critical point at m = 0, it has enlarged global symmetries. These emergent symmetries usually have 't Hooft anomalies. Coupling these emergent symmetries to background gauge fields will lead to an inconsistency in the theory, which can be cured 26 by regarding the theory as living at the boundary of a higher dimensional SPT phase. In this section, we compare the emergent symmetries and their anomalies of the two theories at their critical points.

26 From a formal point of view, we extend the background gauge fields but not the dynamical degrees of freedom to the higher dimensional bulk. The difference between two different such extensions is described by a topological action in terms of these background gauge fields. The boundary theory by itself is not gauge invariant, but its combination with the bulk action is gauge invariant.
For the massless SU (2) * + N A f = 1 theory in the UV, the emergent symmetry is G = The SU (2) f is a flavor rotation symmetry and Z A 8 is a discrete axial rotation. The meaning of these symmetries will be clear in a moment. To understand these symmetries, let us look at the theory in Eq. (62) without the gauge field a µ . We can write down the Dirac fermions in the Weyl basis (we use a different set of γ matrices than we were using previously), in which a single Dirac fermion can be written as two Weyl fermions with different chiralities, Here ξ 1 and ξ 2 are both two component left-handed Weyl fermions. The iσ y ξ † 2 is particle-hole transformation of ξ 2 and has the opposite chirality. We can decompose our 3 Dirac fermions in Eq.
(62) into 6 left-handed Weyl fermions (after a particle-hole transformation). In terms of these Weyl fermions, the largest unitary symmetry of the system is U (6). Next, we want to gauge the diagonal SU (2) subgroup of the U (6) symmetry. Since the fermions are in the spin-1 representation, the gauge rotations on the Weyl fermions are SO(3) rotations. For convenience, we will use SO(3) g to denote the gauge group in the following. (But keep in mind that eventually this is an SU (2) gauge field because of the spin-1/2 spectator field.) The U (6) symmetry can be decomposed as U (6) ⊃ SU (3) × SU (2) × U (1). Therefore, the global symmetry left after gauging is naively SU (2) × U (1). The SU (2) is a flavor rotation, therefore we denote it as SU (2) f ; the 6 Weyl fermions form three fundamental representations of SU (2) f . Because of the particle-hole transformation on the ξ_{i,2} fields, this U (1) rotation is the γ_5 rotation of the original Dirac fermion, which is usually called the axial rotation. We label it as U (1) A . The familiar charge U (1) rotation of the Dirac fermion is now the S_z rotation of the SU (2) f .
The U (1) A suffers from chiral anomalies. It is explicitly broken down to Z 8 once the mixed anomalies with SO(3) g are taken into account. This is seen from the anomaly equation for the axial current: the first part of that equation is the standard Fujikawa calculation for abelian anomalies [79], and in the second part we use the relation between the Pontryagin classes of the SO(3) and SU (2) groups. The Pontryagin class of SU (2) counts the instanton number of the SU (2) gauge field and takes values in the integers. The equation means the axial charge will change by 8 if we insert an SU (2) instanton configuration with winding number 1. Therefore, the axial charge is only well defined up to 8, and the U (1) A is broken down to Z A 8 . Note that there is no mixed anomaly between SU (2) f and SO(3) g . The divergence of the SU (2) f current involves A_µ = ∑_{a=1}^{3} A^a_µ T^a, where the T^a's are SO(3) generators and the σ^α's are Pauli matrices. The anomaly equations are determined by calculating certain triangle loop diagrams [79,109]. The essential part of the right hand side of the equation involves the trace of three symmetry generators. In this case, it is clearly zero because the SO(3) generators and the SU (2) f generators act on different spaces, and in the flavor space the trace of an SU (2) generator is zero. This tells us that the SU (2) f is still a symmetry after gauging SO(3) g . Thus we see that the global symmetry of the critical SU (2) theory includes both SU (2) f and Z A 8 . In the infrared limit, it is possible that the Z 8 symmetry is dynamically enhanced to U (1). There are many examples of this phenomenon in 2 + 1-D deconfined quantum critical points [1,4,110].
Though we cannot be sure that this enlargement actually happens in our case, we are encouraged by the matching of anomalies with the free Dirac theory at its massless point (which has emergent U (2) = (SU (2) × U (1))/Z 2 symmetry) discussed below. 27 Henceforth, in talking about the free Dirac theory, we will simply treat the Z 8 axial symmetry of the gauge theory as though it is a U (1) symmetry.
A proper discussion of the anomalies involving the Z 8 without this simplification is in Ref. 60. Now let us consider the anomaly structure for G. Firstly, we discuss the 't Hooft anomaly of SU (2) f . The SU (2) f itself has no perturbative anomaly but has the global Witten anomaly. The Witten anomaly is a Z 2 anomaly [111] which depends only on the parity of the number of SU (2) f fundamental Weyl fermions. Here we have three SU (2) f fundamental Weyl fermions. Therefore, they carry the SU (2) Witten anomaly. Dynamically gauging the SU (2) f symmetry will lead to a vanishing partition function. The Z 8 symmetry has a self 't Hooft anomaly and mixed anomalies with SU (2) f and gravity. These anomalies are summarized in Eq. (100). Next we look at the IR Dirac fermion Ψ at its massless point. In the Weyl basis, the Dirac theory can be written in terms of two left-handed Weyl fermions η_1 and η_2. According to our dictionary in Eq. (90), the η fermions can be written as composite operators built from the ξ fermions in the SU (2) gauge theory.
The theory manifestly has an emergent global symmetry G_A, containing SU (2) f and U (1) A factors. These symmetries are in one-to-one correspondence with the emergent symmetries of the SU (2) * + N A f = 1 theory if, as we assumed, the Z 8 symmetry of the latter theory is enhanced to U (1) in the gapless sector of the proposed IR theory. The SU (2) f transformation of the η fermions is consistent with the dictionary in Eq. (102). From the dictionary, the η fermions carry charge 3 under the axial U (1) A symmetry of the SU (2) gauge theory.
27 However, we will also need to postulate an additional decoupled gapped sector in which there is no such dynamical enhancement. Nevertheless, as the free Dirac sector is decoupled, we can ask about the realization of the Z 8 on this gapless sector. The more correct assumption then is that the Z 8 is dynamically enhanced to U (1) in this decoupled sector.
This property is crucial for matching the anomalies with the UV theory.
Now we study the 't Hooft anomalies of the emergent symmetry. First the SU (2) f symmetry has the same global Witten anomaly [111] because we have a single SU (2) f fundamental Weyl fermion.
The anomalies associated with U (1) A are summarized in an anomaly equation analogous to the UV one, in which the coefficients 27 and 3 come precisely from the fact that the Ψ fermion carries charge 3 under the axial U (1) A symmetry. This will match the anomalies in Eq. (100) if we consider the discrete Z 8 axial symmetry instead of the U (1) A . This indicates that the low energy theory cannot be a simple Dirac fermion but needs some additional sector which remembers that the U (1) A is broken down to Z 8 .
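To spell out the counting behind these coefficients (a simple bookkeeping exercise, not an independent result): the Dirac fermion Ψ contains two Weyl components of axial charge 3, so the cubic U (1) A anomaly coefficient is proportional to 3^3 = 27 while the mixed U (1) A –gravity coefficient is proportional to 3. In the UV description there are six Weyl fermions of axial charge 1, giving 6 and 6 for the corresponding sums; the differences, 2 × 27 − 6 = 48 and 2 × 3 − 6 = 0, are multiples of 8, which is consistent with the statement that only the discrete Z 8 anomalies need to (and do) match.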
C. The 1-form symmetry anomalies and the additional Z 2 topological order

Thus far we have argued that the IR Dirac fermion Ψ matches almost all of the 0-form symmetries and anomalies of the UV theory. Now we focus on the 1-form (Z 2 ) 1 symmetry of the system in the infinitely massive spectator limit. The IR Dirac fermion does not have the 1-form symmetry. As shown in Ref. 60, this (Z 2 ) 1 symmetry has mixed anomalies with both the Z 8 and with gravity.
Therefore there must be other degrees of freedom in the IR which carry the 1-form symmetry and its anomalies.
The anomalies involving the Z 2 1-form symmetry have two pieces according to Ref. [60]. The first part is a mixed anomaly between the Z 2 1-form symmetry and the Z A 8 discrete axial symmetry. Let us call this the type I anomaly. The mixed anomaly means that dynamically gauging the Z 2 1-form symmetry will break the Z 8 down to Z 4 on a spin manifold, and down to Z 2 on a non-spin manifold. Formally, we can couple the system to a background 2-form Z 2 gauge field B. By definition, a symmetry operation on a quantum system should preserve the partition function. However, in the presence of the 2-form background gauge field B, the partition function is no longer invariant under the Z 8 axial rotation. The k-th element of the Z 8 axial rotation shifts the partition function by a phase exp[i(πk/2) ∫_{Y_4} P(B)], where P(B) is the Pontryagin square of B and Y_4 is the spacetime manifold. On a spin manifold, ∫_{Y_4} P(B) is quantized as an even number [90]. Therefore the Z 8 is broken down to Z 4 by the mixed anomaly.
On a non-spin manifold, ∫_{Y_4} P(B) is an arbitrary integer [90] and the axial symmetry is then broken down to Z 2 . The second anomaly is more abstract. It is a mixed anomaly between the Z 2 1-form symmetry and geometry. We will call this the type II anomaly. This anomaly has the following formal interpretation. We again couple the system to a 2-form Z 2 gauge field B. Since the 2-form gauge field is a Z 2 gauge field, a redefinition of it, B → B + 2x with x another 2-form Z 2 gauge field, should not change the partition function of the system. However, in this theory, such a redefinition changes the partition function by a factor exp[iπ ∫ x ∪ w_2^{TY}], where w_2^{TY} is the second Stiefel-Whitney class of the tangent bundle; this factor can be −1 on a non-spin manifold.
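It is a short exercise to see how these statements follow from the phase exp[i(πk/2) ∫_{Y_4} P(B)] quoted above. On a spin manifold, write ∫_{Y_4} P(B) = 2m with m an integer: the phase becomes exp[iπkm], which equals 1 for every background precisely when k is even, so only the even elements survive and Z 8 is broken to Z 4 . On a non-spin manifold ∫_{Y_4} P(B) can be any integer p, and exp[iπkp/2] = 1 for all p only when k is a multiple of 4, leaving Z 2 .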
It is useful for us to have a more concrete physical picture for both types of anomalies. The type I anomaly in the UV has the following physical interpretation. Let us remind ourself from Eq. (98) that the change of axial charge is equal to 8 times the instanton number of the SU (2) gauge field.
Coupling the SU (2) gauge theory to the Z 2 2-form gauge field B is effectively turning the SU (2) bundle to an SO(3) bundle which has magnetic monopole excitations. The instanton number for the SU (2) bundle is quantized to be integer. However when we extend the SU (2) bundle to the SO (3) bundle, we have new field configurations involving the SO(3) monopoles, and the quantization of the instanton number is changed. On spin manifolds, the SO(3) instanton number is quantized as half integer. On non-spin manifolds, the smallest SO(3) instanton number can be a quarter.
The 1/2 instanton event for the SO(3) bundle has the following physical picture. We take two 2π magnetic flux loops 28 initially separated in space and then move them across each other to form a link [112]. 29 This spacetime process produces the 1/2 instanton. 30 We can now give a physical description of the mixed anomaly between Z 8 and (Z 2 ) 1 . We assign an axial charge of 4 to two 2π SO(3) flux loops that have linking number 1. The instanton event changes this linking number and hence breaks the Z 8 to Z 4 . On a non-spin manifold, for example CP 2 , there is an even smaller instanton event. It can be roughly thought of as creating a self-linking of the 2π SO(3) magnetic flux.
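The accompanying arithmetic is simple: since ∆Q_A = 8 ν for instanton number ν, the ν = 1/2 event built from two linked 2π flux loops changes the axial charge by 8 × 1/2 = 4, and the ν = 1/4 event available on a non-spin manifold changes it by 8 × 1/4 = 2; these are exactly the events that reduce Z 8 to Z 4 and to Z 2 respectively.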
The type II anomaly involves the second Stiefel-Whitney class of the tangent bundle, which detects the spin structure of the base manifold. This anomaly tells us that there is an ambiguity in the quantum statistics of the 2π SO(3) monopole. Below we will build on these physical characterizations to augment the free Dirac theory with a gapped sector that enables matching the 1-form anomalies.

28 The normalization is that the magnetic flux coming out of a single SO(3) monopole is 2π. 29 Notice this event is not allowed in the pure SU (2) bundle because the minimal flux unit is twice that of the SO(3) bundle. 30 In practice, we can take the first part of Eq. (97) involving the SO(3) gauge field and then restrict it to a U (1) subgroup. Inserting a spacetime event as described here, the result of the integral will be 4 instead of 8.
Note that the extra anomalies discussed in this section are of the discrete unitary symmetry Z 8 × (Z 2 ) 1 . For ordinary 0-form discrete unitary symmetries (at or above 2 + 1-D) it is known that their anomalies can always be satisfied by a symmetry preserving gapped topological order. Inspired by this we ask if there can be some symmetry preserving 31 3+1-D topological order that captures the anomalies of Z 8 × (Z 2 ) 1 . Further note that with an anomalous 0-form symmetry, the charged particles will be fractionalized into partons that carry projective representations of the symmetry. Here the anomalous Z 2 1-form symmetry acts on loops. Thus we are led to search for a topologically ordered state of matter that has "fractionalized" loop excitations. A short introduction and example of such a fractionalized loop phase is given in Appendix F. Now we describe a postulated topological order that can match the anomalies associated with the 1-form symmetry. It has the following properties.
2. The specific theory is a Z 2 gauge theory where the "microscopic" loops (we can call them 2π-flux loops) have fractionalized into two π-flux loops. The physical manifestation of the (Z 2 ) 1 symmetry is that the 2π flux loops are unbreakable. 5. Each electric loop should be thought of as a ribbon. A self-linked loop is assigned an axial charge of 8. Events in the theory that create a single such self-linked loop will break the axial symmetry to Z 8 .

Now let us explain why this topological order can match the Z 8 × (Z 2 ) 1 anomalies. The Fermi statistics of electric charge 1 objects ensures that the (Z 2 ) 1 symmetry has the right mixed anomaly with gravity. Gauging the (Z 2 ) 1 symmetry introduces electric charge 1/2 particles. Since the fusion result of two charge 1/2 particles must be the charge 1 particle, which is a fermion, these charge 1/2 particles have indefinite statistics. In contrast, in a strictly 3 + 1-D system it should be possible to assign definite statistics to these particles. This is the manifestation of the mixed anomaly between (Z 2 ) 1 and geometry.

31 Preserving the 1-form symmetry means the "physical" loops are tension-full.
Introducing electric charge 1/2 particles into the theory implies that the system must also allow strength-1/2 electric loops. These 1/2 strength electric loops can form links. A link of two 1/2 electric loops will carry axial charge 4. However, as there are sources for these loops, the linking number can change dynamically. An event in which two linked strength-1/2 electric loops are created changes the axial charge by 4. This breaks the axial symmetry down to Z 4 . We also need to consider a single strength-1/2 loop that is self-linked. As a self-linked strength-1 loop is assigned axial charge 8, a self-linked charge-1/2 loop should be assigned axial charge 2. Dynamically, again, the self-linking number can change as there are sources for the loops. It follows that an event in which a self-linked strength-1/2 electric loop is created changes the axial charge by 2. Therefore the axial symmetry is broken down to Z 2 . These precisely match the mixed anomaly between (Z 2 ) 1 and the Z 8 axial symmetry.
To recap, the proposed low energy theory is a free massless Dirac fermion augmented with the topologically ordered state just described. What we have argued is that this theory has the same global symmetries, the same local operators, and the same anomalies as the SU (2) gauge theory with an N A f = 1 adjoint Dirac fermion (and no spectator fundamental scalar). We do not of course know if the gauge theory really flows to the free Dirac + topological theory but are encouraged by these checks. Alternate possibilities have been discussed in Ref. 60. Let us now introduce a finite mass for the spin-1 2 spectator fields in our UV theory. With a finite mass spectator, the Z 2 1-form symmetry is explicitly broken. Physically this means that the 2π flux loops can be broken dynamically. The question is whether the Z 2 topological order we described is immediately destroyed dynamically by a finite but large spectator mass. In our case, since the topological order is in a "fractionalized" loop phase, the π flux loops still cannot break and remain as non-trivial excitation in our system. Therefore with a large but finite spectator mass, the Z 2 topological order is still stable.
If the low energy theory of the massless SU (2) + N A f = 1 theory with finite spectator mass is indeed a free Dirac fermion plus a decoupled Z 2 topological order, then the phase diagram of the theory will be as shown in Fig. (10). Since the Z 2 topological order is stable against small perturbations, it will survive until a critical fermion mass m_c. The phase transition at m = 0 occurs entirely in the gapless free Dirac sector, and describes the topological phase transition between the n = 0 and n = −1 states.

From a high energy perspective, one of our results is to provide an interpretation of some massless gauge theories as quantum critical points. We saw that even when the gauge theory is IR-free it has an interesting place in the phase diagram as a deconfined quantum critical point. Perhaps the most interesting aspect (for quantum field theorists) is our discussion of the possible duality of the SU (2) gauge theory with a massless N f = 1 adjoint (Dirac) fermion, and a massive fundamental boson, to a free massless Dirac fermion with an additional decoupled topological field theory. It will be interesting to scrutinize this possibility through numerical studies of the gauge theory.

By definition, a Yang-Mills instanton is a solution of the classical Euclidean equations of motion with finite action. To find solutions with finite action, we require that the field strength tends to zero at infinity sufficiently fast. Hence, the gauge field asymptotically approaches a pure gauge. All pure gauge configurations, namely A = U^{−1} dU, at infinity are classified by π_3 (G), which is characterized by an integer, the instanton number. First consider gauge configurations on R^4 which become pure gauge at asymptotic infinity. Given a group G, the instanton number of any such gauge configuration on R^4 is an integer multiple of a minimal positive number. This minimal instanton corresponds to the generator of π_3 (G) = Z. It is customary in the literature to normalize this minimal instanton so that it has instanton number 1. If G has a discrete Z 2 subgroup, since π_2 (Z 2 ) = π_3 (Z 2 ) = Z_1, we have π_3 (G/Z 2 ) = π_3 (G), which indicates that G/Z 2 and G share the same generator for instantons. For any non-abelian group G, an instanton of minimal charge can be obtained by embedding a minimal instanton of SU (2) through an appropriate isomorphism SU (2) → G, which is obtained by picking a sub-SU (2) algebra generated by a long root in the Lie algebra of G. For a continuous group G, the instanton number can be calculated from an integral of a local density, Eq. (A3), where R denotes a representation we can freely choose, and the coefficient c_R is chosen to make sure that l_G = 1 for the minimal instanton configuration. In particular, c_R can be determined by embedding the minimal SU (2) instanton into G and evaluating the expression. If we use the adjoint representation in Eq. (A3), the normalization coefficient c_R will only depend on the Lie algebra of G but not on the global structure of the group. 32 Therefore, the formula gives the same result for G and G/Z 2 , namely l_G = l_{G/Z 2}. All the instanton numbers we used in the main text are normalized in this way.
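For orientation, in the most familiar normalization (the fundamental-representation form of the density; other choices of representation differ only through the constant c_R described above), the SU (2) instanton number is

l_{SU(2)} = (1/8π²) ∫ tr(F ∧ F) = (1/32π²) ∫ d⁴x ε^{µνρσ} tr(F_{µν} F_{ρσ}),

which equals 1 on the minimal instanton configuration.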
Now let us talk about the relation between the Pontryagin classes and the instanton numbers
of SU (N ), SO(N ) and Sp(N ) groups. The first Pontryagin class of a group G is defined with its fundamental representation, as in Eq. (A4). For the SU (2) group, we get exactly 1 from Eq. (A4) if we plug in the minimal instanton configuration.
This indicates that the first Pontryagin class is equal to the instanton number for the SU (2) group, namely p_1 (SU (2)) = l_{SU (2)}. This is the starting point. Now consider the SU (N ) and Sp(N ) groups. The minimal instanton number is achieved by embedding the minimal SU (2) instanton configuration in the upper left corner of the gauge configuration.
It is obvious that we will get 1 if we plug this into Eq. (A4). Therefore, for SU (N ) and Sp(N ), the first Pontryagin class is equal to their instanton number. For the SO(3) group, we can only embed the SU (2) instanton configuration into the SO(3) gauge configurations using the SU (2) adjoint representation. Because of this embedding, for a minimal SU (2) instanton configuration, p_1 (SO(3)) actually is equal to 4. Hence, p_1 (SO(3)) is equal to four times the instanton number.
The embedding for SO(N ) with N > 3 is different. We make use of the fact that SO(N ) ⊃ SO(4) = (SU (2) × SU (2))/Z 2 , and embed the SU (2) instanton configuration into one of the SU (2) subgroups of SO(4). With this embedding, it is easy to verify that p_1 (SO(N )) is equal to 2 if we put in a minimal instanton configuration. Therefore, for SO(N ) with N > 3, the first Pontryagin class is equal to twice the instanton number.

The θ-angle of 3 + 1-D gauge theories is usually defined so that a configuration of instanton number 1 contributes to the Euclidean action the phase exp(iθ).
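Collecting the results of this appendix in one place: with the normalization described above, p_1 = l_G for G = SU (N ) and Sp(N ), p_1 (SO(3)) = 4 l_{SO(3)}, and p_1 (SO(N )) = 2 l_{SO(N )} for N > 3.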
Appendix B: A 2 + 1-D example of unnecessary continuous phase transition

In the same spirit as the 3d examples, let us give an example in 2d. We consider the trivial to topological phase transition of the p ± ip superconductor system with Z 2 × Z T 2 symmetry. The low energy field theory near the phase transition is the following.
where the Z 2 symmetry, Z 2 : χ → σ 03 χ, is the relative fermion parity symmetry of the two layer.
Time reversal symmetry, T : χ → iσ 21 χ, exchanges the ± layers. The two symmetries together only admit the mass term in Eq. (B4), which guarantees that there is a generic phase transition described by free majorana fermions in the bulk. The edge of the system consists of helical majorana modes described by the following equation.
The Z 2 and time reversal transformations of the edge modes can be written down explicitly. We can introduce a mass term on the boundary, m_b χ^T σ_2 χ, which breaks both the Z 2 and Z T 2 symmetries but preserves a different time reversal symmetry Z̃ T 2 : χ → −σ_1 χ (the original Z T 2 transformation followed by the Z 2 transformation). The domain wall of the Z 2 breaking mass term traps a Majorana zero mode, labeled by γ. The Z̃ T 2 symmetry will not change the domain wall background and it acts trivially on the zero mode, namely Z̃ T 2 : γ → γ. Now let us consider 8 copies of the same system and impose an SO(7) symmetry which rotates these 8 copies in the spinor representation. This symmetry only allows a uniform mass term. The low energy theory near the phase transition is the following.
When m is tuned from negative to positive, the system goes through a continuous phase transition described by bulk free majorana fermions. This transition is stable against small interactions. Our goal now is to show that m < 0 and m > 0 phases are in fact the same phase. We can always regularize the system such that m < 0 phase is trivial. In the m > 0 phase, the natural edge state has 8 copies of helical majorana modes with Z 2 × Z T 2 × SO (7) symmetry. We will argue that the boundary modes can be gapped out while preserving all the symmetries, which indicates the m > 0 phase is actually topologically trivial.
To that end we first break the Z 2 and Z T 2 symmetry on the edge by adding m_b ∑_{i=1}^{8} χ_i^T σ_2 χ_i. Then we proliferate the topological defects of this order parameter, namely the domain walls, to restore a symmetric gapped edge. Since there are zero modes residing at the domain wall of the order parameter, we have to be careful about their condensation. The domain wall must have a single gapped ground state and it has to be symmetric under the combined Z̃ T 2 symmetry. This can be precisely achieved by the SO(7) invariant Fidkowski-Kitaev interaction. Therefore, with this interaction, we can safely condense the domain walls to get a symmetric gapped edge state. Thus, the m > 0 phase is topologically trivial. The phase diagram of the system is similar to the previous cases, as shown in Fig. (6).
Appendix C: Fermion zero modes and time reversal transformations
In this appendix, we consider a 2 + 1-D Dirac fermion in a 2π flux background and solve the zero mode wavefunction. Then we will consider the time reversal transformation on the zero mode.
Let us first write down the Hamiltonian for the 2 + 1-D Dirac fermion on a flat 2-dimensional plane with a background gauge field, for which we take the Landau gauge A_x = 0, A_y = Bx. Notice this is equivalent to the spherical geometry, since the flat plane can be viewed as the infinite radius limit of the sphere. The time reversal transformation for the fermion fields, written in component form, flips the electric charge of the Dirac fermions but keeps the magnetic flux background invariant. Therefore, it is meaningful to discuss the time reversal transformation of the zero modes trapped in the flux background.
Consider the Dirac equation in this background. The usual trick is to square the Dirac operator, which gives the spectrum for ε^2 (in units with ħ = c = 1). Notice that the zero mode wavefunction depends on the sign of the magnetic field B. Consequently, the time reversal transformations on the zero modes are different for ±B. The zero mode operators for B > 0 and for B < 0 involve φ_0(p_y, x), the wavefunction of the ground state of a harmonic oscillator.
Thus we can work out the time reversal transformations for the zero modes in each case.

Appendix D: An example where interactions modify an analogous band theory rule
Consider a system with two species of fermions -denoted ψ and χ -in two space dimensions.
We will assume that there is a global U (1) symmetry under which ψ has charge-1 and χ is neutral.
Within free fermion theory, gapped ground states of this system are now characterized by a pair of integers (n, m). The electrical Hall conductivity is σ_xy = n while the thermal Hall conductivity is κ_xy = c̃ in appropriate units, with c̃ = m/2 in this labeling. Compared to the standard integer quantum Hall system, the presence of the additional neutral fermion means that c̃ can take any multiple of 1/2 and is not tied to σ_xy. Within free fermion theory, a generic continuous transition between these phases satisfies the following rules: (i) ∆n = 1, ∆m = 2, or (ii) ∆n = 0, ∆m = 1. The former can be understood as a quantum Hall transition of the ψ fermion and the latter as a transition of the χ fermion. 33 Lattice translation may or may not be present and makes no difference to this discussion. Now we will show that this rule can be violated in the presence of short ranged interactions that preserve the global U (1) symmetry. Imagine an interaction such that the charged fermion forms a 3-body bound state (a "cluston" [113]) ψ_3 ∼ ψψψ. A cluston integer quantum Hall state [113] is clearly then possible with σ_xy = 9k and c̃ = k, with k ∈ Z. In this system where both charged and neutral fermions are present, such a cluston integer quantum Hall state can also be accessed within free fermion theory: it corresponds to n = 9k, m = 2k. Now consider a cluston integer quantum Hall transition, which can be second order so long as ∆k = 1. This corresponds to ∆n = 9, ∆m = 2, which violates the band theory rules discussed in the previous paragraph even though both phases are band-allowed. The critical theory has gapless clustons but the ψ, χ particles are gapped.
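The arithmetic behind the cluston numbers is worth making explicit (this is just bookkeeping on top of Ref. [113]): the cluston carries electric charge 3, so each filled cluston Landau level contributes σ_xy = 3² = 9 (in units of e²/h) and one unit of chiral central charge, giving σ_xy = 9k and c̃ = k for k filled levels; in the (n, m) labels used above this is n = 9k, m = 2k, and a ∆k = 1 transition therefore has ∆n = 9, ∆m = 2.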
Appendix E: A 2 + 1-D bosonic Mott insulator to Z 2 topological order transition

Here we provide another example of a continuous phase transition in which modifying the properties of a gapped spectator field changes the nearby phase but not the universality class of the transition. We consider a transition from a 2 + 1-D bosonic Mott insulator to a Z 2 topological order. Consider a bosonic system in a Mott insulating phase. The physical bosons b are gapped. We assume the system has a time reversal symmetry T and the physical bosons are Kramers singlets. Now consider partons of the physical boson. We decompose the physical boson into two bosonic partons which we call the e particles. This fractionalization introduces a Z 2 gauge field and the e particles carry Z 2 gauge charge 1. The Z 2 gauge field also has π flux excitations which we label as m particles. The e and m particles have mutual Berry phase π. The Mott insulating phase is the confined phase of the Z 2 gauge field. The Z 2 confined phase can be viewed as a condensed phase of the m particles.
Let us imagine that by tuning some parameter we can drive the system through a deconfinement transition to a Z 2 topological order. We can view the deconfinement transition as the proliferation of the vortices of the m particle condensate. The transition is in the Ising universality class.
After the transition, the m particle is gapped and the Z 2 gauge field is deconfined. The resultant phase has a Z 2 topological order. Throughout the transition, the e particle remains gapped and does not participate in the low energy theory. We can view it as the massive spectator field in our system. Since the system has time reversal symmetry, there are actually different classes of Z 2 topological orders distinguished by their time reversal properties. These are called Symmetry Enriched Topological (SET) orders. In our case, the time reversal properties of the spectator e particle precisely determine which SET state we get for the deconfined phase. There are two choices. One is that the e particle is a Kramers singlet. In this case, the resultant deconfined phase is a vanilla Z 2 topological order which we can label as e0m0, meaning that both e and m are Kramers singlets. The other choice is that the e particle actually carries a Kramers doublet. 34 In this case, we get a non-trivial symmetry enriched Z 2 topological order labeled by eT m0. eT m0 and e0m0 are distinct phases if the system preserves the time reversal symmetry. However, since the e particle remains gapped during the transition, it cannot change the universality class of the transition.
We begin by considering a U (1) gauge theory with a (Z 2 ) 1 symmetry. This theory has a gapless photon, gapped electric charges E, gapped magnetic charges M , and their bound states. Now assume that all particles with odd magnetic charge are thrown out of the U (1) gauge theory. Then odd strength magnetic loops cannot end and there is an exact (Z 2 ) 1 symmetry. This symmetry is broken spontaneously in the U (1) gauge theory (the odd strength magnetic loops are tensionless).
Consider now a Higgs transition obtained by condensing the basic E particle. All magnetic flux loops will then have line tension, and we will get the "trivial" phase of loops with unbroken (Z 2 ) 1 .
If instead we consider a Higgs transition obtained by condensing E 2 without condensing E, we will get a Z 2 gauge theory where E survives as the Z 2 gauge charge. We also get strength-1/2 magnetic flux loops with line tension which braid with π phase with the Z 2 gauge charge. Of course strength-1 magnetic loops also have line tension, and cannot break. We identify them with the microscopic loops. This state preserves (Z 2 ) 1 and is exactly the loop fractionalized phase described above. 35 An effective field theory for this loop fractionalized phase is readily written down. It involves α, a 1-form dynamical gauge field, and β, a 2-form dynamical gauge field. B is a 2-form background gauge field that couples to the global (Z 2 ) 1 symmetry. The first term is the standard "BF" theory description of the Z 2 gauge theory. It dictates that the strings that are charged under β are seen as π flux of α. These strings are the tensionful loops of the Z 2 gauge theory. The 'microscopic' loops that couple to B however carry 2π flux of α. Thus this action correctly captures the loop fractionalized phase described above.
35 An alternate construction of the same phase is to start with a standard deconfined Z 4 gauge theory, and throw out all particles with odd Z 4 charge. This builds in a (Z 2 ) 1 symmetry associated with the even-flux Z 4 loop. This loop does not braid non-trivially with any other excitation, and has line tension. However it is fractionalized into two odd flux loops which themselves braid with phase π with the particle of even Z 4 charge.

In this appendix, we provide generalizations of the previous fermionic deconfined quantum critical points. We extend the SU (2) + N A f theories to even N A f cases. Let us consider an SU (2) gauge theory coupled to N A f = 2 flavors of adjoint Dirac fermions. The 3 + 1-D Lagrangian takes the same form as before, now with two adjoint Dirac flavors.
Analytically, this theory is expected to be inside the conformal window [81,83]. Numerically, it is found that the infrared limit for the m = 0 theory is consistent with a conformal field theory [82].
We want to understand what phase transition this theory describes.
To be more precise we will content ourselves with determining the topological distinction between the phases with the two signs of m assuming large |m|. We will not attempt to answer the question of whether there are other intermediate phases at small |m|. Accordingly whenever we talk about the massive theory below we implicitly mean the theory at large |m|. If we tune m to be non-zero, the fermions are gapped. As usual, we can regularize the theory such that for m < 0 integrating out the massive fermion generates zero Θ-angle for the SU (2) gauge theory, in which case the theory will enter a confined phase in the low energy. For m > 0, the massive fermions contribute an 8π Θ-angle for the SU (2) gauge fields. Since the Θ term is 2π periodic, the SU (2) gauge theory will again confine in the infrared limit. The question is what is the nature of the gapped phases for m < 0 and m > 0.
The two states can only differ in their topological aspects. They can be different SPT states of certain global symmetry.
For general mass m, the global symmetry of the theory is G = SO(4) × Z T 2 . The time reversal symmetry transformation is as usual where we suppressed the flavor and gauge indices. To see the SO(4) symmetry, we decompose the 2 flavors of Dirac fermions into 4 flavors of Majorana fermions. The SO(4) symmetry is then a flavor rotation between the 4 Majoranas. Since the SU (2) adjoint representation is a real representation, the SO(4) × Z T 2 symmetry commutes with the gauge group. This is not anomalous and is an exact symmetry for any m.
Let us first discuss the classification of interacting fermion SPTs with SO(4) × Z T 2 symmetry. In the free fermion limit, the 3 + 1-D fermion SPT classification is Z. The root state for this class is 4 copies of topological superconductors with Z T 2 symmetry (DIII class), where the SO(4) rotates among the 4 copies. The typical surface theory of such a root state is 4 copies of gapless Majorana fermions. In the free fermion limit, since the DIII class is Z classified, the classification here is also Z. With interactions, the classification becomes Z 4 × Z 2 . The Z 2 part corresponds to the pure Z T 2 SPT state labeled by its anomalous surface Z 2 topological order ef mf , which only appears in interacting systems and has no free fermion correspondence. The free fermion Z classification is reduced to Z 4 by interactions. The reason is the following. The pure time reversal anomaly on the 2 + 1-D surface is Z 16 classified, which means that multiples of 16 copies of 2 + 1-D Majorana fermions are time reversal anomaly free. Therefore, we need at least 4 copies of the root state to cancel the time reversal anomaly on the surface. Next, we need to consider the mixed anomaly between SO(4) and Z T 2 . This is related to the generalized parity anomaly. According to [100], by considering the system on general unorientable manifolds, the surface theory of 4 copies of the root state will be free from the mixed anomaly between the SO(4) and Z T 2 symmetry. Physically, it means that the SO(4) monopole in the 3+1-D bulk carries trivial time reversal quantum number.
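To spell out the counting behind the first constraint: each root state contributes 4 surface Majorana cones, so ν copies of the root state contribute 4ν cones, and the pure time reversal anomaly cancels only when 4ν ≡ 0 mod 16, i.e. when ν is a multiple of 4.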
Combining the two constraints, we conclude that the interaction classification reduced from the free fermion states is Z 4 . We can also see this from a surface argument. Let us take 4 copies of the root state; the symmetry transformations of the surface fermions can be written down explicitly. Now we introduce a superconducting order parameter just as in Eq. (61). This breaks both U (1) e and Z T 2 but preserves a combination of Z T 2 and a U (π/2) rotation. Consider the π vortex of the superconductor order parameter. It carries 8 Majorana zero modes labeled by χ_{i,v}. We can combine them into 4 complex zero modes, e.g. f_v = χ_{1,v} + iχ_{2,v}. We can write down an SO(4) invariant four-fermion interaction [93]. This interaction leads to an SO(4) symmetric ground state and a gapped spectrum for the vortex core. Now we can condense the π vortices and restore the U (1) e and Z T 2 symmetry. The resultant surface state is a trivial gapped symmetric state under the U (1) e × SO(4) × Z T 2 symmetry. We can then turn on a small explicit U (1) e breaking term. Since the surface is now trivially gapped, it is stable against any small perturbation. Thus, we have shown that the surface of 4 copies of the root state can be trivially gapped while preserving the SO(4) × Z T 2 symmetry, which is equivalent to saying that the bulk state is topologically trivial.
Next we want to determine which SPT state the m > 0 phase falls into. We can always regularize the system such that m < 0 phase is the trivial class of the SPT states under this global symmetry.
To detect the topological properties of the m > 0 phase, we can derive the topological response for the background SO(4) gauge field on an orientable manifold.
This non-trivial response theory tells us that the m > 0 state is indeed a non-trivial SPT protected by the SO(4) × Z T 2 symmetry. As before, to understand the theory we need to introduce the spin-1/2 spectator field. Let us take the simplest case where the spectator is a scalar under SO(4) and a singlet under Z T 2 , as in Eq. (70). In this case, we can do a similar surface analysis as in previous sections to understand the m > 0 phase. The natural surface state of the m > 0 system is SU (2) QCD 3 with 2 flavors of adjoint massless Dirac fermions. We can condense the trivial spectator boson to Higgs out the SU (2) gauge field. Notice that in this case the topological index of the m > 0 phase actually does not depend on the two choices of the spectator fields. This is indeed consistent with the bulk analysis. We will show that in this case, the neutral SO(3) monopole in the bulk can be a Kramers singlet boson. Therefore, the two choices of the spectator fields do not have different surface time reversal anomalies. To consider the zero modes in the SO(3) monopole, we consider the system with a sphere geometry and set the background SO(3) gauge field such that there is 2π magnetic flux coming out of the sphere along the z direction in flavor space. For the m > 0 phase, the surface theory hosts gapless Dirac fermions which contribute zero modes for the monopole configuration. Let us write down the surface state.
Here the f_{±,i} denote the complex fermion zero modes in this monopole background. 36 We know that for SO(4) two left or two right spinors can be combined into an SO(4) scalar. 37 Therefore, combining two left or right handed spinors from the f_+ sector and the f_− sector, we can form a gauge neutral, SO(4) singlet state, for example (f†_{+,1} f†_{−,2} − f†_{+,2} f†_{−,1})|0⟩. Under the time reversal transformation Z T 2 : |0⟩ → f†_{+,1} f†_{+,2} f†_{−,1} f†_{−,2} |0⟩, f_{±,i} → ∓i f†_{±,i}, this state goes back to itself. Therefore, the gauge and globally neutral SO(3) monopole is a Kramers singlet boson. This corresponds to a trivial m particle for the surface Z 2 topological order, which indicates that it is not anomalous. Hence, the two spectator choices make no difference to the topological index of the m > 0 phase.
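The group theory invoked in footnotes 36 and 37 is standard: since SO(4) ≅ (SU (2)_L × SU (2)_R)/Z_2, the left-handed spinor is the (2, 1) representation and (2, 1) ⊗ (2, 1) = (1, 1) ⊕ (3, 1), so two spinors of the same chirality can indeed be combined antisymmetrically into an SO(4) scalar; the analogous statement for SO(4k) is what footnote 37 refers to.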
The above analysis also suggests a possible duality between the 3 + 1-D SU (2) + N A f = 2 theory and two free Dirac fermions with SO(4) × Z T 2 symmetry, as they both describe the continuous phase transition between the n = 0 and n = −1 SPT states in this symmetry class. However we will leave to future study an analysis of the emergent symmetries and anomalies of the gauge theory.

36 In general, 2n Majorana zero modes form a vector representation of an SO(2n) group and host a 2^n dimensional Hilbert space. This Hilbert space can always be decomposed into left and right handed spinor representations of the SO(2n) symmetry. 37 This is actually true for all SO(4k) groups, k ∈ Z.
For even N A f > 2, the global symmetry of the system is SO(2N A f ) × Z T 2 and the interacting fermionic SPT classification is the same as the N A f = 2 case. The SU (2) + N A f theory is also a theory of quantum phase transition between n = 0 and n = −1 SPT states in this symmetry class.
However, in this case the gauge theory is free in the infrared limit. Therefore, we can tell with confidence that it is distinct from the phase transition theory in the free fermion setting. Thus this provides other examples of multiple universality classes for the same phase transition.

The meaning of N A f being a half integer is that we consider Majorana fermions instead of Dirac fermions. Since the adjoint representation of SU (2) is a real representation, we can easily generalize the theory to Majorana fermions. We thus consider 2N A f = 2k + 1 (k ∈ Z) flavors of SU (2) adjoint Majorana fermions, whose 3 + 1-D action can be written down analogously. (We still assume massive spin-1/2 spectator fields in the spectrum of our system.) The massless theory with N A f = 3/2, or k = 1, is inside the conformal window of the adjoint SU (2) gauge theory. For N A f > 2, or k > 2, the massless theory flows to the free fixed point in the infrared.
Let us first discuss the dynamical properties of the massive phase. As before, the m < 0 phase can be regularized to have a trivial Θ-angle for the SU (2) gauge theory and it enters a confined phase at low energy. In the m > 0 side, the Θ-angle for SU (2) is 4kπ + 2π, which is also trivial because it is a multiple of 2π. Therefore, the m > 0 side also enters a confined phase. As in all the other examples before, the two phases are not distinguished by their dynamical properties but their topological properties.
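Before moving on, note that the Θ-angle quoted above follows from simple counting (using the statement earlier that two adjoint Dirac flavors contribute 8π, i.e. 4π per adjoint Dirac fermion and hence 2π per adjoint Majorana fermion): 2k + 1 adjoint Majorana fermions give Θ = (2k + 1) × 2π = 4kπ + 2π, a multiple of 2π and therefore equivalent to a trivial Θ-angle.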
The global symmetry in this system is SO(2k + 1) × Z T 2 . The fermion SPT classification for this symmetry is Z 16 × Z 2 . The Z 2 part is the ef mf state protected by Z T 2 only. The Z 16 part descends from the free fermion classification. The root state is 2k + 1 copies of the topological superconductor in the DIII class; the 2k + 1 copies form a vector representation of SO(2k + 1). Since the time reversal anomaly for the DIII class is Z 16 -fold and 2k + 1 is coprime with 16, we need at least 16 root states to cancel the time reversal anomaly on the surface. For 16 copies of the root state, there is also no mixed anomaly between SO(2k + 1) and Z T 2 . (Using the argument in the previous section, the mixed anomaly is 4-fold periodic.) Therefore 16 copies of the root state is the minimal number for an anomaly free surface. Hence, the interaction-reduced classification is Z 16 .
Let us now discuss the nature of the m > 0 phase. We can derive the topological response theory for the background SO(2k + 1) gauge field on the m > 0 side on an orientable manifold.
This response theory, while not revealing all the information about the m > 0 phase, does tell us that the m > 0 phase is topologically non-trivial. We still need to determine which SPT the m > 0 phase is in.
We find that the nature of the m > 0 phase depends on the properties of the spectator field.
Assuming a spectator boson which is an SO(2k + 1) scalar and a time reversal singlet as in Eq. (70), the topological index for the m > 0 phase is the n = 3 state in the Z 16 classification. For the other case of a time reversal doublet spectator as in Eq. (76), the topological index is n = −1. The arguments for these results are straightforward generalizations of the surface arguments in Section IV.
We note that the difference between the two cases is the n = 4 state, which is not the eT mT state in this situation. (The eT mT state would correspond to the n = 8 state in the Z 16 classification.) The time reversal singlet spectator case gives us another example of a band-theory-forbidden continuous transition between band-theory-allowed insulating states. For the time reversal doublet spectator case, with k = 1 or N A f = 3/2, the massless SU (2) + N A f = 3/2 theory is a strongly coupled conformal field theory in the gauge theory description. For k > 1 or N A f > 2, the massless SU (2) + N A f theory is free in the infrared. This theory is clearly different from 2k + 1 free massless Majorana fermions. However, both theories describe the same n = 0 to n = −1 transition.
Therefore, this provides more examples for multiversality classes.
A summary of all the results from different N A f series is tabulated in Table H.
We now consider an SU (4) gauge theory coupled to a single adjoint Dirac fermion; in the Lagrangian, the T a 's, the generators of the SU (4) group, are 15 × 15 matrices. The infrared limit of the massless theory is still unclear. Let us assume it is inside the conformal window for the moment.
We first consider the dynamical properties of the massive phases. For m < 0, the fermions are massive and we can integrate them out. We will choose a regularization such that the Θ-angle for the SU (4) gauge theory is 0. The SU (4) gauge theory then enters a confined phase at low energy. With this regularization, we can calculate the Θ-angle of the SU (4) gauge theory for the m > 0 phase. 38 The Θ-angle is 8π, which is equivalent to trivial because of the 2π periodicity. Therefore the SU (4) gauge theory on the m > 0 side is also confined. Next we will discuss the topological difference between the two massive phases.
First let us identify the symmetries. The 0-form global symmetry of the theory is U (1) × Z T 2 . 39 The time reversal and U (1) transformation are the same as the AIII class in Eq. (82) and (81).
The global symmetry commutes with the SU (4) gauge group. We also assume a massive bosonic spectator z that carries SU (4) fundamental representation. This breaks the 1-form Z 4 center symmetry in the system. There are clearly gauge invariant fermions in the system such as (z † T a z)ψ a . Therefore the massless theory describes a critical point in a fermionic system.
Let us consider the case where the spectator is neutral under global U (1) and a singlet under time reversal transformation.
We note that the distinction T 2 = ±1 is meaningless in this case for the spectator boson. We can redefine the time reversal transformation to be Z̃ T 2 : z → e^{iπ/2} z*, where the phase rotation is an element of the center of the SU (4) gauge group. This gauge equivalent time reversal has T 2 = −1 for the spectator boson. We also notice that the adjoint fermion has identical time reversal transformations under Z̃ T 2 and Z T 2 . Let us regularize the m < 0 phase such that it is in the topologically trivial state. Then consider the m > 0 phase. We again consider the surface state of the system to determine its topological properties. The natural surface state of the system is 2 + 1-D QCD of the SU (4) gauge theory coupled to one adjoint fermion. On the surface, we can condense the spectator field, which Higgses the SU (4) gauge field completely while preserving the U (1) × Z T 2 symmetry. The 15 Dirac fermions in the SU (4) adjoint fermion become physical fermions with identical U (1) × Z T 2 transformations. Therefore, this state has topological index n = 15 ∼ −1 in the AIII class. Thus in the large mass limit we either get a trivial insulator or the simplest topological superconductor. Study of the small mass limit within this framework may reveal interesting possible evolutions between these two familiar phases. However, we will leave this to future work.

39 We can check this in an explicit way. The 15 components of the Dirac fermion, decomposed into Majorana fermions, can have at most an SO(30) flavor symmetry. We can explicitly check that there is only one generator in SO(30) that commutes with all the SU (4) generators in the adjoint representation. This generates an SO(2) or U (1) global symmetry.

We organize the gauge invariant operators according to their quantum numbers under the Lorentz group and the emergent global symmetry group. We will only list Lorentz scalars and spinors composed from the adjoint fermions ψ a and gluon fields F a µν (up to products of three operators). The time reversal transformations in our system (CT to be more precise) on the Weyl fermions and the gluon fields are as follows.
|
v3-fos-license
|
2019-06-07T23:31:48.748Z
|
2019-05-21T00:00:00.000
|
181896285
|
{
"extfieldsofstudy": [
"Business"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://civilejournal.org/index.php/cej/article/download/1100/pdf",
"pdf_hash": "96c0279f33f092f35ecdd030c6f0b3effbd60979",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42349",
"s2fieldsofstudy": [
"Engineering"
],
"sha1": "04e3abb3820e389a2d15e86c1da90c026f6a2376",
"year": 2019
}
|
pes2o/s2orc
|
Adoption of Prefabrication in Small Scale Construction Projects
The construction industry is facing numerous difficulties in managing construction waste, quality, environment, permanence, safety, and rising construction costs. Dynamic change is needed today to overcome new challenges in the construction industry. Adoption of prefabrication is one possible solution to such problems. This paper explores the advantages of prefabrication adoption along with its possible disadvantages (barriers) through a qualitative study. It adds to the existing literature on prefabrication, especially for developing countries where the acceptance of new approaches is difficult. It covers a private residential project and a public housing project. This study also aims to evaluate the current status of prefabrication adoption in small-scale construction projects. A questionnaire was used to collect the data, and the Average Index (AI) method, implemented in SPSS, was used to analyze the results. Shorter construction time, low site waste, and better supervision are the main advantages. Higher initial construction cost and strict and difficult design changes are the key disadvantages. The analysis shows that the conventional construction method is still used more frequently than prefabrication.
Introduction
Increasing awareness of environmental, social and economic issues in today's building methods has allowed practitioners around the world to adopt practices that are considered more sustainable in the long term. In the construction industry, conventional on-site construction methods have long been criticized for their poor durability, low productivity, low level of safety, and large amount of waste [1,2]. As an alternative to these problems, prefabrication can provide significant benefits, such as reduced time, low waste, improved quality, reduced environmental emissions, improved work environment, and reduced energy and water consumption [3,4]. One of the main reasons that decision-makers are discouraged from adopting prefabrication is that they have difficulty in identifying the benefits that such an approach would add to a project [5]. In fact, prefabrication is not always the only solution available, and it is not always better than the on-site construction method because of the different characteristics of the project and the resources available. If not used properly, prefabrication can lead to orders lagging significantly behind production, cost overruns, and structural problems. Deciding to use prefabrication based on confidentiality and personal preferences is not uncommon [6]. Pasquire and Connolly (2002) have shown that the decision to include prefabrication still relies heavily on subjective evidence, rather than hard data, as there are no formal measurement strategies [5].
Prefabrication is widely regarded as a sustainable construction method with regard to its impact on the protection of the environment. An important aspect of this perspective is the influence of prefabrication on the reduction of construction waste and subsequent waste management activities, including waste categorization, recycling and disposal [7]. Recent studies report that prefabrication is needed to cope with the challenges of speed and quality in the construction industry and to offset the shortage of houses for the growing population in any country [8]. The use of prefabrication technology can contribute significantly to waste reduction. Provided that designs are more detailed, waste reduction during construction can be achieved by avoiding unsuccessful work and unnecessary repetition of work [9]. The traditional cast-in-place method has been unable to meet the requirements of the construction industry and the development of the times, whereas prefabricated buildings offer fast installation, water saving, land saving, noise reduction, material saving and energy saving [10]. Zhai (2017) explored the effect of operative hedging and developed a coordination mechanism for a specific hedging problem in prefabricated construction supply chain management [11]. Bon-Gang et al. (2018) reported that prefabrication can improve workflow continuity, increase efficiency in the use of resources, minimize construction waste, and reduce the number of on-site contractors as well as construction durations [12]. Many studies have focused on the technologies and reasoning behind off-site construction [13]. Prefabricated construction has attracted worldwide attention because of its significant role in the creation of sustainable urbanization [14]. Prefabrication is an innovative and cleaner approach that has restructured the production of the construction industry [12]. Fard et al. (2015) highlighted that prefabrication is also prone to occupational accidents, so it is important to evaluate this aspect as well [15].
Prefabricated construction is becoming more common, has improved in quality, and is now available at a wide range of costs. Many benefits are reported for this approach, including green construction, financial savings, flexibility in design, consistent quality, reduced site disruption, reduced construction time, and improved productivity. The results of Jaillon and Poon (2008) showed that the environmental, economic, and social benefits of prefabrication were significant compared with conventional construction methods [9]. This implies that wider use of prefabrication techniques can contribute to sustainable construction in a dense urban environment. In order to improve overall quality and efficiency, it is necessary to rethink and revise the way construction is carried out. The key lies in innovation and in removing the many barriers that limit the sector's enormous potential to create a sustainable built environment. Hence, it is essential to evaluate this landscape in a way that encourages informed discussion of the appropriateness of prefabrication relative to other construction methods. This paper is an initial step toward addressing this problem. The study aims to identify the advantages of prefabrication and the barriers to its adoption. It also investigates the current status of prefabrication adoption in the construction industry of Pakistan. The paper provides pre-requisite knowledge on, and a scenario of, prefabrication adoption in small-scale building projects. The results of this study may lead to broader research on prefabrication adoption in large and complex construction projects.
Sustainability Aspects of Prefabrication
Sustainability enables a holistic response to environmental and social crises and creates the necessary links between nature, culture, economy, politics, and technology. Prefabricated elements provide environment-friendly, energy-efficient, and cost-efficient solutions for buildings [16]. Prefabricated modular structures are becoming increasingly popular [17,18]. This is starting to lead customers to consider the sustainability effects of the construction, operation, and maintenance of projects. Today's world is striving to cope with upcoming challenges, including conserving natural resources, increasing the use of recycled materials, limiting environmental degradation, and controlling the overall cost of construction. All of this can be pursued by applying existing sustainability theories and refining their practical aspects. The result of this effort, which is evident in both highly developed and developing countries, is closely linked to the pressures of economic progress. A framework for sustainable infrastructure design should therefore review the economic impact of new prefabrication and construction technologies.
Research Methodology
An extensive literature review was carried out to identify the gap in the existing body of knowledge on prefabrication and its acceptance level in different countries, followed by Pakistan. After identifying this gap, a research method was designed to carry out the work. Pilot studies were then conducted to seek stakeholders' opinions on prefabrication and its factors, and finally a questionnaire was designed to collect data from the construction industry. The Average Index has been successfully used as a decision-making approach for such data sets, so the same approach is adopted in this study, and the final rankings are based on it. The complete research methodology is shown in Figure 1.
Data Collection and Analysis
A detailed literature review was carried out to identify the factors considered in this research. The identified factors were refined through a short pilot study. Experts' opinions from the pilot study were incorporated into the final questionnaire, which was sent to practitioners working in the construction industry by post and email. The respondents were asked to draw on their experience to assess the adoption level of prefabrication and its advantages and disadvantages, both in general and with specific reference to small-scale residential projects in the private and public sectors. A total of 159 questionnaires received during the data collection period were considered for this research.
The Average Index (AI) method has been successfully used for the analysis of such decision-making problems and is therefore used for the data analysis in this paper. The Average Index is computed as shown in Eq. 1:

AI = Σ(a_i X_i) / Σ X_i   (1)

where a_i is a constant expressing the weight given to response category i, and X_i is a variable expressing the frequency of responses for category i on the rating scale.
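To make the ranking procedure concrete, the short sketch below computes an Average Index from the response frequencies of a single factor on the 4-point scale used in this survey. The function and the example counts are illustrative only; they are not the actual questionnaire data.

```python
def average_index(frequencies):
    """Average Index for one factor.

    frequencies[i] is the number of respondents who chose scale point i + 1,
    so the weights a_i are simply 1..len(frequencies) (here a 4-point Likert scale).
    """
    weights = range(1, len(frequencies) + 1)
    total = sum(frequencies)
    return sum(a * x for a, x in zip(weights, frequencies)) / total

# Hypothetical counts for 159 respondents rating one advantage factor
example_counts = [5, 12, 29, 113]
print(round(average_index(example_counts), 2))  # -> 3.57, i.e. a highly ranked factor
```

Ranking then amounts to sorting the factors by their AI values, as done in Tables 1 and 2.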
Results and Discussion
As discussed earlier, the respondents were asked to share their opinions based on their work experience in the construction industry. They were provided with a 4-point Likert scale and asked to weight the factors that make prefabrication advantageous in small-scale residential building projects. Table 1 ranks these factors by AI score. Shortened construction time and reduced construction site waste are ranked first and second, with average index values of 3.57 and 3.48, respectively. This indicates that adopting prefabrication shortens the overall project duration and, because components are manufactured at a dedicated site or in a factory, reduces construction site waste. Better supervision, a more sustainable product, and environmental friendliness occupy the next ranks, followed by the other factors as shown.
In addition to the advantages of adopting prefabrication, the disadvantages (hindrances) of applying it were also investigated, using the same analysis. Table 2 shows the responses on the hindrances to applying prefabrication in building construction projects; among its entries, "increased production volume is required to ensure affordability through prefabrication" and "new process and unfamiliarity of the process" both received an AI of 2.45. Higher initial cost and strict, difficult design changes are ranked first and second, with average index values of 3.25 and 3.12, respectively. Because prefabricated components are manufactured at an early stage, any later change to the project design is inflexible and proves costly. Time-consuming initial design and leakage problems at the joints between prefabricated components rank third and fourth, followed by the other factors as shown.
Finally, the current status of prefabrication adoption was assessed. The extent to which prefabrication is used for different project elements is shown in Table 3.
Table 3. Adoption Level of Prefabrication in Small Scale Projects
It is observed that both the private and public sectors widely use prefabrication for false ceilings, and use it significantly for drainage and tiling works. Both sectors use prefabrication to some extent for kitchen items, washroom fixtures, boundary walls, and partition walls, meaning that they are adopting it for such items in building works. Both sectors have also started adopting the concept to some extent for structural elements such as beams, columns, and slabs. However, neither sector yet uses prefabrication for foundations, basements, piling, or stairs, where it is not accepted as a better replacement for cast-in-situ elements.
Conclusion
Prefabrication provides the construction industry with a much more efficient environment for productivity, eliminating the unnecessary distractions and interference typically encountered on conventional construction sites. In most cases, prefabrication takes less than half the time of traditional construction. This is due to better planning and design, the elimination of on-site problems, weather-related and subcontractor scheduling delays, and faster manufacturing, since multiple components can be built simultaneously. Prefabrication is a possible solution to the main causes of waste that arise in design and construction. It also contributes other benefits on site, such as shorter construction time, better environmental monitoring, and improved quality and sustainability. Reduced total construction cost and better aesthetic prospects are further important advantages. Considering the results, it can be concluded that the adoption of prefabrication is becoming a norm in building construction; although the conventional method is still used in the majority of the construction industry, the experience of developed countries suggests that the use of prefabrication is likely to increase and continue to grow in popularity. Customers who choose this option can benefit from a high-quality, faster, cost-effective, and environmentally friendly construction method.
Acknowledgement
The authors are thankful to Mehran University of Engineering & Technology, Jamshoro, Pakistan for providing the platform to conduct this research at master's level. The authors also extend their gratitude to Prince Sultan University, Riyadh, Saudi Arabia for its expert role throughout this research study.
Conflicts of Interest
The authors declare no conflict of interest.
|
v3-fos-license
|
2020-11-18T14:07:00.911Z
|
2020-11-16T00:00:00.000
|
226990862
|
{
"extfieldsofstudy": [
"Materials Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41598-020-76413-7.pdf",
"pdf_hash": "f64a64275dcc8cd46c521e35736cccc87b243113",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42350",
"s2fieldsofstudy": [
"Physics",
"Materials Science"
],
"sha1": "d98e1a14142a534dea4bb22fb12c4f8353bfa7ad",
"year": 2020
}
|
pes2o/s2orc
|
A modulation-doped heterostructure-based terahertz photoconductive antenna emitter with recessed metal contacts
We present the implementation of an efficient terahertz (THz) photoconductive antenna (PCA) emitter design that utilizes high mobility carriers in the two-dimensional electron gas (2DEG) of a modulation-doped heterostructure (MDH). The PCA design is fabricated with recessed metal electrodes in direct contact with the 2DEG region of the MDH. We compare the performance of the MDH PCA having recessed contacts with a PCA fabricated on bulk semi-insulating GaAs, on low temperature-grown GaAs, and an MDH PCA with the contacts fabricated on the surface. By recessing the contacts, the applied bias can effectively accelerate the high-mobility carriers within the 2DEG, which increases the THz power emission by at least an order of magnitude compared to those with conventional structures. The dynamic range (62 dB) and bandwidth characteristics (3.2 THz) in the power spectrum are shown to be comparable with the reference samples. Drude-Lorentz simulations corroborate the result that the higher-mobility carriers in the MDH increase the THz emission. The saturation characteristics were also measured via optical fluence dependence, revealing a lower saturation value compared to the reference samples. The high THz conversion efficiency of the MDH-PCA with recessed contacts at low optical power makes it an attractive candidate for THz-time domain spectroscopy systems powered by low power fiber lasers.
A stronger THz emission can be obtained from a PCA emitter that utilizes a GaAs-based, high electron mobility heterostructure, an example of which is the aluminum gallium arsenide (AlGaAs)/GaAs modulation-doped heterostructure (MDH). In a conventional AlGaAs/GaAs MDH, n-doped AlGaAs is separated from undoped GaAs by a thin spacer layer (Fig. 1a). Due to the alignment of the Fermi levels, a triangular well is formed in the conduction band. The confined carriers in this triangular well form a two-dimensional electron gas (2DEG) region, where the electrons have higher mobility and lower scattering from ionized impurities compared to electrons in the bulk GaAs region 11. Owing to the enhanced carrier mobility, the MDH is conventionally used for high-speed devices, particularly as a modulation-doped field effect transistor (MODFET), also referred to as a "high electron mobility transistor (HEMT)", as well as in applications in spectroscopy 12 and optoelectronics 13. The MDH utilized as a HEMT has been previously shown by Dyakonov et al. to function in the THz range as a detector, mixer, and multiplier owing to the utilization of the 2DEG 14-16. While this has sparked interest in MDH-based materials and devices for THz applications, previous works have focused on utilizing the MDH for THz detection 15-19, rather than generation.
Previous works have shown that the application of an external magnetic field enhances the THz emission of several semiconductors 20,21 . We have previously observed in a bare MDH that the polarity of the applied magnetic field parallel to the surface and normal to the reflection plane dictated the THz enhancement factor 22 , and demonstrated via temperature-dependent THz-time domain spectroscopy (THz-TDS) that the high-field region in the 2DEG is responsible for the THz emission in a MDH 23 . This effect was most pronounced in the AlGaAs/GaAs MDH and highest when the external magnetic field was applied parallel to the heterojunction of the MDH. Doing so made the carriers in the MDH mimic the motion of the carriers of a PCA emitter under normal, biased operation 24 .
In this paper, we report on the characteristics of the AlGaAs/GaAs MDH utilized as a PCA with recessed metal contacts, a design we have previously proposed 24 . The MDH PCA emitter, along with standard SI-GaAs and LT-GaAs PCAs, were fabricated by standard lithography techniques and were tested via THz-TDS measurements to understand and compare each of the devices' performance. We show that by exploiting the transport of high-mobility electrons along the 2DEG region, the MDH PCA with recessed contacts shows THz emission amplitude increased by a factor of 7 over that of a SI-GaAs PCA, and roughly by a factor of 1.5 over that of a LT-GaAs PCA. To analyze how the enhanced mobility and reduced scattering would affect the devices, the THz emission characteristics of the PCA devices using the different substrates, namely SI-GaAs, LT-GaAs, unrecessed MDH and recessed MDH, were simulated using the Drude-Lorentz model. With its strong THz emission and compact dimensions, the recessed MDH PCA can help pave the way to more efficient, compact, and turn-key THz spectroscopy solutions. Figure 1 shows the cross-section of an AlGaAs/GaAs MDH PCA with surface contacts (Fig. 1a) and recessed contacts (Fig. 1b). The recessed features had an etch depth of d = 187 nm. The PCA pattern used for both was a dipole antenna with a gap of g = 5 µm. The recessed features could be achieved by selectively etching the layers prior to the deposition of the metal contacts. The proximity of the metal contacts to the 2DEG provides easier access for the electric bias to utilize the 2DEG region of the AlGaAs/GaAs MDH, resulting in a stronger THz wave emission. Epitaxial growth, lithography and fabrication are discussed in more detail in the Methods.
Photoconductive antenna design and simulation
The mechanism behind the experimental results is supported by numerical simulations of the THz emission using the one-dimensional Drude-Lorentz model 25,26, a simple yet accurate model of the generation of THz electromagnetic radiation. As a femtosecond optical pulse is made incident onto the photoconductive gap, the photogenerated electron-hole pairs are swept by the applied electrical bias. The transient photocurrent density j is given by

j = e n_f (v_h − v_e),   (1)

where e is the electron charge, n_f is the free carrier density, and v_h and v_e are the average hole and electron velocities, respectively. The time dependence of the free carrier density is given by

dn_f/dt = −n_f/τ_c + G(t),   (2)

where τ_c is the carrier capture time and G(t) is the carrier generation rate of the form n_0 exp(−t²/p²) produced by optical excitation. The acceleration of holes and electrons is given by

dv_{h,e}/dt = −v_{h,e}/τ_s + (q_{h,e}/m*_{h,e}) E_loc,   (3)

where v_{h,e} is the average velocity, q_{h,e} is the charge, m*_h = 0.34 m_{e,0}, m*_e = 0.067 m_{e,0}, τ_s is the momentum relaxation time given by the Drude relation τ_s = μ_i m*_i/q_i, and E_loc is the local electric field given by

E_loc = E_bias − P_sc/(η ε),   (4)

where P_sc is the space-charge polarization created by the carriers separating under the applied field, ε is the dielectric constant of the material, and η is the geometrical factor of the antenna. For this work, we use ε = 12.9 ε_0, which is the dielectric constant of GaAs 27. The time dependence of the space-charge polarization is

dP_sc/dt = −P_sc/τ_r + j,   (5)

where τ_r is the recombination lifetime. Taking the time derivative of Eq. (3) and then inserting Eq. (4), the second time derivative of the velocity is given by

d²v_{h,e}/dt² = −(1/τ_s) dv_{h,e}/dt − (q_{h,e}/(η ε m*_{h,e})) dP_sc/dt.   (6)

Solving both Eqs. (5) and (6) and using Eqs. (1) and (2) gives the photocurrent density j. At far field, the THz electric field E_THz(t) is proportional to the time derivative of the photocurrent density,

E_THz(t) ∝ ∂j(t)/∂t.   (7)

The THz wave radiated from the emitter PCA is assumed to reach the PCA detector without any losses. The probe beam generates electron-hole pairs, and the THz electric field incident on the detector PCA sweeps the photocarriers. The current density at the PCA detector is given by 28

J_det(t) ∝ ∫ σ_s(t − t′) E_THz(t′) dt′,   (8)

where σ_s(t) is the transient surface conductivity of the photoconductive substrate of the detector. The transient surface conductivity of the detector was modelled using a LT-GaAs substrate with τ_c,det = 0.15 ps and τ_s,det = 40 fs. These detector values best replicate the frequency response of the experimental data and are kept as constant PCA detector parameters when simulating the SI-GaAs, LT-GaAs, MDH (Top) and MDH (Recessed) PCA emitters.
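The sketch below illustrates how the coupled rate equations (1)-(5) can be integrated numerically with a simple Euler scheme to obtain the transient photocurrent and a far-field THz waveform proportional to its time derivative. It is an illustration only: the material parameters (mobilities, lifetimes, bias field, antenna factor η, pulse width) are placeholder assumptions rather than the values in Table 2, and all amplitudes are in arbitrary units.

```python
import numpy as np

# Placeholder parameters (assumed for illustration, not the values used in the paper)
e, m0, eps0 = 1.602e-19, 9.109e-31, 8.854e-12
eps = 12.9 * eps0             # GaAs dielectric constant
eta = 1.0                     # antenna geometrical factor (assumed)
tau_c, tau_r = 1e-12, 10e-12  # carrier capture and recombination times (assumed)
mu_e, mu_h = 0.5, 0.04        # carrier mobilities in m^2/(V s) (assumed)
m_e, m_h = 0.067 * m0, 0.34 * m0
tau_se = mu_e * m_e / e       # momentum relaxation times from the Drude relation
tau_sh = mu_h * m_h / e
E_bias = 3.2e6                # bias field in V/m (assumed)
n0, p = 1.0, 100e-15          # generation amplitude (arb. units) and optical pulse width

dt = 1e-15
t = np.arange(0.0, 5e-12, dt)
n_f = v_e = v_h = P_sc = 0.0
j = np.zeros_like(t)

for i, ti in enumerate(t):
    G = n0 * np.exp(-((ti - 0.5e-12) / p) ** 2)    # optical generation rate, cf. Eq. (2)
    E_loc = E_bias - P_sc / (eta * eps)            # screened local field, Eq. (4)
    n_f += dt * (-n_f / tau_c + G)                 # free carrier density, Eq. (2)
    v_e += dt * (-v_e / tau_se - e * E_loc / m_e)  # electron drift velocity, Eq. (3)
    v_h += dt * (-v_h / tau_sh + e * E_loc / m_h)  # hole drift velocity, Eq. (3)
    j[i] = e * n_f * (v_h - v_e)                   # photocurrent density, Eq. (1)
    P_sc += dt * (-P_sc / tau_r + j[i])            # space-charge polarization, Eq. (5)

E_THz = np.gradient(j, dt)                         # far-field THz field ~ dj/dt, Eq. (7)
```

In practice the resulting E_THz trace would then be convolved with the detector response of Eq. (8) before being compared with a measured waveform.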
The simulation requires physical parameters of the semiconductor substrates, specifically, carrier density n f , capture time τ c , scattering time τ s , and recombination time τ r . These parameters have been well-documented for SI-GaAs, while the values for LT-GaAs would depend on the growth temperature. For the MDH samples, Hall mobility measurements were performed to measure the actual carrier density and mobility values. Van der Pauw configuration was utilized by applying indium contacts on top of the MDH and the magnetic field was supplied using a 3 T Lakeshore magnet.
Results and discussion
Identical dipole-type patterns (gap width g = 5 μm) were fabricated on the surfaces of a SI-GaAs (100) substrate, a LT-GaAs (growth temperature 270 °C) substrate, and a MDH substrate; and on a separately prepared piece of the same MDH sample, an antenna was fabricated with the electrical contacts recessed, as described earlier. From here onwards, we refer to the fabricated PCAs as "SI-GaAs", "LT-GaAs", "MDH (Top)" and "MDH (Recessed)". The PCAs were biased at a frequency of 20 kHz and a peak-to-peak voltage amplitude of 32 V. The powers of the pump beam and probe beam were both maintained at 9.5 mW, unless otherwise stated. Figure 2a shows the THz time-domain emission spectra from the fabricated PCAs. The generated THz waves were detected using a commercial LT-GaAs dipole-type PCA with a 3.4 µm gap. Among the four antennas, the highest THz peak-to-peak amplitude was observed from the MDH (Recessed) PCA, followed by the LT-GaAs, MDH (Top) and SI-GaAs PCAs. Between the two MDH PCAs, we find that by recessing the contacts the bias is able to access the 2DEG region more effectively, and the increased drift transport of carriers in the 2DEG resulted in the generation of higher THz emission.
The dynamic range of the PCAs as a function of THz frequency is shown in Fig. 2b, where each plot was given an appropriate y-offset such that the noise floor average coincides with the y = 0 dB line (dotted line). The inset shows the THz power spectra plotted on a linear scale, to provide the reader with a visual context of the spectral difference in THz emission among the devices. The maximum dynamic range for all PCAs is ~ 60 dB. However, between 0.4 THz and 1.5 THz, the dynamic range of the MDH PCAs and LT-GaAs PCAs is higher, by around 10 dB at most, compared to the SI-GaAs PCA; and at frequencies higher than 1.5 THz, the MDH PCAs have a slightly higher dynamic range (~ 5 dB) than the LT-GaAs PCA. The increased density of high velocity carriers participating in the THz emission process 29 increases the higher frequency components of the spectra. The performance characteristics are detailed in Table 1.
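As an illustration of how a dynamic range curve such as the one in Fig. 2b can be extracted from a measured trace, the sketch below takes a synthetic THz time-domain waveform, computes its power spectrum, and offsets it so the noise floor sits near 0 dB. The waveform, sampling step, and noise level are invented for the example and do not correspond to the measured data.

```python
import numpy as np

dt = 0.05e-12                                   # sampling step, assumed
t = np.arange(0.0, 60e-12, dt)
pulse = np.exp(-((t - 10e-12) / 0.3e-12) ** 2) * np.sin(2 * np.pi * 1e12 * (t - 10e-12))
waveform = pulse + 1e-3 * np.random.randn(t.size)   # synthetic single-cycle pulse + noise

power = np.abs(np.fft.rfft(waveform)) ** 2          # power spectrum
freqs = np.fft.rfftfreq(t.size, d=dt)               # frequency axis in Hz
power_db = 10 * np.log10(power / power.max())

noise_floor = np.median(power_db[freqs > 5e12])     # floor estimated above the signal band
dynamic_range_db = power_db - noise_floor           # shift so the floor averages ~0 dB
print(f"maximum dynamic range ~ {dynamic_range_db.max():.0f} dB")
```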
The THz waveforms and FFT spectra from the photoconductive antenna simulation using the one-dimensional Drude-Lorentz model are shown in Fig. 3. The values of the carrier density n_f, capture time τ_c, scattering time τ_s and recombination time τ_r used in the simulation are detailed in Table 2. The scattering time was deduced from the experimentally-obtained mobility values using the Drude relation τ_s = μ_i m*_i/q_i (or vice-versa when the scattering time is known, such as for SI-GaAs and LT-GaAs). The SI-GaAs has a carrier capture time in the order of hundreds of picoseconds, a relatively high scattering time, and a high mobility of > 5000 cm²/(V·s) 30-32. The LT-GaAs used in this work was grown at T_s = 270 °C, and the presence of defects leads to a picosecond carrier lifetime, a low scattering time and a low mobility of < 1000 cm²/(V·s) 9,33. The time-scales of the MDH samples were estimated from literature values based on capacitive measurements or time-resolved measurements 34,35. For the MDH (Top) structures, the carrier density and mobility were chosen close to the actual Hall measurement values, while the mobility of the MDH (Recessed) sample was scaled accordingly, assuming that the radiated terahertz electric field E_THz is directly proportional to the mobility μ. The effective mobility of the carriers contributing to the source current for THz emission is improved by the direct contact of the metal to the 2DEG region. The resulting trends in the simulation are in good agreement with the experimentally measured THz radiation for both the time domain (Figs. 2a, 3a) and the frequency spectra (Fig. 2b inset, 3b). This includes the increased amplitude of higher THz frequency components with the electron mobility of carriers participating in the THz generation process. A representative comparison showing the experimental data and simulation for the MDH (Recessed) is shown in Fig. 3c. A deviation between the time-domain waveforms and FFT spectra of the experimental and simulation results is explained by the deformation or renormalization of the THz waveform due to the antenna response, the frequency-dependent focusing characteristics of the THz optics, and the water vapor absorptions, all of which have been ignored in the simulation.
A comparison between the data and the simulation is presented in a bar graph in Fig. 3d. The differences in THz emission amplitude between the data and the simulation imply that the etched distance from the surface leaves room for optimization. Nonetheless, the good agreement between the simulation results and the experimental data shows how the MDH-PCA design effectively utilizes the high-mobility 2DEG region in improving THz yield. The dependence of the THz emission amplitude on the optical fluence was obtained (Fig. 4) by varying the laser power incident on the THz emitter PCAs. At any given pump fluence, the SI-GaAs PCA emits the lowest THz emission amplitude, followed by the MDH (Top) PCA, and lastly, the LT-GaAs PCA. The MDH (Recessed) PCA has the highest THz emission amplitude. Even at the lowest fluence value (< 0.5 mJ/cm²), the THz emission from the MDH (Recessed) PCA was 5 times higher than the THz emission of the LT-GaAs PCA. The saturation fluence F_sat can be calculated from fits to the equation E_THz(F) ≈ A(F/F_sat)/(F + F_sat), where A is the amplitude of the radiated field and F is the incident beam fluence. The saturation fluence values are 5.83 mJ/cm² and 6.89 mJ/cm² for the SI-GaAs and LT-GaAs PCAs, respectively. For the MDH (Top) PCA, F_sat is 12.82 mJ/cm². When the contacts are recessed, however, the value of F_sat is significantly reduced to 1.15 mJ/cm². The saturation of the emitted THz radiation from PCAs with optical fluence is, in general, attributed to the screening effect that arises from the high photocarrier density 39-41, and the saturation fluence is inversely proportional to the carrier mobility 40. With recessed contacts, the applied bias becomes more efficient as it directly accesses the high mobility region, in contrast to when it is applied from the surface. This improved efficiency in the bias conditions outweighs the corresponding detrimental effects of screening.
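The saturation analysis above can be reproduced with a simple least-squares fit of the quoted relation E_THz(F) ≈ A(F/F_sat)/(F + F_sat). The sketch below uses invented fluence and amplitude values purely to show the fitting step; for real data the returned A and F_sat would be the quantities reported in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def e_thz(F, A, F_sat):
    # Saturation relation quoted in the text
    return A * (F / F_sat) / (F + F_sat)

# Hypothetical fluence (mJ/cm^2) and THz amplitude (arb. units) pairs
F = np.array([0.3, 0.7, 1.5, 3.0, 5.0, 8.0])
E = np.array([0.35, 0.62, 0.85, 1.05, 1.15, 1.22])

(A_fit, F_sat_fit), _ = curve_fit(e_thz, F, E, p0=[2.0, 2.0])
print(f"A = {A_fit:.2f}, F_sat = {F_sat_fit:.2f} mJ/cm^2")
```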
As a point of reference, LT-GaAs is the most common commercially-available photoconductive material used for emitters because of the ultrashort carrier lifetime due to the high concentration of defects 9,42 . We find that compared to LT-GaAs, the MDH (Recessed) PCA emits a higher THz peak to peak amplitude and has a greater maximum dynamic range, even as they emit at the same THz spectral bandwidth. While LT-GaAs does have a higher saturation fluence at any given moderate fluence value, the efficiency of the MDH (Recessed) PCA is consistently higher in the < 5 mJ/cm 2 fluence range. The MDH (Recessed) PCA would be a good candidate for low laser power applications because of its high THz emission yield. When building THz-TDS spectrometers driven by compact low power fiber lasers, the efficiency of optical-to-THz power is crucial.
In summary, the previously-proposed PCA design was successfully implemented using an n-AlGaAs/GaAs MDH. The MDH was etched to recess the metal for direct contact with the 2DEG region of the MDH. As corroborated by the Drude-Lorentz simulation, the influence of the high mobility carriers in the 2DEG was shown to drive the increase in THz emission. The MDH PCA with recessed contacts has the largest THz peak-to-peak emission and THz power, as compared to the LT-GaAs and MDH PCAs with top contacts, even as their dynamic and spectral ranges are comparable. The high THz emission and low saturation fluence of the MDH recessed contacts offer a feasible solution for THz-TDS systems that are designed to be powered by low power fiber lasers.

Figure 5a shows the growth schematics of the n-AlGaAs/GaAs MDH and LT-GaAs. The MDH layer was grown via a RIBER 32P molecular beam epitaxy system on an epiready (100)-oriented SI-GaAs substrate. The substrate was first heated in situ at 590 °C for 10 min to remove its native oxides. The substrate temperature was then raised to 610 °C to facilitate the growth of a 1.5 µm GaAs buffer layer at a growth rate of 1 µm/hr. Afterward, this layer was followed by the growth of a 150 Å AlGaAs (x = 0.2) spacer at a growth rate of ~ 1.2 µm/hr. The silicon dopant effusion cell was then opened to facilitate the growth of an 800 Å n-AlGaAs donor layer using the same growth conditions aside from a nominal doping concentration of ~ 1 × 10^17 cm^-3. The growth was then terminated through the growth of a 200 Å n-GaAs cap. The LT-GaAs layer was grown in the same MBE system, on a similar epiready (100) SI-GaAs substrate. The substrate was first heated in situ at 590 °C for 10 min to remove its native oxides. The substrate temperature was then raised to 630 °C for the growth of a 0.2 µm GaAs buffer. Afterward, the substrate temperature was lowered to 270 °C, where a 2 µm LT-GaAs thin film was grown. The substrate temperature was then raised to 600 °C in order to anneal the LT-GaAs layer for 10 min. The growth was then terminated through the growth of a 200 Å n-GaAs cap. All of the layers for this sample were grown at a growth rate of 1 µm/h.

Figure 5b shows the energy band diagram of a typical MDH. At the heterojunction between the highly doped n-AlGaAs layer and the undoped GaAs, a triangular quantum well is formed due to the alignment of the Fermi energies of the two materials. The MDH structure was originally designed to be used as a HEMT, as conduction electrons in the n-AlGaAs layer easily become trapped in the triangular quantum well (transistor channel), where they can move laterally at very low resistance (i.e. high mobility) 23,24.
Methods
All of the samples used, namely two of MDH, LT-GaAs and SI-GaAs, underwent standard degreasing by immersion in trichloroethylene, acetone, and methanol. A MIDAS MDA-400 M mask aligner was used to transfer a dipole PCA structure with a gap g = 5 µm onto the surfaces of the LT-GaAs, the SI-GaAs and one of the MDH wafers (to create MDH (Top)). The other MDH substrate (to create the MDH (Recessed) PCA) was also patterned using the same dipole PCA structure, albeit defocused, to obtain a slightly larger pattern than that of the original. This sample was etched in an acid piranha solution consisting of 1:8:80 volumetric ratio of H 2 SO 4 :H 2 O 2 :deionized H 2 O, which reached a depth of d = 187 nm. After etching, the MDH (Recessed) substrate was patterned with the same dipole PCA pattern for metallization. AuGe/Ni/Au with nominal thicknesses of 55/15/85 nm were evaporated onto the samples by resistive evaporation and electron beam deposition. After metal lift-off, all of the PCAs were annealed inside a tube furnace at 400 °C under nitrogen gas-rich environment for 1 min.
The THz emission characteristics of the samples were measured using a standard THz-TDS spectroscopy setup. The 780 nm line of a Menlo C-fiber femtosecond fiber laser with a pulse duration of 100 fs and a repetition rate of 100 MHz was used. The laser beam was split into pump and probe beams using a beam splitter. The pump beam was used to excite the emitter samples and the probe beam was used to optically gate a commercial 3.4 µm LT-GaAs dipole detector. The pump and probe powers were both maintained at 9.5 mW, unless otherwise stated. The PCA emitters were biased with a 32 V peak-to-peak square wave at a frequency of 20 kHz.
|
v3-fos-license
|
2018-04-03T01:45:26.617Z
|
2011-05-31T00:00:00.000
|
16236911
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0020297&type=printable",
"pdf_hash": "a831f61732240063675cd542a81f26742fbf2dcd",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42351",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "a831f61732240063675cd542a81f26742fbf2dcd",
"year": 2011
}
|
pes2o/s2orc
|
The Molecular Subtype Classification Is a Determinant of Sentinel Node Positivity in Early Breast Carcinoma
Introduction Several authors have underscored a strong relation between the molecular subtypes and the axillary status of breast cancer patients. The aim of our work was to decipher the interaction between this classification and the probability of a positive sentinel node biopsy. Materials and Methods Our dataset consisted of a total number of 2654 early-stage breast cancer patients. Patients treated at first by conservative breast surgery plus sentinel node biopsies were selected. A multivariate logistic regression model was trained and validated. Interaction covariate between ER and HER2 markers was a forced input of this model. The performance of the multivariate model in the training and the two validation sets was analyzed in terms of discrimination and calibration. Probability of axillary metastasis was detailed for each molecular subtype. Results The interaction covariate between ER and HER2 status was a stronger predictor (p = 0.0031) of positive sentinel node biopsy than the ER status by itself (p = 0.016). A multivariate model to determine the probability of sentinel node positivity was defined with the following variables; tumour size, lympho-vascular invasion, molecular subtypes and age at diagnosis. This model showed similar results in terms of discrimination (AUC = 0.72/0.73/0.72) and calibration (HL p = 0.28/0.05/0.11) in the training and validation sets. The interaction between molecular subtypes, tumour size and sentinel nodes status was approximated. Discussion We showed that biologically-driven analyses are able to build new models with higher performance in terms of breast cancer axillary status prediction. The molecular subtype classification strongly interacts with the axillary and distant metastasis process.
Introduction
Gene expression profiling of invasive breast carcinoma has resulted in highlighting three main categories of breast cancer with very specific features [luminal-like, basal-like, HER2-like] [1]. Wirapati et al [2] showed that three main vector-genes [ESR1, HER2 and STK6, a marker of proliferation] are the biological backbone of this classification. Although the methodology to determine the molecular subtypes has still to be improved [3], many publications have validated this classification [2] [4]. It has been shown that the molecular subtypes differ in their response to neoadjuvant systemic treatment [5], loco-regional recurrence [6], metastasis pattern [7,8], time to metastasis and overall survival [3]. Furthermore, several authors have underscored a strong relation between the molecular subtypes classification and the axillary status of breast cancer patients [9][10][11][12][13][14][15][16]. As the nodal status is the most robust and the strongest factor correlated to overall survival in breast cancer patients, and is one of the major determinants in therapeutic decisions, axillary staging (either by sentinel node biopsy or axillary lymph node dissection) is a mandatory step in breast cancer management. Many predictors of axillary lymph node metastases have been previously published. Tumour size, tumour grade, tumour location, presence of lymphatic/vascular invasion, high MIB-1 index, age at diagnosis, S phase, estrogen receptor status (ER), progesterone receptor status (PR), and HER2 status are independent variables identified in these studies [17][18][19][20][21][22][23][24][25].
The aim of our work was to decipher the relation between the molecular subtype classification as defined by a combination of ER and HER2 status evaluated by immuno-histochemistry (IHC) and confirmed by FISH in case of IHC-HER2 2+ and the probability of a positive sentinel node biopsy. Using one training set and two validation sets we showed a benefit to introducing the ER and HER2 biomarkers interaction covariate to identify, before surgery, a patient with a high risk of axillary metastasis. Furthermore, we showed for each molecular subtype a very specific correlation pattern between the tumour size and the probability of a positive sentinel node biopsy. We hypothesized from these results that the axillary lymph node metastasis process is predominantly correlated to intrinsic biological properties in the ER negative HER2 negative breast cancer subgroup whereas stochastic events, tumour size, growth rate and lympho-vascular invasion are the main determinants in the ER positive or HER2 positive breast cancer subgroups.

Patients included in this study had early-stage breast carcinoma with a normal physical examination of the axilla and were treated at first by conservative surgery plus a sentinel node (SN) biopsy. The procedure was performed with blue patent, radioisotope or a combination, as previously described, in line with French recommendations. SN biopsy was performed as previously described [26]. Axillary lymph node dissection was performed during the same procedure when the SN was positive by imprint cytology or frozen section. A second operation was performed when either hematoxylin-eosin staining or immunohistochemistry revealed tumor cells in the SN postoperatively, including isolated tumour cells. Pathologic SN examination methods were as reported previously [26]. Patients receiving a neoadjuvant treatment (chemotherapy, hormone-therapy or radiotherapy) or with a locoregional recurrence were systematically excluded from the study. The clinical data (age at diagnosis, treatment protocols) were extracted from the Institut Curie prospective breast cancer database and from the Hospital Tenon, department of gynecology, prospective breast cancer database.
Tumor samples
The following histological features were retrieved: tumour type, tumour size, histological grade according to the Elston and Ellis grading system (Histopathology 1991), mitotic index, lympho-vascular invasion, estrogen receptor status, progesterone receptor status, HER2 status, number of positive sentinel nodes, and number of sentinel nodes. The Mitotic Index (MI) corresponded to the number of mitoses observed in 10 successive high power fields (HPF) using a microscope with a 40x/0.7 objective and a 10x ocular. The Mitotic Index was assessed on histological sections stained with Hematein, Eosin and Saffron. The criteria of Van Diest et al were used to define mitotic figures [27,28]. Estrogen Receptor (ER) and Progesterone Receptor (PR) immunostainings were determined as follows. After rehydration and antigenic retrieval in citrate buffer (10 mM, pH 6.1), the tissue sections were stained for estrogen receptor (clone 6F11, Novocastra, 1/200), and progesterone receptor (clone 1A6, Novocastra, 1/200). Revelation of staining was performed using the Vectastain Elite ABC peroxidase mouse IgG kit (Vector Burlingame, CA) and diaminobenzidine (Dako A/S, Glostrup, Denmark) as chromogen. Positive and negative controls were included in each slide run. Cases were considered positive for ER and PR according to standardized guidelines using ≥10% of positive nuclei per carcinomatous duct. HER2 over-expression status was determined according to the American Society of Clinical Oncology (ASCO) guidelines [29].
The SLN histopathological assessment protocol has been published by Fréneaux et al [26]. SLN samples were serially sectioned and stained with HE. Negative HE cases were then analyzed by serial sectioning with IHC. Positive sentinel nodes were classified into two groups according to the size of the metastasis: macrometastasis (>2 mm) and micrometastasis (≤2 mm), detected either by HE staining or by cytokeratin IHC.
Statistical model
Baseline characteristics were compared between groups using Chi-square or Fisher's exact tests for categorical variables and Student's t-tests for continuous variables. To develop well-calibrated and exportable nomograms for the prediction of sentinel node positivity, we built a multivariate logistic regression model in a training cohort and validated it in two independent validation cohorts. First, univariate logistic regression analysis was performed to test the association of the sentinel lymph node status with the following variables: patient age, tumor diameter, histologic type of tumor, histological grade, lymphovascular invasion, ER status, PR status, and HER2 status. The interaction covariate between ER and HER2 status was tested. The log-linearity of the continuous variables was studied by fitting polynomial functions of different degrees or step functions in a logistic model. Age at diagnosis was subdivided into 3 classes and the tumour size was kept as a continuous variable. Second, a multivariate logistic regression analysis was performed to determine the probability of having a positive sentinel node biopsy procedure and to build a nomogram. Significant variables identified through univariate analysis were used as input in the multivariate analysis. The multivariate model performance was quantified with respect to discrimination and calibration. Discrimination (i.e., whether the relative ranking of individual predictions is in the correct order) was quantified with the area under the receiver operating characteristic curve. Calibration (i.e., agreement between observed outcome frequencies and predicted probabilities) was studied with graphical representations of the relationship between the observed outcome frequencies and the predicted probabilities (calibration curves): the grouped proportions versus the mean predicted probability in groups defined by deciles and the logistic calibration were represented. The calibration was tested using the Hosmer-Lemeshow test. This test compares mean predicted probabilities and observed proportions using an 8-degree-of-freedom chi-square for the training set and a 9-degree-of-freedom chi-square for the validation sets. The analyses were performed using R software (http://cran.r-project.org).
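The original analyses were carried out in R; the sketch below only illustrates, in Python, the core modelling step described above: a logistic regression of sentinel node status on tumour size, lympho-vascular invasion, age class, and the ER x HER2 interaction covariate (i.e., the four molecular subtypes), followed by the area under the ROC curve. The synthetic data mimic the variable types only and carry no clinical meaning.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1543                                           # size of the training cohort
df = pd.DataFrame({
    "sn_positive": rng.integers(0, 2, n),          # sentinel node status (0/1), synthetic
    "tumour_size": rng.gamma(2.0, 7.0, n),         # tumour size in mm, continuous
    "lvi": rng.integers(0, 2, n),                  # lympho-vascular invasion (0/1)
    "age_class": rng.choice(["<40", "40-60", ">60"], n),
    "er": rng.choice(["neg", "pos"], n, p=[0.2, 0.8]),
    "her2": rng.choice(["neg", "pos"], n, p=[0.9, 0.1]),
})
# The ER*HER2 interaction covariate defines the four molecular subtypes used in the model
df["subtype"] = df["er"] + "_" + df["her2"]

model = smf.logit("sn_positive ~ tumour_size + lvi + C(age_class) + C(subtype)",
                  data=df).fit(disp=0)
pred = model.predict(df)
print(model.summary2().tables[1][["Coef.", "P>|z|"]])
print("AUC:", round(roc_auc_score(df["sn_positive"], pred), 2))
```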
A Java web-based interface is available at www.cancerdusein.curie.fr. The study was approved by the breast cancer study group of the Institut Curie. Table 1 summarizes the training set (1543 patients) and the two validation sets (615 and 496 patients). These three populations significantly differ in terms of age at diagnosis, ER status, HER2 status, histological grade, lympho-vascular invasion, histological subtypes, number of sentinel nodes removed and number of positive sentinel node biopsies. These differences are of major interest in a validation process to test the robustness of a classification algorithm. The training set (Table 2) was composed of 516 patients with a positive sentinel node biopsy (33.4%) and 1027 patients with a negative sentinel node biopsy (66.6%). We showed that patients with a positive sentinel node biopsy differed from those with a negative biopsy in terms of age at diagnosis, ER status, pathological tumor size, histological grade, mitotic index, lympho-vascular invasion and number of sentinel nodes removed. The proportion of patients with a positive HER2 status was not significantly different between the two groups [8.6% vs 7.6%, p = 0.58]. The interaction covariate between ER and HER2 status [ERneg HER2neg, ERpos HER2neg, ERpos HER2pos, ERneg HER2pos] was a stronger predictor (p = 0.0031) of positive sentinel node biopsy than the ER status by itself (p = 0.016). We designed a multivariate logistic regression model to determine the probability of having a positive sentinel node biopsy (Table 3). The initial input was based on the variables found significant in the univariate analysis. Tumour size, lympho-vascular invasion, the molecular subtypes classification as defined by the interaction covariate between the ER and HER2 status, and age at diagnosis were the final inputs into this model. Odds ratios, confidence intervals and p-values are summarized in Table 3. The logistic regression parameters indicate the relative degree to which each of these variables is correlated to nodal metastasis. The performance of the multivariate model in the training and the two validation sets was analyzed in terms of discrimination and calibration. We then estimated the probability of having a positive sentinel node biopsy procedure for each molecular subtype (Figure 3, Table 4). We showed an almost null slope of the correlation axis in the ER negative HER2 negative subgroup: the probability of having an axillary metastasis was roughly 20% whatever the tumour size. Both ER positive (either HER2 negative or positive) tumour subgroups showed an intermediate slope and the ER negative HER2 positive tumour subgroup showed the steepest slope. Tumour size was a major determinant of axillary metastasis development only in the HER2 positive or ER positive tumour subgroups. Sentinel node biopsies for breast cancers of less than 30 mm were associated with a rate of less than 30% of axillary metastasis in the ER negative HER2 negative subgroup and with one higher than 50% in the other three subgroups. For each molecular subtype as defined by a combination of ER and HER2 immuno-histochemistry markers, we summarized (Table 5) eight publications addressing the percentage of axillary metastases [9][10][11][12][13][14][15][16].
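For the calibration part of the validation, a minimal version of the Hosmer-Lemeshow test described above can be written as follows (8 degrees of freedom for the training set, 9 for the validation sets). The observed outcomes and predicted probabilities below are synthetic placeholders standing in for a fitted model and a cohort.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2

def hosmer_lemeshow(observed, predicted, groups=10, dof=8):
    d = pd.DataFrame({"y": observed, "p": predicted})
    d["bin"] = pd.qcut(d["p"], groups, labels=False, duplicates="drop")
    stat = 0.0
    for _, g in d.groupby("bin"):
        obs, exp, n = g["y"].sum(), g["p"].sum(), len(g)
        stat += (obs - exp) ** 2 / (exp * (1.0 - exp / n) + 1e-12)
    return stat, chi2.sf(stat, dof)

rng = np.random.default_rng(1)
p_hat = rng.uniform(0.05, 0.8, 1000)     # predicted probabilities (placeholder)
y_obs = rng.binomial(1, p_hat)           # outcomes drawn so the model is well calibrated
stat, p_value = hosmer_lemeshow(y_obs, p_hat)
print(f"HL chi-square = {stat:.2f}, p = {p_value:.2f}")
```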
Discussion
The aim of our work was to decipher the relation between the molecular subtype classification as defined by a combination of ER and HER2 status and the probability of a positive sentinel node biopsy. Using one training set and two validation sets, we showed a benefit to introducing the ER and HER2 biomarkers interaction covariate to identify, before surgery, a patient with a high risk of axillary metastasis. Using tumour size, lympho-vascular invasion, the molecular subtypes classification and age at diagnosis, we designed a robust multivariate logistic regression model to determine the probability of having a positive sentinel node biopsy. We validated this model in two independent and very different datasets and showed a very similar performance in terms of calibration and discrimination. Lu et al. identified a similar multivariate model to predict lymph node metastases that included tumour size, lympho-vascular invasion and tumour subtypes defined by a combination of ER status, HER2 status and modified Bloom and Richardson grade [9]. Furthermore, we identified for each molecular subtype a very specific correlation pattern between the tumour size and the probability of a positive sentinel node biopsy. The nodal status of the ER negative HER2 negative breast cancer subgroup was almost independent of the tumour size, with a relatively constant rate of axillary metastases around 20%. Conversely, the ER positive or HER2 positive breast cancer subgroups showed a strong and almost linear correlation between the tumour size and the percentage of axillary metastasis.
Tumour size and lympho-vascular invasion are the main predictors of axillary metastases identified in many studies [17][18][19][20][21][22][23][24][25]. However, tumour size and lympho-vascular invasion have never been robustly related to any pathological or biological marker. High-throughput gene expression profile analyses have failed to identify a set of genes correlated to the nodal status, the tumour size or the lympho-vascular invasion [9]. The gene expression profiles of paired primary tumours and corresponding axillary metastases have previously been shown to be very similar [30]. From these observations, conclusions have been drawn that growth rate, time and stochastic factors seem to be the main determinants of the nodal status. However, several authors have recently underscored a significant relation between the molecular subtypes classification and the axillary status of breast cancer patients [9][10][11][12][13][14][15][16]. This evidence sustains the idea that nodal status is still a potential signature of the intrinsic biological properties of a primary tumour. Perou et al identified the molecular subtype classification in the late 90s and it was a major breakthrough in the breast cancer research process [1]. This classification underscored the great heterogeneity of breast cancer. It is now common knowledge that the pathologic characteristics, the aCGH profiles, the gene and miRNA expression profiles and the altered pathways are dramatically different between these categories, sustaining an overview of breast cancer as a disease composed of very different and independent molecular subgroups.
For each molecular subtype as defined by a combination of ER and HER2 immuno-histochemistry markers, we summarized (Table 5) eight publications addressing the percentage of axillary metastases [9][10][11][12][13][14][15][16]. As reported in our study, the ER negative HER2 negative tumour subgroup has the lowest rate of axillary metastasis and the HER2 positive subgroup the highest. From these combined results, we hypothesized that the axillary lymph node metastasis process is predominantly related to intrinsic biological properties in the ER negative HER2 negative breast cancer subgroup, whereas stochastic events, tumour size, growth rate and lympho-vascular invasion are the main determinants in both the ER positive and the HER2 positive breast cancer subgroups. As the molecular subtypes differ in terms of relapse-free survival and overall survival [ER negative HER2 negative and HER2 positive breast cancer patients experience shorter relapse-free survival and overall survival] and the nodal status is the strongest prognostic predictor, we highlighted a very complex interaction network between the primary tumour, the nodal status and the distant metastases. The molecular subtype classification is one determinant of this network.
Finally, we showed that biologically-driven analyses are able to build new models with higher performance in terms of breast cancer axillary status prediction. The molecular subtype classification is the first stratification level of breast carcinoma and strongly interacts with the axillary and distant metastasis process. Large integrative analyses have to be performed to explain why ER negative HER2 negative tumours have a low rate of axillary metastasis and a high rate of distant metastases, whereas HER2 positive tumours have a rate of axillary metastases strongly related to the tumour size and a high rate of distant metastases.
|
v3-fos-license
|
2021-05-21T16:56:31.796Z
|
2021-04-20T00:00:00.000
|
234850862
|
{
"extfieldsofstudy": [
"Mathematics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://downloads.hindawi.com/journals/jfs/2021/5597947.pdf",
"pdf_hash": "bfe46f3c20aa0c42f9cb4bbfbfd607b11d20de52",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42352",
"s2fieldsofstudy": [
"Mathematics"
],
"sha1": "d0c26bd2802971dcd5d4bcfcf2cd8a1a6081925b",
"year": 2021
}
|
pes2o/s2orc
|
On the Oscillation Criteria for Fourth-Order p-Laplacian Differential Equations with Middle Term
In this paper, we study the oscillatory properties of the solutions of a class of fourth-order p-Laplacian differential equations with middle term. The new oscillation criteria are obtained by using the theory of comparison with first- and second-order differential equations and a refinement of the Riccati transformations. The results in this paper improve and generalize the corresponding results in the literature. Three examples are provided to illustrate our results.
In this paper, motivated by [26][27][28], we will give some new sufficient conditions for the oscillatory behavior of (1). In Section 2, we will provide some lemmas that will help us to prove our main results. In Section 3, based on the comparison with first- and second-order differential equations and a refinement of the Riccati transformations, we establish some new oscillation criteria for (1).
Preliminaries
First, we give the following lemmas, which will be used in the proofs of our main results.
Lemma 4 ([31], Lemma 2.3).
Assume that α is a quotient of odd positive integers, and that V > 0 and U ∈ ℝ are constants. Then

Uy − Vy^((α+1)/α) ≤ (α^α / (α+1)^(α+1)) · (U^(α+1) / V^α).

The following lemma will be used in the proof of our main results in the next section.
Main Results
In the following theorem, we use a comparison strategy involving first-order differential equations to provide an oscillation criterion for equation (1).
For convenience, let
Theorem 7. Assume that (H1) and the conditions stated above hold. If the first-order delay differential equation (20) is oscillatory for some μ ∈ (0, 1), then the differential equation (1) is oscillatory.
Proof. Assume that (1) has a nonoscillatory solution on [t0, ∞). Without loss of generality, we may let x be an eventually positive solution of (1). Then there exists t1 ≥ t0 such that x(t) > 0, x(τ(t)) > 0, and x(σ(t)) > 0 for t ≥ t1. Let ω be defined as indicated, which, having (1) in mind, gives the corresponding estimate. From the definition of z(t), one obtains a further inequality, and by repeating the same process we obtain another. Setting n = 3 in Lemma 2, we obtain z(t) ≥ (1/3) t z′(t), which implies that z(t)/t³ is nonincreasing. Moreover, the fact that τ(t) ≤ t then gives (25). Combining (24) and (25) yields (26). Combining equations (1) and (26), we obtain (27). Since z is positive and increasing (by Lemma 5), we have lim t→∞ z(t) ≠ 0. So, from Lemma 3, one has (28) for some μ ∈ (0, 1). It follows from (27) and (28) that, for all μ ∈ (0, 1), ω is a positive solution of a first-order delay differential inequality.
It is well known (see [33] and Theorem 7) that the corresponding equation (20) also has a positive solution, which is a contradiction. The theorem is proved.
It is well known (see [33] and Theorem 7) that the corresponding equation (31) also has a positive solution, which is a contradiction. The theorem is proved.
Proof. The proof is by contradiction. Assume that z″(t) > 0. From Lemma 2, we obtain the corresponding estimate. Integrating the preceding relation from σ(t) to t, one finds the next inequality. Letting h(t) = z′(t) in Lemma 3 then gives (40), for all ε1 ∈ (0, 1) and every sufficiently large t. Now, we define a function ϕ by (41). By differentiating (41) and using the inequalities (39) and (40), we get the estimate that follows. Since z′(t) > 0, there exist t2 ≥ t1 and a constant M > 0 such that z(t) > M for all t ≥ t2. Without loss of generality, we may let M ≥ 1. By using Lemma 4 with a suitable choice of U and V, we obtain
This implies an inequality that contradicts (37). The proof is completed.
Proof. We argue by contradiction. Assume that (1) has a nonoscillatory solution on [t0, ∞). Without loss of generality, we only need to consider positive solutions of equation (1). Then there exists t1 ≥ t0 such that x(t) > 0, x(τ(t)) > 0, and x(σ(t)) > 0 for t ≥ t1. From Lemmas 4 and 11, one has the following.
|
v3-fos-license
|
2024-05-18T15:24:59.476Z
|
2024-05-01T00:00:00.000
|
269828981
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2073-4425/15/5/616/pdf?version=1715497928",
"pdf_hash": "4ed494b1e2ad00fd0ecad7606f0bacedc583bb0f",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42353",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "4187cdaf123ed63ce1ed636728fecb3132002eb4",
"year": 2024
}
|
pes2o/s2orc
|
Normal Ovarian Function in Subfertile Mouse with Amhr2-Cre-Driven Ablation of Insr and Igf1r
Insulin receptor signaling promotes cell differentiation, proliferation, and growth which are essential for oocyte maturation, embryo implantation, endometrial decidualization, and placentation. The dysregulation of insulin signaling in women with metabolic syndromes including diabetes exhibits poor pregnancy outcomes that are poorly understood. We utilized the Cre/LoxP system to target the tissue-specific conditional ablation of insulin receptor (Insr) and insulin-like growth factor-1 receptor (Igf1r) using an anti-Mullerian hormone receptor 2 (Amhr2) Cre-driver which is active in ovarian granulosa and uterine stromal cells. Our long-term goal is to examine insulin-dependent molecular mechanisms that underlie diabetic pregnancy complications, and our conditional knockout models allow for such investigation without confounding effects of ligand identity, source and cross-reactivity, or global metabolic status within dams. Puberty occurred with normal timing in all conditional knockout models. Estrous cycles progressed normally in Insrd/d females but were briefly stalled in diestrus in Igf1rd/d and double receptor (DKO) mice. The expression of vital ovulatory genes (Lhcgr, Pgr, Ptgs2) was not significantly different in 12 h post-hCG superovulated ovaries in knockout mice. Antral follicles exhibited an elevated apoptosis of granulosa cells in Igf1rd/d and DKO mice. However, the distribution of ovarian follicle subtypes and subsequent ovulations was normal in all insulin receptor mutants compared to littermate controls. While ovulation was normal, all knockout lines were subfertile suggesting that the loss of insulin receptor signaling in the uterine stroma elicits implantation and decidualization defects responsible for subfertility in Amhr2-Cre-derived insulin receptor mutants.
Introduction
Insulin receptors are critical activators of several transcriptional pathways and cellular and physiological processes, such as the cell cycle, survival, cell migration, proliferation, and differentiation [1,2]. Insulin receptors (IRs) are tyrosine kinase receptors with four domains. Insr and Igf1r have high homology in their ligand binding domains, thus allowing alternative ligands, insulin, and insulin-like growth factor-1 to alternatively bind to the receptors [3]. Thus, while INSR and IGF1R have cognate binding partners, our work and others have shown that when one receptor is nonfunctional, the other may effectively bind multiple ligands to preserve function at least partially.
Insulin signaling is essential for reproductive physiology in both males and females. For this work, we will outline the role of insulin signaling in the female reproductive tract, specifically in the ovaries. Similar patterns of hormone cyclicity occur in humans and mice during the menstrual and estrous cycles, respectively. An essential process in female reproductive biology is the maturation and release of oocytes from the ovary, which is dependent on well-characterized gonadotropin and steroid hormone stimulation. The primary hormones guiding follicle growth and ovulation are follicle-stimulating hormone (FSH), estrogen (E2), progesterone (P4), inhibin, and luteinizing hormone (LH), but the mechanism of insulin and insulin-like hormones is less clear.
When defective, insulin signaling in the female reproductive system can lead to subfertility or infertility due to ovarian and/or uterine dysfunction. Diabetic women have higher risks and rates of reproductive issues, including infertility and pregnancy complications. The Centers for Disease Control report from 2022 indicated that 11% of women in the U.S. aged 20 years and older have diabetes. Diabetic women have higher risks of ovarian dysfunction, such as enlarged ovaries and potentially reduced ovarian reserve, than women without diabetes [4]. Additionally, there are higher rates of menstruation issues, including amenorrhea, oligomenorrhea, and menorrhagia, in diabetic women [5,6]. Prevalent complications during pregnancy include pre-eclampsia, spontaneous abortion, and neonatal hyperglycemia [7,8]. Offspring from pregnancies complicated by gestational diabetes and/or hyperglycemia are at increased risk for metabolic and cardiovascular complications [9]. As type 2 diabetes continues to develop in younger people, the prevalence of women with diabetes in their childbearing years is increasing [10]. Obesity is correlated with insulin resistance and elevated insulin and glucose levels.
Insulin signaling can be dysregulated in cumulus cells in obese and infertile women with polycystic ovarian syndrome without recognizable insulin resistance.The cumulus granulosa-oocyte complex (COC) is altered due to hyperglycemic or hypoinsulinemic conditions.COC defects in type 1 diabetic murine models are characterized by a decrease in oocyte size and meiotic delay, increased apoptotic granulosa cells, an upregulated expression of death signaling proteins, and a decrease in critical gap junction proteins that maintain communication between the oocyte and surrounding nurse cells [11].
Folliculogenesis is promoted by insulin in a stage-specific manner through AKT signaling in vitro [12]. However, studies examining hyperinsulinemia, particularly in women with polycystic ovarian syndrome (PCOS), indicate that granulosa cells become resistant to FSH stimulation. When these patients were administered insulin and pioglitazone treatments for five months, granulosa cell responsiveness increased [13]. This indicates that granulosa cells are sensitive to insulin resistance, and ovarian function, such as ovulation, is impaired in insulin-resistant environments. Furthermore, in obese women undergoing IVF treatments, hyperinsulinemia in granulosa cells decreased FSH-stimulated functions, such as aromatase activity, and reduced the expression of p-Akt2 [14]. Insulin can sufficiently stimulate E2 and P4 production in granulosa cells [15]. Indeed, even in PCOS patients with peripheral hyperinsulinemia, insulin stimulates E2 and P4 production. These results suggest that insulin signaling and optimal insulin levels are essential for proper granulosa cell function and the related ovarian functions of cyclicity and pregnancy maintenance.
The Stocco group conditionally ablated Igf1r using a combination of Esr2-Cre and Cyp19-Cre and found female mice were infertile due to a block in antral follicle formation leading to ovulation failure [16].We subsequently characterized a female reproductive tract conditional ablation of Insr and Igf1r using progesterone receptor Cre (Pgr-Cre) where granulosa cell-specific ablation occurred after the block revealed in the prior study.These mice exhibited subfertility in Igf1r mutants and complete infertility in Insr/Igf1r double mutants (DKO) [17].The infertility was characterized by a significant reduction in ovulation (50% relative to controls) with follicles exhibiting trapped oocytes in partially luteinized tissue that were not released and ultimately underwent atresia.Further, while the uterus appeared normally receptive, oocytes that were ovulated and fertilized never implanted in DKO, and the fecundity of Igf1r mutants was severely reduced [18].
Here, we evaluate fertility and ovarian function in the Amhr2-Cre-mediated ablation of Insr, Igf1r, and double receptor knockout female mice.Amhr2-Cre-driven ablation should occur during the secondary to antral follicle transition, placing the timing of insulin receptor dysfunction in between the two prior models.Our findings demonstrate that while these mice are subfertile, the loss of insulin-dependent signaling in granulosa cells during this window elicited by Amhr2-Cre is not sufficient to impede ovarian cyclicity, folliculogenesis, and ovulation.
Mice
All mice used in this study were maintained on a C57BL/6 genetic background. All animals were housed under a 12:12 light-dark cycle at 70% humidity. Genotyping was conducted by collecting genomic DNA from tails and toes at 5-8 days of age (Table S1: PCR primers), as described previously [17]. Mice with floxed alleles for Insr (#006955, [19]) and Igf1r (#012251, [20]) were obtained from the Jackson Laboratory. Esr2-Cre mice [21] were kindly provided by Jay Ko (University of Illinois) but are now available from the Jackson Laboratory (#028717). Pgr-Cre mice [22] were kindly provided by Dr. John Lydon (Baylor College of Medicine). Amhr2-Cre mice (014245-UNC, [23]) were obtained from the MMRRC as frozen embryos and rederived by the WSU animal production core.
Estrous Cycle Assessment
Vaginal smears were collected for 30 days beginning seven days post vaginal opening for each genotype.Vaginal canals were flushed three times with 1 × PBS and transferred to a microscope slide at the same time each morning.Cell composition was observed and scored under bright field microscopy as previously described [24].Estrous stages were classified as follows: proestrus, primarily nucleated and some cornified cells; estrus, primarily cornified epithelial cells; metestrus, cornified epithelial and leukocyte cells; and diestrus, primarily leukocytes.
Superovulation and Ovulation Assessment
Ovulation was induced in mice through an injection of equine chorionic gonadotropin [eCG (PMSG), Biovendor RP178272, Asheville, NC, USA] followed by a single injection of human chorionic gonadotropin (hCG, C0434 Sigma, St. Louis, MO, USA), as described previously [17,25]. Briefly, female mice weighing ~15 g (aged 21-28 days) were injected with 5 IU eCG and, 48 h later, with a single injection of 4 IU hCG. Mice were euthanized and collected at 12 h post-hCG injection for molecular and histological studies. For cumulus-oocyte complex (COC) retrieval, oviducts were removed and transferred to a dish containing PBS. The ampulla was mechanically burst and flushed of COCs using PBS injected via a 30 G blunt-end needle. COCs were transferred to M2 media (Sigma M7167) at room temperature and imaged. One ovary was snap-frozen in liquid nitrogen and stored at −80 °C for RNA isolation with Trizol (Invitrogen, Waltham, MA, USA) according to the manufacturer's instructions, and the other was preserved in 4% paraformaldehyde for embedding and histological analyses, as described previously [17,25].
Follicle Analysis
Ovaries were serially sectioned at 5 µm, and every 5th section was transferred to a microscope slide. Forty sections were obtained per sample. Sections were stained with hematoxylin and eosin and counted as previously described [13]. Images were processed using the ImageJ Cell Counter plug-in. Primordial follicles were defined as type 1, primary as type 2, secondary as type 3, antral as type 4, and corpus luteum as type 5. Scored image uploads generated by the algorithm were manually verified for consistency by two observers.
Apoptosis Assay
Apoptotic cells were identified in ovarian sections using the terminal deoxynucleotide transferase dUTP nick end labeling (TUNEL) assay that was performed according to the manufacturer's instructions using the ApopTag ® Fluorescein In Situ Apoptosis Detection Kit (S7110; Millipore, Burlington, MA, USA).Apoptotic cells were counted using the ImageJ (Version 1.54h): Cell counter plug-in.Antral follicles with at least one positive cell were scored as positive, and the total apoptotic cells were counted within each positive follicle.
Quantitative Real-Time RT-PCR (qPCR) Analysis
RNA was extracted from whole ovarian tissue homogenized in 500 µL TRIZOL. RNA was extracted using Phasemaker™ tubes (Thermo Fisher, Waltham, MA, USA) following the manufacturer's protocol. cDNA was synthesized from 3 µg of total RNA using the High-Capacity cDNA Reverse Transcription Kit (Applied Biosystems, Waltham, MA, USA). Relative mRNA expression was analyzed on the BIO-RAD CFX Opus 96 Real-Time PCR System using the corresponding Applied Biosystems SYBR Green Master Mix. Relative expression was normalized against Rpl19 using the 2^−ΔΔCT method. Gene-specific primers are shown in Table S1 and were designed and used as previously described [17].
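The 2^−ΔΔCT calculation itself is simple to reproduce. The following minimal Python sketch is illustrative only and is not the authors' pipeline; the Ct values are invented. It shows how a fold change is derived from raw Ct values normalized to Rpl19 and to the control group.

```python
def ddct_fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method.

    ct_* are raw qPCR cycle-threshold values; here the reference gene would be
    Rpl19 and the control group the Cre-negative littermates.
    """
    dct_sample = ct_target_sample - ct_ref_sample    # normalize to reference gene
    dct_control = ct_target_control - ct_ref_control
    ddct = dct_sample - dct_control                   # normalize to control group
    return 2.0 ** (-ddct)

# Toy numbers for illustration only
print(ddct_fold_change(24.1, 18.0, 23.0, 18.2))  # ~0.4-fold relative to control
```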
Statistical Analysis
All qPCR, histological measurement, and fertility assessment data were subjected to one-way ANOVA in Prism 9.0 (GraphPad, San Diego, CA, USA). Comparisons of means between two groups were conducted using t tests, and differences between individual means of multiple grouped data were tested by a Tukey multiple-range post-test. All data met the necessary criteria for ANOVA, including equal variance as determined by Bartlett's test. All experimental data are presented as the mean ± SEM. Unless otherwise indicated, a p value of less than 0.05 was considered statistically significant.
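For illustration only (the authors used Prism), the same workflow of Bartlett's test for equal variance, one-way ANOVA, and a Tukey post-test can be sketched in Python; the litter-size values below are hypothetical.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical litter-size data per genotype
control = np.array([8, 9, 7, 8, 9, 8])
insr    = np.array([6, 7, 6, 5, 7, 6])
igf1r   = np.array([5, 6, 5, 4, 6, 5])
dko     = np.array([5, 5, 4, 6, 5, 4])

# Equal-variance check analogous to Bartlett's test
print(stats.bartlett(control, insr, igf1r, dko))

# One-way ANOVA across the four genotypes
f_stat, p_val = stats.f_oneway(control, insr, igf1r, dko)
print(f"ANOVA: F={f_stat:.2f}, p={p_val:.4g}")

# Tukey multiple-comparison post-test
values = np.concatenate([control, insr, igf1r, dko])
groups = (["control"] * len(control) + ["Insr d/d"] * len(insr)
          + ["Igf1r d/d"] * len(igf1r) + ["DKO"] * len(dko))
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```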
Fertility Analysis of Female Mice with Conditional Ablation of Insulin Receptors
To examine the impact of the loss of INSR and IGF1R signaling on female fertility in a way that avoids the severe defects of global receptor deletion [26][27][28] and the spatial and temporal fertility blocks identified by previous conditional knockout studies [16][17][18], we used Amhr2-Cre to ablate Insr (Insr d/d) and Igf1r (Igf1r d/d) individually and eliminated the potential for redundancy from receptor cross-activation by generating double receptor knockouts (DKO). We found no differences in mating behavior in receptor mutants relative to controls (Figure 1A). While INSR and IGF1R were significantly ablated specifically in the uterine stroma, no differences in gross uterine morphology or diameter at estrus were observed, and a normal distribution of uterine glands was present. All three mutant lines were capable of generating pups, but there was a significant delay in the timing of birth in DKO mice, which held their litters 1.45 ± 0.37 days longer than control mice (Figure 1B). Mean litter sizes were significantly reduced in all three mutant lines, with Insr d/d producing two fewer pups per litter and Igf1r d/d and DKO mice producing three fewer pups per litter (Figure 1C).

The Amhr2-Cre driver is currently the best option for conditional knockout studies seeking to examine gene function in stromal cells of the uterus [23]. Recently, this Cre line has controversially been demonstrated to elicit a global deletion of some targets [29][30][31]. In our study, a spurious embryo-wide inactivation of Insr and Igf1r did not occur, as mothers were normal (i.e., not infertile dwarves as may be expected for complete insulin receptor mutants [27,32,33]), and we carefully genotyped mice to ensure the floxed allele existed outside of reproductive tissues. However, active Cre recombinase is present in secondary follicles of adult ovaries [34]. As prior studies revealed ovulatory defects when Insr and Igf1r were conditionally deleted in granulosa cells of primary follicles and in luteinizing granulosa cells, we sought to determine whether Amhr2-Cre-mediated deletion in the ovary could contribute to the observed subfertility.
Evaluation of INSR and IGF1R Receptor Ablation in Ovarian Granulosa Cells
In control animals, nearly all granulosa cells were found to be IGF1R-positive, and 80% were positive for INSR. Some INSR and IGF1R ablation was observed in secondary follicles and was most prevalent in antral follicles, in accordance with where Amhr2-Cre activity was expected to have been the most effective (Figure 2A). The levels of receptors were reduced but not fully ablated in granulosa cells of follicles in DKO mice. We did not examine single knockout mice, as the efficiency of ablation for each single gene was expected to be similar in DKO mice. INSR was eliminated in 27% of granulosa cells in each antral follicle, and IGF1R was eliminated in 36% of granulosa cells in each antral follicle (Figure 2B). Thus, we hypothesized that most INSR- and IGF1R-dependent processes may be preserved in Amhr2-Cre-derived conditional knockouts, as the majority still possessed one or both receptors, unlike our previous ablation model using the stronger Pgr-Cre, where receptor ablation was predominant throughout antral and periovulatory follicles and nearly complete in granulosa cells, and subsequent defects in luteinization and ovulation were observed [17]. To test our hypothesis, we assessed ovarian function in insulin receptor mutants using the most common physiological and molecular assays. We found no gross abnormalities in ovarian morphology or size. A histological analysis of ovarian sections from all four genotypes showed no obvious signs of developmental block. A subsequent quantitative analysis of follicles in DKO ovaries found no differences in the stage distribution relative to controls (Supplemental Figure S1A).
Analysis of Estrous Cyclicity and Ovulation in Conditional Knockout Females
While Amhr2-Cre is not expected to be active and ablate insulin receptors in the pituitary and there was no obvious block in folliculogenesis, we sought to rule out subtle anomalies in ovarian function by charting ovulatory cycles and subsequent ovulation from each genotype.First, we examined the progression of the stages of the estrous cycle and revealed normal hormone cyclicity and progression through estrous cycle stages.Representative patterns from each genotype are shown in Figure 3A.In contrast to mice with Igf1r ablation using Esr2-Cre where ovulation is inconsistent and stalls in metestrus for extended periods (Figure 3B), all Amhr2-Cre-derived mutants exhibited largely normal 3-4 day cycles.This is in agreement with our prior study examining insulin receptor ablation with Pgr-Cre (Figure 3C) which acts later and more strongly than Amhr2-Cre but ultimately did not impact estrous cyclicity despite oocytes not being released efficiently prior to luteinization [17].To further discern ovarian cyclicity, we quantified the time spent in each phase of the estrous cycle.No significant differences existed in the estrus, proestrus, or metestrus stages (Figure 3D).The exception was a mathematically significant prolonged diestrus stage in both Igf1r d/d and DKO mice.A similar pause at the diestrus stage was observed in Pgr-Cre-derived Igf1r d/d and DKO mice in our previous study [17].As those mice exhibited a 50% reduction in ovulation, we quantified the number of cumulus-oocyte complexes in the oviduct.To rule out an upstream disruption of ovulation and to synchronize the mice, we used exogenous gonadotropins to hyperstimulate ovulation and assess key genes that promote ovulation.
cKO Females Respond to Exogenous Hormone Supplementation
To determine whether the normal estrous cycles we observed indeed correlated to successful ovulation, we superovulated female mice, excised their oviducts, and flushed cumulus-oocyte complexes (COCs) for morphological assessment and quantification. All three conditional knockout lines produced COCs that had intact granulosa cell layers and were indistinguishable from control COCs. There were no significant differences in the number of COCs retrieved from any group (Figure 4A). We never observed oocytes trapped in corpora lutea in Amhr2-Cre DKO ovaries, in contrast to the Pgr-Cre DKO mice in which they were found frequently [17]. Indeed, the number of COCs retrieved matched very closely with the number of CL present on the ovarian surface, with COC/CL ratios of 91 ± 1.5% in control animals and 87 ± 3.2% in DKO mice (n = 5). In addition to the absence of trapped oocytes, luteinization appeared more uniform within CL cross-sections as assessed by immunohistochemistry for the luteal cell marker HSD17B7 (Supplemental Figure S1B). In Pgr-Cre DKO mice, HSD17B7 staining was sporadic, indicating an altered timing of luteinization, and those mice exhibited a reduction in progesterone production and the downregulation of ovulation-promoting factors and several enzymes in the progesterone synthesis pathway [17]. We similarly assessed the ovarian expression of Pgr, Lhcgr, and Ptgs2 at 12 h post-hCG in hyperstimulated mice and found no significant difference between control, single receptor, and DKO mice (Figure 4B). A modest but statistically insignificant increase in Star expression was observed in Igf1r d/d and DKO mice (Figure 4C). However, all six genes examined in the subsequent steroid hormone synthesis pathway were unaltered in any knockout mouse line.

Prior insulin receptor ablation studies using Esr2-Cre observed a significant decline in follicle health, with widespread apoptosis contributing to the failure of antral follicle development [16]. In our Pgr-Cre-mediated knockout study, there was no significant increase in atretic follicles in DKO mice, and granulosa cell apoptosis, while commonly observed, was highly variable such that no follicle type exhibited an increase in TUNEL-positive cells relative to controls [17]. To further determine follicle quality in Amhr2-Cre Insr and Igf1r conditional knockouts, we counted the number of apoptotic follicles per tissue section in each genotype and found no differences in the prevalence of follicles with TUNEL-positive cells in any genotype. However, when the number of apoptotic cells was counted per positive follicle, there was a ~3-fold increase in the number of positive cells in antral follicles of Igf1r d/d and DKO mice (Figure 5). Only a few follicles exhibited an extreme proportion (i.e., one-third) of dying granulosa cells, in agreement with overall normal ovarian function and ovulation in all three mutant genotypes.
Discussion
Folliculogenesis and ovulation are regulated by the functional hormone cyclicity of LH, FSH, E2, and P4. Disruption of this process can occur through metabolic disorders, diabetes (T1D, T2D, GDM), and PCOS. Many studies have shown the relationship between metabolic dysregulation and ovarian defects, such as abnormal menstruation, amenorrhea, oligomenorrhea, and menorrhagia, in diabetic women [4,5,35]. These studies used diabetic mouse models or human diabetic patients and characterized whole-body insulin dysregulation. However, few studies have investigated ovarian tissue with altered insulin signaling while keeping the peripheral insulin-dependent pathways of the body functioning normally. In this study, we conditionally deleted Insr and Igf1r using Amhr2-Cre, which is active in granulosa cells of the ovary and in uterine stromal cells. Amhr2-Cre is presently the best option for eliciting conditional gene ablation in uterine stromal cells, but the potential confounding effects of ovarian deletion on female fertility must be accounted for to completely characterize subfertility and infertility in subsequent mutant mice.
We expected that Amhr2-Cre would sufficiently ablate insulin receptors in the ovary to elicit an ovulation defect, based on prior studies from our lab and others. The conditional ablation of Igf1r using Cyp19-Cre and Esr2-Cre was performed to test the hypothesis that IGF1R signaling was essential for antral follicles to respond to FSH in vivo [16]. The ablation of Igf1r with either Cre driver resulted in significant subfertility, and the combination of both in infertility. While FSH receptor expression was not altered, the FSH-dependent control of granulosa proliferation and differentiation was crippled, and follicles did not progress to the antral stage, resulting in ovulation failure. Our subsequent work showed that the Pgr-Cre ablation of both Insr and Igf1r was necessary to eliminate potential masking of ovarian phenotypes due to the cross-reactivity of INS or IGF ligands with their non-cognate receptor when their high affinity partner was absent [17]. In these mice, the later activity of Pgr-Cre allowed Igf1r d/d mice to skip the block observed by the Stocco group [16,17]. Still, these mice exhibited substantial subfertility in single receptor mutants, and DKO mice were completely infertile. Both of these Cre models were characterized by elevated granulosa cell apoptosis, a reduced expression of ovulation-promoting genes, a reduced expression of steroidogenesis enzymes, and ultimately compromised estradiol and progesterone production. However, in contrast to the Esr2-Cre ablation of insulin receptor signaling, this did not elicit abnormal estrous cycles in Pgr-Cre mutant lines, similar to our findings in this study after Amhr2-Cre ablation.
We did find that the loss of INSR and IGF1R in ~one-third of antral follicle granulosa cells did correlate with a 3-fold increase in the incidence of apoptotic cells in each follicle.However, this mosaic deletion and subsequent loss of mural granulosa cells did not appear to impact follicle health significantly as there was no increase in atretic follicle counts in any genotype.Similar follicular distribution and the lack of induced atresia were also features of the stronger but later acting Pgr-Cre used in our prior study [17].We did not assess whether the loss of insulin signaling and granulosa cell death impacted subsequent thecal or luteal cell development.However, the lack of change in any of the steroidogenesis factors and ovulation-promoting genes would indicate that developing follicles progressed normally and could be capable of supporting oocyte maturation to fertilization competency.Follicles reaching the periovulatory phase were found to extrude their oocytes with maximal efficiency in Amhr2-Cre DKO as no trapped oocytes were observed within corpora lutea.Like Pgr-Cre, Amhr2-Cre could potentially ablate INSR and IGF1R in the oviduct.We did not assess this histologically, but there was no evidence that the oviduct was less coiled, shorter, or retained oocytes/embryos in either our prior study or this one.
We anticipated that ovulation might stall in Amhr2-Cre DKO mice, as we observed in Pgr-Cre DKO mice [17].However, all lines of evidence presented in this paper suggest that ovulation is largely normal in Insr d/d , Igf1r d/d , and DKO mice due to the relatively reduced activity of Amhr2-Cre and greater residual expression of INSR and IGF1R in secondary and antral follicles after conditional ablation.
In agreement with prior studies, the disruption of the IGF1R axis has a greater impact on reproductive parameters than disruption of INSR alone. However, differences between single mutants and DKO mice indicate that some redundancy is present that should be investigated further. Taken together, our present findings indicate that the Amhr2-Cre model is likely useful for examining insulin receptor action in implantation and post-implantation developmental processes, which we are presently pursuing.
Figure 1 .
Figure 1. Female insulin receptor mice are subfertile. Control females and those lacking one or both insulin receptors were bred to males of established fertility. (A) All female mice presented with post-coitus vaginal plugs to indicate successful mating. Data shown are the mean delay for the appearance of the first plug (n = 10-15). (B) After mating, the pairs were separated, and the gestational length was measured from plug date to birth across all genotypes (n = 8-9). (C) The number of pups produced per litter from successful pregnancies (n = 7-16). (A-C) Bar heights indicate the mean ± SEM for each genotype. Letters denote means that are significantly different (p < 0.05), one-way ANOVA with a Tukey multiple comparison post-test.
Figure 2 .
Figure 2. INSR and IGF1R are reduced in granulosa cells of DKO ovaries. (A) INSR and IGF1R residual proteins were detected by immunohistochemistry with antibodies specific to each receptor. As differences were not easy to visualize in full ovarian cross-sections, positive and negative granulosa cells were counted in individual secondary and antral follicles in high magnification images (n = 4 animals/genotype). Red arrows indicate granulosa cells where receptor expression is absent. (B) DKO mice exhibited a decrease in the percentage of INSR- or IGF1R-positive granulosa cells in antral follicles. Data are expressed as the mean ± SEM of the average percentage of positive granulosa cells found in each follicle per animal (at least 10 follicles per animal). (C) An enlarged image from panel A is provided to distinguish positive and negative cells.
Figure 3 .
Figure 3. Estrous cycles were monitored for 30 days (n = 7-12). (A) Representative plots for each genotype are shown indicating the transition from metestrus (M) to estrus (E) to proestrus (P) and to diestrus (D). (B) For comparison, the ablation of Igf1r using Esr2-Cre is shown; this driver acts earlier than Amhr2-Cre in granulosa cells of activated follicles and in the developing ovary and results in ovulation failure. (C) For comparison, the ablation of both Insr and Igf1r using Pgr-Cre is shown; this driver acts later than Amhr2-Cre in granulosa cells of activated follicles and in the developing ovary and results in partial ovulation impairment. (D) The number of days at each stage of the estrous cycle was plotted for each animal. Bar heights indicate the mean ± SEM for each genotype. Letters denote means that are significantly different (p < 0.05), one-way ANOVA with a Tukey multiple comparison post-test.
Figure 4 .
Figure 4. Ovulation is normal in Amhr2-Cre-mediated insulin receptor knockout mice.Mice were superovulated and sacrificed 12 h post-hCG for histological and molecular analyses.(A) Oviducts were extirpated, flushed with PBS, and retrieved COCs were counted (n = 8 control, n = 5 for each conditional knockout).(B) qPCR analysis of established ovulation-promoting genes in conditional insulin receptor knockout mice relative to control which was arbitrarily set to 1 (n = 10-12 per genotype).(C) qPCR analysis of rate-limiting enzyme for steroid synthesis, Star, and steroid hormone synthesis pathway genes relative to control which was arbitrarily set to 1 (n = 12 per genotype).(A-C).Bar heights indicate mean ± SEM for each genotype with no significant differences (p > 0.05) observed between control and mutants determined by one-way ANOVA with Tukey multiple comparison post-test.
Figure 5 .
Figure 5. Apoptosis is increased in antral follicles of Igf1r d/d and DKO mice. To assess follicle quality, a TUNEL assay was used to measure the frequency of apoptosis in control mice and those lacking Insr, Igf1r, or both. (A) Representative images of control and DKO mice with TUNEL-positive cells indicated by green fluorescence with DAPI counterstain. (B) The quantification of TUNEL-positive cells in antral follicles which contained at least one positive cell (n = 12 control, Insr d/d and DKO, n = 20 Igf1r d/d). Data are presented as the mean ± SEM. Letters denote means that are significantly different (p < 0.05), one-way ANOVA with a Tukey multiple comparison post-test.
|
v3-fos-license
|
2018-12-14T22:11:15.164Z
|
2015-11-03T00:00:00.000
|
55883343
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bop.unibe.ch/JEMR/article/download/2411/3607",
"pdf_hash": "38b88f0a2edec68b8d19c3ae4e82033db4191594",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42355",
"s2fieldsofstudy": [
"Computer Science"
],
"sha1": "38b88f0a2edec68b8d19c3ae4e82033db4191594",
"year": 2015
}
|
pes2o/s2orc
|
Robust Head Mounted Wearable Eye Tracking System for Dynamical Calibration
In this work, a new head mounted eye tracking system is presented. Based on computer vision techniques, the system integrates eye images and head movement, in real time, performing a robust gaze point tracking. Nystagmus movements due to vestibulo-ocular reflex are monitored and integrated. The system proposed here is a strongly improved version of a previous platform called HATCAM, which was robust against changes of illumination conditions. The new version, called HAT-Move, is equipped with accurate inertial motion unit to detect the head movement enabling eye gaze even in dynamical conditions. HAT-Move performance is investigated in a group of healthy subjects in both static and dynamic conditions, i.e. when head is kept still or free to move. Evaluation was performed in terms of amplitude of the angular error between the real coordinates of the fixed points and those computed by the system in two experimental setups, specifically, in laboratory settings and in a 3D virtual reality (VR) scenario. The achieved results showed that HAT-Move is able to achieve eye gaze angular error of about 1 degree along both horizontal and vertical directions.
Introduction
Eye Gaze Tracking (EGT) system includes a device able to continuously acquire and follow eye position over time and compute gaze point coordinates in the environment around the subject through an analytical relationship.In the current literature, EGTs have been exploited in different fields, ranging from the medical field (detecting the relationship between oculomotor characteristics and cognition and/or mental states) to user interaction (helping people with disabilities or level attention detection), from multimedia to product design (C.Morimoto & Mimica, 2005).Currently, most EGTs are based on the Video-OculoGraphy (VOG) technique, which is a method for tracking eye movements through computer vision techniques used to process eye images (Van der Geest & Frens, 2002).Through VOG, pupil position and iris landmarks are detected by means of image processing algorithms and used to calculate both eye rotation angles and the eye center.VOG-based EGTs can be classified into two main categories identified by the position of the camera, that is dedicated to acquiring eye images with respect to the user.In particular, if the camera is placed on a fixed support in front of the subject the systems are named remote EGTs, when placed on the head of the subject they are named Head Mounted EGTs (HMEGTs), i.e. portable EGTs.Of course, the choice between the two types of systems poses different technical and methodological challenges.Generally, in laboratory environment remote EGTs are employed.They allow a quite precise measure, but impose limitations in the kind of information that can be retrieved.Specifically, remote EGTs could make use of either the chin support to block the user's head resulting unsuitable for long term acquisitions but with a good accuracy, or intelligent algorithms such as Active Shape Model (ASM) (T.F. Cootes, Taylor, Cooper, & Graham, 1995) to detect the user's eye allowing for limited head movements.They often require really expensive high definition camera that make remote EGTs suitable for investigating oculomotor strategy in neurological investigation, (Meyer, B öhme, Martinetz, & Barth, 2006).On the contrary, HMEGTs have opened new scenarios for research and for markets where the user is free to move his head on which the equipment is directly mounted (Zhu & Ji, 2007).HMEGTs make it possible to investigate the eye-movement during natural tasks in uncontrolled environments (M.Land & Lee, 1994) and in real life scenarios.Although in the real life scenario some uncontrollable external factors such as illumination changes could make the experimental setting time-consuming and not easily repeatable, at the same time HMEGTs allow for dynamical monitoring even in case of time-variant experimental paradigms (Hayhoe & Ballard, 2005).One of the most significant approach to prevent luminosity change issues is based on infrared (IR) illumination.In particular, spectral (reflective) properties of the pupil, under near-IR illumination, are exploited to maximize the image-contrast (C.Morimoto, Amir, & Flickner, 2002;C. 
Morimoto & Mimica, 2005).Nevertheless, in order to combine the advantage of supervised laboratory settings with real-life-like environments one of the most interesting approach is based on virtual reality immersive space (Fahrenberg, Myrtek, Pawlik, & Perrez, 2007).Virtual Reality (VR) offers an excellent compromise between laboratory and natural world accounting for a systematic control of the stimuli and the variables involved.Given the advantages of immersive VR spaces, HMEGTs have opened up new exploiting directions of the human interaction over time (Henderson, 2003;Jacob & Karn, 2003) focussing on understanding and coding naturalistic behaviour (Hayhoe & Ballard, 2005).However, even though many researches exploited HMEGTs for studying user attention, their perception of surrounding objects, and user interest as well as eye pattern in affective stimulation obtaining good results, (Lanatà, Valenza, & Scilingo, 2013;de Lemos, Sadeghnia, Ólafsd óttir, & Jensen, 2008;Partala & Surakka, 2003;Lanata, Armato, Valenza, & Scilingo, 2011), the performances of these systems drastically decade when the user head is free to move for example in the investigation of VR space navigation (C.Morimoto & Mimica, 2005).In this context, this work aims at developing a new HMEGT (named HAT-Move) to be used either in real life or in completely immersive 3D-virtual reality worlds.
Head Movement Issue
The possibility of freely moving the head with HMEGTs requires a robust and reliable identification and tracking of the pupil center and gaze point.Generally, all of the eye tracking systems developed both for market and research purpose make use of an uneasyto-perform calibration procedure, that should be very accurate.As a matter of fact, the better is the calibration the better is the outcome of the EGT.In particular, given a certain number of points (i.e, calibration points) in the real world and fixed their coordinates on the acquired image, the calibration is an analytical relationship that maps the coordinates of the eye gaze computed by the system into the coordinates of the calibration points (Hartley & Zisserman, 2000)..This procedure can be differentiated by both the number of calibration points and the kind of mathematical models used to generate this relationship, (Ramanauskas, Daunys, & Dervinis, 2008).In the literature many efforts have been spent to improve gaze estimation in terms of increasing tolerance to head movements, and as a consequence to improve and simplify the calibration process (Cerrolaza, Villanueva, & Cabeza, 2008;Johnson, Liu, Thomas, & Spencer, 2007;Evans, Jacobs, Tarduno, & Pelz, 2012).In fact, in spite of several attempts of EGTs to enable head movements which can be found in the literature, the head movement keeps remaining an open issue.Babcock et al. introduced a projected grid of 9-points in front of the person, (Babcock, Pelz, & Peak, 2003), as a reference for calibrating the eye position with the camera scene image.Even though this system is designed to be used in natural task it requires many calibration steps during the experiments.Moreover, since a measure of the movement of the head is missing the recalibration process is based on human-operator expertise.Rothkopf et al. tried to use head movements to disambiguate the different types of eye movements when subjects move head and body (Rothkopf & Pelz, 2004).However, the performance of the algorithm declines for large motions in roll, and sometimes the algorithm fails completely.Data analysis showed that the recordings suffered from a significant noise level compared to experimental conditions in which the subject does not move neither the body nor the head, highlighting that much more complex patterns of eye and head movements had to be considered.Johnson et al. allowed relatively unrestricted head and/or body movements (Johnson et al., 2007) tracking them by a visual motion tracker but large errors were found.Moreover two relevant drawbacks have been reported: the eye tracker became less reliable at larger eye rotations and errors arose if there was a shift in the relative position 3D reference on the eye-tracker's visor and the participant's head.As a result of this background it is worthwhile noting that augmenting the eye trackers capacity of differentiating among the movements of the head and the eyes could improve their ability in performing an effective and robust gaze detection.In this view, HAT-Move aims at overcoming the state of the art proposing a novel method for integrating head movement contribution in a free to move eye gaze detection.
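To make the calibration mapping described above concrete, the sketch below shows one common realization: a second-order polynomial mapping from pupil-center coordinates to scene-image coordinates estimated by least squares from the calibration points. The 9-point layout and polynomial order are typical choices from the literature cited above, not necessarily those used in HAT-Move.

```python
import numpy as np

def fit_gaze_mapping(pupil_xy, scene_xy):
    """Least-squares second-order polynomial mapping from pupil coordinates
    (N x 2) to scene-image coordinates (N x 2), as used in point calibration."""
    x, y = pupil_xy[:, 0], pupil_xy[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coeffs, *_ = np.linalg.lstsq(A, scene_xy, rcond=None)   # (6 x 2) coefficient matrix
    return coeffs

def apply_gaze_mapping(coeffs, pupil_xy):
    x, y = pupil_xy[:, 0], pupil_xy[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    return A @ coeffs

# Hypothetical 9-point calibration: pupil centers and the known target positions
pupil = np.random.rand(9, 2) * 50 + 100
target = pupil * 4.0 + 30.0          # toy ground-truth relationship for the example
coeffs = fit_gaze_mapping(pupil, target)
print(apply_gaze_mapping(coeffs, pupil) - target)   # residuals close to zero
```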
Eye Movement
One of the crucial point of the proposed system is the ability in detecting and monitoring the eye movement.Specifically, eye movements can be classified as pursuit and smooth pursuit, saccades and nystagmus movement.Pursuit movements or smooth pursuit are eye movements used for tracking an object in movement, therefore a moving image has to remain constrained to the fovea to achieve a stable image seen by the user.Fovea is a small area of the retina with a very high visual acuity, and it covers about 2 degrees of visual angle.Saccades are rapid movements of eyes for scanning a visual scene.They are also present when the subject is fixating one point (Findlay, 2009).Nystagmus indicates involuntary eye movements.More specifically, when the head rotates about any axis, distant visual images are sustained by rotating eyes in the opposite direction on the respective axis.Physiologically, nystagmus is a form of involuntary eye movement that is part of the vestibulo-ocular reflex (VOR), characterized by alternating smooth pursuit in one direction and saccadic movement in the other direction (Westheimer & McKee, 1975).As a matter of fact the brain must turn the eyes so that the image of the fixed object falls on the fovea.Of note, the pursuit system keeps up the moving object in order to allow the brain to move the eyes in opposite direction to head motion, otherwise, image slip on the retina and a blurred image is produced.
In this work, we propose a strongly improved version of the previously proposed wearable and wireless eye tracking system (Armato, Lanatà, & Scilingo, 2013). The new system, HAT-Move, is comprised of only one light camera able to simultaneously capture the scene and both eyes of the subject through a mirror. The system calculates the gaze point in real time, both indoors and outdoors. Moreover, a precise and fast Inertial Motion Unit (IMU) was added to render it independent from involuntary head movements during the calibration phase, and robust to the head movements needed to focus on objects out of the visual space. Specifically, in this study we evaluated the contribution, in terms of angular error, of the head movement integration with respect to standard eye tracking. Moreover, the HAT-Move system inherits from the previous version the robustness against illumination variation, implementing a normalization of the illumination through a Discrete Cosine Transform (DCT).
Materials and Methods
The HAT-Move eye tracking system was developed in two configurations: the "baseball" hat (see fig. 1) and the head band (see fig. 2). They are technically and functionally equivalent, although the former can be considered aesthetically more pleasant. The system is comprised of a wireless camera that is light and small, with an Audio/Video (A/V) transmitter with a range of up to 30 m. The camera has a resolution of 628 x 586 pixels with an F2.0, D45 optic, and 25 frames per second (f.p.s.). In addition, the InfraRed (IR) filter, which is normally present in each camera, was removed and a wide-angle lens was added, allowing the view angle to be enlarged and natural infrared components to be acquired, increasing both the image resolution and the contrast between pupil and iris. This system is able to simultaneously record the visual scene in front of the subject and the eye position. This is achieved through a mirror (5 x 0.6 cm) placed in front of the user's head (see fig. 2). The system is completely customizable on the user's forehead (see fig. 2). In addition, a wireless Inertial Motion Unit is placed atop the head close to the azimuth rotation center (see fig. 3).

Figure 3. Representation of the head along with the IMU. In the figure, X_e, Y_e are the axes in the reference frame of the eye; X_cam, Y_cam, Z_cam are the axes in the reference frame of the camera; X_h, Y_h, Z_h are the axes in the reference frame of the center of the head where the IMU is placed; ψ_h, θ_h, φ_h are the Euler angles of the head rotation, while ψ_e, θ_e are the Euler angles of the eye rotation.

The IMU allows for the acquisition of head movements and rotations during natural activities, allowing for the correction of eye gaze estimation by taking into account both the movements during the calibration phase and the "Vestibulo-Ocular Reflex" (VOR) contributions. The adopted IMU provides the three rotation Euler angles of the head (see Fig. 3) with a sampling frequency of 100 Hz. The system is intended to be wearable, minimally invasive, capable of eye tracking and of estimating pupil size, lightweight, and equipped for wireless communication. Moreover, the system is designed to be attractive and aesthetic (in a baseball-like version), and able to process the eye gaze pattern in real time. The HAT-Move system uses a passive approach for capturing ambient light reflected by the eye (VOG). The customized image acquisition system is able to acquire natural light along with its IR components, which are already present in the natural light bandwidth. Therefore, the system offers the advantages of IR lighting, increasing the pupil-iris contrast while avoiding any possible eye injury due to artificial IR illuminators. The block diagram in Figure 4 shows the methodology used to process the acquired image, in which both eyes and scene are presented. The whole processing chain is comprised of a series of algorithms for the detection of the eye center, for the integration of the head rotation angles, and for the correction of involuntary head movements. Specifically, the eye center detection is achieved through the following steps: an eye region extraction algorithm, a photometric normalization algorithm for illumination, extraction of the pupil contour, and an ellipse fitting algorithm. Afterwards, once the center of the eye is detected, the Euler head angles together with the pupil center are integrated into the mapping function to map the eye center and movements into the image plane. The processing chain is fully described in the following sections.
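The exact compensation formula is not given here, but the role of the Euler head angles can be illustrated with a minimal, hypothetical sketch in which the IMU angles define a head rotation applied to the gaze direction estimated in the head frame. The rotation convention and function names below are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def head_compensated_gaze(gaze_dir_head, head_euler_deg, convention="zyx"):
    """Rotate an eye-gaze direction (unit vector in the head frame) into the
    world frame using the head Euler angles reported by the IMU.

    gaze_dir_head : 3-vector from the eye tracker, expressed in the head frame
    head_euler_deg: (psi, theta, phi) head angles in degrees, sampled at 100 Hz
    """
    r_head = R.from_euler(convention, head_euler_deg, degrees=True)
    gaze_world = r_head.apply(gaze_dir_head)
    return gaze_world / np.linalg.norm(gaze_world)

# Example: head yawed 10 degrees while the eye looks straight ahead along x.
print(head_compensated_gaze(np.array([1.0, 0.0, 0.0]), (10.0, 0.0, 0.0)))
# approximately [0.985, 0.174, 0.0]
```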
Extraction of the eye region
The region containing the eye must be extracted from the image in which both scene and eyes are simultaneously acquired (see fig. 5). It is obtained through an automatic detection of the rectangular area including the eye, named Region Of Interest (ROI) (see figs. 5 and 6). For this purpose a modified version of the Active Shape Model (ASM) was used. The ASM is an artificial intelligence algorithm generally used for the detection of objects by means of their shape (T.F. Cootes et al., 1995); in particular, several modified versions of this algorithm have been implemented over time for face detection applications. More specifically, after defining the "landmarks", i.e. distinguishable points present in every image such as the eye corner locations, the shape is represented by an x-y coordinate vector of those landmarks which characterize the specific shape. It is worth noting that the shape of an object does not change when it is moved, rotated or scaled, and the average Euclidean distance between the shape points is minimized through a similarity transformation in order to align one shape with another. Indeed, ASM is a recursive algorithm that starts from an initial tentative shape and adjusts the locations of the shape points by means of a template matching of the image texture around each point, aiming to adapt the initial shape to a global shape model. The entire search is repeated at each level of an image pyramid, from coarse to fine resolution; specific details can be found in (T. Cootes & Taylor, n.d.). In our study, after the HAT-Move is worn, the first thirty seconds are used to train the ASM and then to detect the ROI. Since the system is mounted on the head, the extracted ROI does not change throughout the experiment. In addition, only the red image component is converted to gray scale and used as input to the other processing blocks (see fig. 6). This image component is especially helpful in enhancing the contrast between pupil and background.
Illumination normalization
Illumination normalization relies on an algorithmic strategy to keep illumination conditions stable throughout the captured images. More specifically, environmental illumination changes are reflected in the acquired images as a variation of the eye representation in terms of intensity, thereby strongly reducing the contrast between the eye and its landmarks. The standard approach is based on the Retinex theory (E.H. Land & McCann, 1971), whereby the effect of a non-uniform illumination is eliminated, completely independently of any a-priori knowledge of the surface reflectance and light source composition. According to this theory, the image intensity I(x, y) can be simplified and formulated as follows:

$$I(x, y) = R(x, y) \cdot L(x, y)$$

where R(x, y) is the reflectance and L(x, y) is the illuminance at each point (x, y). The luminance L is assumed to contain the low frequency components of the image, while the reflectance R mainly includes the high frequency components. The adopted technique operates in the Discrete Cosine Transform (DCT) domain, attenuating the low-frequency components associated with illumination. Figure 7 shows the output of the DCT algorithm applied to the gray scale image reported in figure 6.
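The normalization algorithm itself is not reproduced in the text; the following is a minimal sketch of a DCT-based illumination normalization of the kind described above, assuming the common approach of discarding low-frequency coefficients of the log-image (function and parameter names such as n_low are illustrative, not taken from the original implementation).

```python
import numpy as np
from scipy.fft import dctn, idctn

def normalize_illumination(gray, n_low=3, eps=1.0):
    """Attenuate slowly varying illumination by zeroing low-frequency
    DCT coefficients of the log-image, then mapping back to intensities."""
    log_img = np.log(gray.astype(np.float64) + eps)   # I = R*L  ->  log I = log R + log L
    coeffs = dctn(log_img, norm="ortho")
    dc = coeffs[0, 0]                                  # keep overall brightness
    coeffs[:n_low, :n_low] = 0.0                       # discard low-frequency (illumination) terms
    coeffs[0, 0] = dc
    out = np.exp(idctn(coeffs, norm="ortho"))
    out = (out - out.min()) / (out.max() - out.min() + 1e-12)
    return (out * 255).astype(np.uint8)                # rescaled for display / thresholding
```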
Pupil tracking and ellipse fitting
This section deals with the method used to extract the pupil contours. The method comprises several blocks in which the acquired eye image is first binarized in order to separate the pupil from the background by using a threshold on the image histogram; then a geometrical method is used to reconstruct the pupil contour and to remove outliers belonging to the background; details of this algorithm can be found in (Armato et al., 2013). Following the geometrical detection of the points belonging to the pupil, an ellipse fitting algorithm is applied for pupil contour reconstruction and for detecting the center of the eye. In the literature, the ellipse is considered to be the best geometrical figure representing the eye, since the eye image captured by the camera is a projection of the eye in the mirror. Over the last decade many ellipse fitting algorithms have been proposed (Forsyth & Ponce, 2002; Bennett, Burridge, & Saito, 2002), although most work offline. In our system we used the Least Squares (LS) technique, which is based on finding the set of parameters that minimizes the distance between the data points and the ellipse (Fitzgibbon, Pilu, & Fisher, 2002). According to the literature this technique fulfills the real-time requirement (Duchowski, 2007). Specifically, we follow the algorithm proposed by Fitzgibbon et al., known as B2AC, a direct computational method based on the algebraic distance with a quadratic constraint, in which Gaussian noise is added for algorithm stabilization (Maini, 2005). Afterwards, the center of the eye is computed as the center of the fitted ellipse. A detailed description of the methods can be found in (Armato et al., 2013). Figure 8 shows the result of the pupil tracking and the ellipse fitting algorithm for reconstructing the pupil contours.
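As an illustration of the direct least-squares approach referenced above, the sketch below fits a conic to candidate pupil-contour points under the ellipse constraint 4ac − b² = 1 and returns the ellipse center used as the eye center. It is a simplified, unstabilized version of the B2AC idea (no added Gaussian noise, no numerically stabilized block decomposition), so it should be read as illustrative rather than as the authors' implementation.

```python
import numpy as np
from scipy.linalg import eig

def fit_ellipse_direct(x, y):
    """Direct least-squares conic fit with the ellipse constraint 4ac - b^2 = 1.
    Returns coefficients [a, b, c, d, e, f] of a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0."""
    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    S = D.T @ D                       # scatter matrix
    C = np.zeros((6, 6))              # constraint matrix encoding 4ac - b^2
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    vals, vecs = eig(S, C)            # generalized eigenproblem S a = lambda C a
    vals = np.real(vals)
    ok = np.isfinite(vals) & (vals > 0)   # the ellipse solution has the positive finite eigenvalue
    a = np.real(vecs[:, np.argmax(ok)])
    return a / np.linalg.norm(a)

def ellipse_center(coeffs):
    """Center of the fitted conic, used here as the estimated pupil/eye center."""
    a, b, c, d, e, _ = coeffs
    den = 4 * a * c - b**2
    return (b * e - 2 * c * d) / den, (b * d - 2 * a * e) / den
```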
Mapping of the position of the eye
The mapping procedure aims at associating the instantaneous position of the center of the eye to a point of the scene. This point is named gaze point. The procedure is mainly based on a mathematical function, named mapping function, which is an equation system constituted of two second order polynomial functions (C.H. Morimoto, Koons, Amir, & Flickner, 2000), defined as:

$$x_{si} = a_{11} + a_{12} x_{ei} + a_{13} y_{ei} + a_{14} x_{ei} y_{ei} + a_{15} x_{ei}^2 + a_{16} y_{ei}^2 \quad (3)$$
$$y_{si} = a_{21} + a_{22} x_{ei} + a_{23} y_{ei} + a_{24} x_{ei} y_{ei} + a_{25} x_{ei}^2 + a_{26} y_{ei}^2 \quad (4)$$

where x_si, y_si are the coordinates of a point on the image plane (i.e. the coordinates of the point on the screen mapped into the image plane captured by the camera), and x_ei, y_ei are the coordinates of the center of the eye coming from the ellipse fitting block, also referred to the image plane. The procedure solves this equation system by means of a calibration process.
Once the system is positioned onto the subject's head so that eyes and scene are simultaneously present in the image captured by the camera, the user is asked to look at some specific points on the screen (calibration process). These points are identified by coordinates s_i = (x_si, y_si) referred to the image plane, i.e. the image captured by the camera (see fig. 5). Since the coordinates of the calibration points are known, solving the equation system requires computing the unknown coefficients a_11...a_16 and a_21...a_26. This is possible because each calibration point defines 2 equations; considering a 9-point calibration process, the system is over-constrained, with 12 unknowns and 18 equations, and can be solved using the Least Squares Method (LSM). Head movements mainly affect the calibration process, producing movement artifacts that degrade the eye estimation and consequently the point of gaze as well. Two different problems related to these movements arise. The first consists of the modification of the image plane position, which follows the head rotations since the camera is attached to the forehead, while the second is due to involuntary movements.
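A compact way to see the calibration step is as an ordinary least-squares problem: each of the 9 calibration points contributes one row to a design matrix built from the eye-center coordinates, and the 6 coefficients of each polynomial are obtained independently for x and y. The sketch below illustrates this (variable names are ours, not the authors').

```python
import numpy as np

def fit_mapping(eye_pts, screen_pts):
    """Fit the second-order polynomial mapping of eqs. (3)-(4).
    eye_pts, screen_pts: (N, 2) arrays of eye centers and target points
    on the image plane (N >= 6; N = 9 in the calibration described here)."""
    xe, ye = eye_pts[:, 0], eye_pts[:, 1]
    # design matrix rows: [1, xe, ye, xe*ye, xe^2, ye^2]
    A = np.column_stack([np.ones_like(xe), xe, ye, xe * ye, xe**2, ye**2])
    ax, _, _, _ = np.linalg.lstsq(A, screen_pts[:, 0], rcond=None)  # a_11..a_16
    ay, _, _, _ = np.linalg.lstsq(A, screen_pts[:, 1], rcond=None)  # a_21..a_26
    return ax, ay

def map_gaze(eye_pt, ax, ay):
    """Map an eye center (x_ei, y_ei) to the gaze point (x_si, y_si)."""
    xe, ye = eye_pt
    feats = np.array([1.0, xe, ye, xe * ye, xe**2, ye**2])
    return feats @ ax, feats @ ay
```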
In this work an innovative integration process is implemented, as described in the next paragraph.
Movement Integration Process
This integration process concerns the adjustment of the eye gaze as a consequence of changes in the calibration plane orientation with respect to the camera, and the compensation of eye rotations against head rotations. These issues are mainly due to the user's inability to hold the head still for the whole duration of the calibration process; consequently, these movements reduce the system accuracy. The process is based on data gathered from the IMU. First of all, according to figure 10, we define O_h X_h Y_h Z_h as the cartesian system in the reference frame of the IMU; O X_cam Y_cam Z_cam as the cartesian system in the reference frame of the camera; O X_e Y_e as the cartesian system in the reference frame of the center of the eye; O' x_i y_i as the cartesian system on the image plane; f as the focal distance; c(x_c, y_c) as the projection of the central point of the calibration plane onto the image plane; and s_i(x_si, y_si) as the projection of P(x, y, z), a generic calibration point, onto the image plane. Moreover, we define the following rotations: q_h, y_h and f_h, the head rotation angles around the Y_h, X_h, and Z_h axes, respectively; and q_e and y_e, the eye rotation angles around Y_e and X_e, respectively. The Movement Integration (MI) is performed during the acquisition of the 9 target points s_i = (x_si, y_si), the 9 corresponding eye positions e_i = (x_ei, y_ei), and, synchronously, the Euler angles of the head (q_h, f_h and y_h, see fig. 10).
Figure 10. Representation of the head, IMU, image plane and real plane. In the figure, X_e, Y_e are the axes in the reference frame of the eye; X_cam, Y_cam, Z_cam are the axes in the reference frame of the camera; x_i, y_i are the axes in the reference frame of the image plane; X_h, Y_h, Z_h are the axes in the reference frame of the center of the head where the IMU is placed; y_h, q_h, f_h are the Euler angles of the head rotation while y_e, q_e are the Euler angles of the eye rotation; c(x_c, y_c) is the projection of the central point of the calibration plane on the image plane; s_i(x_si, y_si) is the projection of P(x, y, z), a generic calibration point, on the image plane.
In particular, the MI process performs both the realignment of the eye center position on the image plane when the VOR occurs and the remapping of the calibrated space onto the image plane when the head of the user rotates. Hence, at the end of the process, the mapping function uses the adjusted coordinates of the eye center (x_ei, y_ei) and the corrected coordinates of the calibration points s_i = (x_si, y_si), both referred to the image plane. The eye rotation angles are estimated by taking advantage of the VOR contributions by means of Vestibulo-Ocular calibration curves. These curves quantify eye rotations through a mathematical model that transforms eye rotations, expressed in degrees, into movements of the eye center along the vertical and horizontal axes, expressed in pixels (Crawford & Vilis, 1991). Here, the vestibulo-ocular curves are two curves (one for the rotation y around the x axis and the other for the rotation q around the y axis) computed by asking the user to rotate the head around the horizontal and vertical axes while fixing the gaze on a point "C" placed in front of him, while eye tracking, gaze, head movements and the scene are acquired over time. A linear fit is applied to both rotations to extract gain and offset, as expressed by the formulas:

$$\tilde{q}_e = G_{q_e} P_x + O_{q_e} \quad (5)$$
$$\tilde{y}_e = G_{y_e} P_y + O_{y_e} \quad (6)$$
Specifically, the adjusted eye rotations are computed according to the following equations:

$$q'_e \stackrel{M}{=} \tilde{q}_e + q_h + \Delta(q_h) \quad (7)$$
$$y'_e \stackrel{M}{=} \tilde{y}_e + y_h + \Delta(y_h) \quad (8)$$

where q'_e and y'_e are the corrected eye angles, q̃_e and ỹ_e are the eye angles of the specific subject wearing the system, computed as explained in eqs. (5) and (6), and Δ(q_h) and Δ(y_h) are obtained as a decomposition of f_h, the head rotation around the z axis. (The symbol M over the equals sign indicates a newly formulated equation.) More specifically, when a head rotation around z occurs the IMU provides a value of f different from zero (f ≠ 0); taking into account P_x and P_y, which are continuously provided by HAT-Move and are related to this variation of f, the values of Δq_e and Δy_e are obtained by means of equations (5) and (6). Afterwards the corrected angles are calculated using equations (7) and (8), and the corrected coordinates of the eye center are then obtained by applying equations (5) and (6) to the new eye angles.
The correction of the projection of the calibration points on the rotated image plane is carried out by means of geometrical considerations. Figure 11 shows a transverse view of the subject, the image plane and the calibration plane (or real plane) in the initial conditions. In this case, the subject is aligned with the central point of the calibration plane (point number five, named C, see fig. 14). Let us define a generic calibration point P.
Figure 13. Representation of the error after a positive q_h rotation and its projection onto the image plane p.
During a positive head rotation of q_h (see fig. 12), which is a reasonable head movement during the calibration process, the projection of P onto the image plane results in p' instead of p (see fig. 13). This means that an error is made in the calibration process and it will consequently propagate into the gaze estimation. Therefore, by taking into account the acquired head rotations, it is possible to correct the projection of P to its exact position by means of an algorithm based on the geometrical considerations reported in figure 13. In the case shown in fig. 13, x'_p is less than x_c, but other cases can be identified:
• x'_p > x_c and (q_0 + q_h) < 90°;
• x'_p > x_c and (q_0 + q_h) > 90°;
and for q_h < 0:
• x'_p > x_c and (q_0 − q_h) < 90°;
• x'_p < x_c and (q_0 − q_h) < 90°;
• x'_p < x_c and (q_0 − q_h) > 90°.
Considering all of the cases mentioned above, the correction along the x axis is expressed by a corresponding set of case-dependent equations. The y_p corrections are analogous to the x_p corrections, using the angle y_h instead of q_h; however, the relationships are inverted with respect to the sign of the angle, the cases depending on whether y'_p < y_c or not. The correction by f_h contributes to both the x and y coordinates; all cases for f_h < 0 and f_h > 0 are summarized by equations distinguishing whether (x'_p < x_c and y'_p < y'_c) or (x'_p > x_c and y'_p > y'_c). The corrected coordinates of both the eye center and the calibration points are then used in the mapping function system (eqs. 3, 4) to detect the new gaze point.
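To make the flow of equations (5)-(8) concrete, the sketch below gives one plausible reading of the correction pipeline: pixel displacements are converted to eye angles through the subject-specific VOR gain and offset, head rotation contributions are added, and the result is mapped back to pixel coordinates. This is a schematic interpretation for illustration only; variable names, the handling of Δ(·), and the sign conventions are assumptions rather than the authors' code.

```python
import numpy as np

def vor_calibration(pixel_disp, head_angles_deg):
    """Estimate the VOR gain/offset of eqs. (5)-(6) from a head-rotation trial
    in which the gaze is fixed, so eye rotation ~= -head rotation."""
    eye_angles = -np.asarray(head_angles_deg, dtype=float)   # fixed-gaze assumption
    gain, offset = np.polyfit(np.asarray(pixel_disp, dtype=float), eye_angles, 1)
    return gain, offset

def correct_eye_angle(px, head_angle_deg, delta_deg, gain, offset):
    """One axis of eqs. (5)/(7): pixel position -> eye angle -> corrected angle."""
    eye_angle = gain * px + offset                 # eq. (5) or (6)
    return eye_angle + head_angle_deg + delta_deg  # eq. (7) or (8)

def angle_to_pixels(angle_deg, gain, offset):
    """Invert eq. (5)/(6) to express the corrected angle as an eye-center coordinate."""
    return (angle_deg - offset) / gain
```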
Experimental setup
This section deals with two experimental setups. The first aims at defining a protocol to validate the accuracy of HAT-Move and the relevance of the correction process (hereinafter named Accuracy Estimation) in laboratory settings, while the second is mainly a proof of concept concerning the use of the HAT-Move system in a 3D VR scenario (hereinafter named VR Estimation). To this extent we evaluated the applicability of HAT-Move with the correction process in a quasi-naturalistic environment in which the subject is free to move his head and to walk around in the VR space.
Accuracy Estimation. The first experiment was performed by a group of 11 subjects who did not present any ocular pathologies. All of the subjects were asked to sit on a comfortable chair placed in front of a wall (3 x 3 m²) at a distance of 2 meters, while wearing the system. The wall surface consisted of a set of black squares (2 cm per side) immersed in a white background. This controlled environment permitted verification of the system functionality during the whole experimental session. More specifically, the experiments were divided into two main blocks: the first related to the computation of the VOR curves, and the second related to the estimation of the eye gaze tracking. The 11 recruited subjects were of both genders and had different eye colors: 8 subjects had dark eyes and 3 had bright eyes. The average age was 27.8 years. In the first session, the VOR curves were computed on the whole group of subjects by asking them to rotate their head first around the x axis (y angle) and then around the y axis (q angle) while fixing point C placed in front of them. Possible involuntary movements of the head around the azimuth z axis (f angle) were considered through their contributions along both y and q. These calibration curves are intended to be used for solving the equation system (5)-(6) for each subject. Specifically, the system can be solved by imposing some constraints. The first constraint regards the initial condition: when the user is looking at the central calibration point C, before moving the head, the head angles are forced to be null, and P_x and P_y are taken equal to the starting eye coordinates extracted from the ROI. During the required movements around the axes, with the gaze fixed while the head rotates, the IMU values correspond exactly to the eye rotation angles but with opposite sign (these angles were captured at a sampling frequency of 100 Hz). Therefore, by a linear fit applied to both rotations, gains were extracted for each subject. Afterwards, using the specific average gains (G_qe, G_ye) in the equation system (5)-(6), each subject-specific offset was computed using the initial condition, where the eye angles are null. At the end of the process, given G_qe, G_ye, O_qe, and O_ye, as well as P_x and P_y in the image plane, all corresponding eye angles (q̃_e, ỹ_e) are determined. Moreover, data both with and without the VOR contribution were collected and compared. The second session was organized into two phases. During the first phase the subjects were invited to look at the central point of the calibration plane (point C in fig. 14), the initial condition, and then at the other calibration points, indicated by numbers, in an arbitrary order (fig. 14). Simultaneously, the experimenter marked the corresponding point seen on the image plane.
In the second phase, the subjects were invited to follow with their eyes the squares indicated by letters. This second phase was performed in two configurations. The first configuration was carried out with a chin support, so that the head was completely still. The second configuration was conducted without any support, so that the head was free to move. The results of the two configurations were statistically compared.
VR Estimation. Eleven subjects were enrolled for the 3D VR scenario experiment. Our VR application consists of an immersive room equipped with a number of sensors and effectors, including three projection screens, a sound system, and a tracking system (fig. 15). The subjects were asked to wear the system and, after the calibration phase, to walk freely around the room looking at the three screens. A total of 10 circles were shown one by one, in random order and at unknown positions (see fig. 15). The whole experiment lasted 10 minutes for each subject. During the calibration phase the head movement correction was performed. The accuracy of the system was calculated in terms of the median of the angular errors between the circle positions and the estimated eye positions across all the subjects. Moreover, a statistical comparison was performed between the accuracy results obtained in the laboratory and in the VR conditions.
Experimental results
In this section we report on the achieved results for both experimental setups.
Accuracy estimation
Here, the VOR curves were computed, and the accuracy of the system along the x and y axes was evaluated in terms of angular error. In particular, by means of the angular error, we evaluated the effect of the correction process by comparing the gaze point in three different "modalities". The first was computed with the head completely still, making use of a chin support (hereinafter called Stationary Mode); the second and third were performed without any support. More specifically, the second was computed applying only the integration of the calibration plane orientation changes (hereinafter called Screen Point), and the third was obtained applying both the integration of plane orientation changes and the VOR contribution (hereinafter called Screen Point + VOR).
Table 1 reports average gain and standard deviation of VOR for both vertical (q) and horizontal (y) axes.
Table 1 Average Gain VOR
G_qe = 52.44 ± 2.83, G_ye = 61.36 ± 3.46. These values were then used as gain corrections to estimate the specific offset for each subject. The accuracy was then expressed as an angular error, where d_pixel represents the distance between the subject and the calibration plane. Tables 2 and 3 show the median and median absolute dispersion of the errors per subject, expressed in degrees, in the stationary head condition for the x and y axes (similar information is shown in figures 16-17). In particular, the first column refers to the values without any correction, the column Screen Point refers to the error values, on the image plane, with the correction of the calibration plane orientation only, actuated by the IMU, and the column Screen Point + VOR refers to the values, on the image plane, with the MI of both calibration plane orientation and VOR together. The corrections are reported for the three head rotation angles q, f, and y. Tables 4 and 5 show the median and median absolute dispersion of the errors per subject, expressed in degrees, in the free head movement condition for the x and y axes (the same information can be seen in figures 18-19). In these Tables the columns report only the values for Screen Point, which refers to the error values obtained applying the correction of the calibration plane orientation, and Screen Point + VOR, which refers to the values obtained applying the correction of both calibration plane orientation and VOR together; the corrections are again reported for the three head rotation angles q, f, and y. The head movements corresponded to an average rotation amplitude of 20 degrees around the three axes. Tables 4 and 5 do not report the column "Without MI" present in Tables 2 and 3, because the calibration process cannot be computed in the case of head movement: calibration would produce meaningless, essentially random values. As a matter of fact, Figure 20 reports an example of calibration during head movement; it can be noticed that the calibration points are concentrated on a small part of the image plane, making the system completely unusable. Table 6 shows the results of the Friedman non-parametric test comparing the conditions with and without MI. More specifically, the test returns the probability that the different samples do not belong to the same population. A pairwise comparison was performed for every pair of samples after the rejection of the null hypothesis, carrying out a Mann-Whitney test with a Bonferroni adjustment. Results show that in Stationary mode the errors between actual and computed coordinates of the gaze point estimated with and without MI are statistically equivalent, while when the head is moving the errors belong to different populations if MI (Screen Point + VOR) is not used. On the contrary, after the integration of the head movement, the statistical analysis showed that no significant difference remains between the medians in the stationary head (stationary) and head free to move (movement) conditions.
VR estimation
Results achieved in the VR scenario are reported in Table 7 and are expressed in terms of angular errors along the X and Y axes. The Table shows, for each subject, the median ± the median absolute dispersion of the angular errors computed on the gaze points of the 10 circle targets. The last row represents the inter-subject median angular error, which was equal to 1.45 degrees for the X coordinate and 1.54 degrees for the Y coordinate. Moreover, a statistical comparison between the results achieved in the laboratory and virtual reality conditions was performed by means of a Mann-Whitney test. For both coordinates, no significant differences were found between the two experimental setup conditions (X p-value > 0.05; Y p-value > 0.05), confirming the usability of HAT-Move even in free-movement conditions.
Real-time estimation
The execution time of the main tasks and of the entire software integrated into the system was about 34.47 ms. The working frequency is therefore about 29 Hz, which is greater than the camera sampling frequency (25 Hz); the real-time requirement is thus fulfilled.
Discussion and conclusion
Even though many current HMEGT systems are used without any integration of head movement, either for the image plane orientation or for the Vestibulo-Ocular Reflex, they are used with partial head movements. This study pointed out that HMEGT systems are strongly affected by head movements. More specifically, when the chin support is used the angular errors are acceptable. This conclusion is supported by the literature and also confirmed by this study (see figures 16 and 17 and Tables 2 and 3): no strong divergence between the median values of the errors along x and y is present, as confirmed by the Friedman non-parametric test for paired samples with Bonferroni post-hoc correction for multiple comparisons reported in Table 6, where no significant difference is shown for the stationary mode. However, the same statistical tests highlighted that when the head moves even slightly the errors increase dramatically. It is worth noting that the multiple comparisons reported in the figures showed that during head movements the gaze point diverges with errors of 4-5 degrees. This experimental evidence suggests that the proposed corrections are essential to achieve accurate eye gaze tracking in dynamical conditions. In fact, since eye tracking systems are often used in medical environments for detecting pathologies ranging from the behavioral to the neurological field, such errors could lead to misleading interpretations. The effectiveness of the head movement integration has been proven by the statistical comparison along the x and y directions. In fact, it is possible to reduce the angular errors to the point of achieving no statistical difference between stationary mode and head movements, showing that the system keeps the same accuracy in both modalities. The estimated median error with head movement is reduced from 5.32 degrees with a standard deviation of 2.81 (without VOR correction) to 0.85 degrees with a standard deviation of 0.44, and from 4.70 degrees with a standard deviation of 3.94 (without VOR correction) to 1.78 degrees with a standard deviation of 1.05, for the x and y axes, respectively. The obtained accuracy results confirm the reliability and robustness of the proposed system. Moreover, the difference between the accuracy along x and y can be due to the angular position of the camera (which is above the eyes), which reduces the accuracy of the vertical positions of the pupil. In addition, the system fulfilled the real-time requirement, the execution time of the algorithm being lower than the time interval between two consecutive video frames. In order to test the system in conditions in which the subject was completely free to move and walk, we developed an experiment in a virtual reality environment, asking the subjects to look at random points on the screens of the VR room while moving around it. Accuracy was equal to 1.45 degrees for the X coordinate and 1.54 degrees for the Y coordinate, and no significant differences were found with respect to the accuracy in the laboratory conditions. This result confirms the robustness of the proposed system even in scenarios similar to real environments. The main limitation of the system is the low frame rate of the camera, which does not allow the system to acquire fast saccadic movements, known to be in the time range of 30 ms, while it is able to acquire slow saccadic movements of around 100 ms. The proposed system relies on low-cost hardware and is extremely lightweight, unobtrusive, and aesthetically attractive, providing good acceptability by the end users. Thanks to these properties
and to its technological specifications, the HAT-Move system allows investigating how humans continuously interact with the external environment. This ecological approach (Bronfenbrenner, 1977) could be pursued either at the individual or at the community level, with the aim of analyzing and coding both activities and relationships. More specifically, this kind of information can be really useful in studying non-verbal social behavior in both healthy and impaired people (e.g. people affected by behavioral pathologies such as autistic disorders) as well as in improving the scientific knowledge of human interpersonal relationships. Furthermore, the HAT-Move system has already been shown to be useful for studying eye patterns as a response to emotional stimulation, with good and promising results (Lanatà et al., 2013; Lanata, Armato, et al., 2011). As a matter of fact, eye feature patterns could provide a new point of view in the study of personal and interpersonal aspects of human feelings. In this way, eye information, which we have already shown to be informative about the emotional response, could be integrated with other sets of physiological data (Betella et al., 2013) such as cardiac information (Lanata, Valenza, Mancuso, & Scilingo, 2011), heart rate variability (Valenza, Allegrini, Lanata, & Scilingo, 2012), respiration activity (Valenza, Lanatá, & Scilingo, 2013), and electrodermal response (Greco et al., 2012), in a multivariate approach (Valenza et al., 2014), in order to create a complete descriptive set of data able to explain the non-verbal phenomenology behind the implicit (autonomic nervous system) and explicit human responses to external stimulation in real or virtual scenarios (Wagner et al., 2013). It could also be really helpful in the investigation of several pathologies, where the unobtrusiveness of the instrumentation allows monitoring the naturally evoked responses of participants (Lanatà, Valenza, & Scilingo, 2012; Valenza, Lanata, Scilingo, & De Rossi, 2010; Lanatà et al., 2010; Armato et al., 2009; Valenza et al., 2010; Lanata, Valenza, et al., 2011). This applies both to healthy participants and to subjects suffering from pathologies such as autism (Mazzei et al., 2012) or alterations of mood (Valenza, Gentili, Lanata, & Scilingo, 2013), in which social skills have a strong impact on lifestyle, and it should improve the scientific knowledge of human interpersonal relationships, emotions, etc. Moreover, to achieve these aims, further efforts will be devoted to integrating a high-speed, high-resolution camera in order to capture fast saccadic movements and provide better accuracy. In addition, an optimization process will be addressed to develop new multithreading algorithms based on the interocular distance in order to obtain a 3D eye tracking system.
This work is supported by the European Union Seventh Framework Programme under grant agreement n. 258749 CEEDS.
Figure 4 .
Figure 4. Block diagram showing all the algorithmic stages of the processing of eyes and outside scene.
Figure 5 .
Figure 5. Example of a single frame captured by the camera. The rectangular area marked in red represents the ROI.
Figure 6 .
Figure 6. Red component of the ROI.
Figure 7 .
Figure 7. Eye image after the application of illumination normalization algorithm by DCT
Figure 8 .
Figure 8. Results of the pupil tracking and ellipse fitting algorithm. a) In blue, the geometrical construction for pupil contour detection; contour points are shown in yellow. b) In red, the fitted ellipse is highlighted.
Figure 9 .
Figure 9. Block Diagram of the mapping function calculation process.
Figure 11 .
Figure 11. Initial condition of the calibration.
Figure 12 .
Figure 12. Representation of the positive q_h rotation.
Figure 15 .
Figure 15. Virtual Reality environment used for the completely free movement scenario.
Figure 16 .
Figure 16. Box plot of the statistical comparison of the Median and Median Absolute Dispersion (MAD) along the x axis, with and without head movements; in the case of head movements, without any VOR correction.
Figure 17 .
Figure 17. Box plot of the statistical comparison of the Median and Median Absolute Dispersion (MAD) along the y axis, with and without head movements; in the case of head movements, without any VOR correction.
Figure 18 .
Figure 18. Box plot of the statistical comparison of the Median and Median Absolute Dispersion (MAD) along the x axis, with and without head movements; in the case of head movements, with VOR correction.
Figure 19 .
Figure 19. Box plot of the statistical comparison of the Median and Median Absolute Dispersion (MAD) along the y axis, with and without head movements; in the case of head movements, with VOR correction.
Figure 20 .
Figure 20. Calibration of Subject 6. The 9 calibration points are marked in blue; the calibration was performed during head movements. It is an example of the errors produced by calibration under uncorrected head movement.
Table 2
X accuracy with head in stationary mode: Median 0.85±0.31 (Without MI), 0.84±0.37 (Screen Point), 0.82±0.37 (Screen Point + VOR)
Table 3
Y accuracy with head in stationary mode
Table 4
X accuracy with the head free to move, corrections are reported for three rotation angles
Table 5
Y accuracy with the head free to move; corrections are reported for the three rotation angles
Table 6
Results of the Friedman non parametric test with Bonferroni correction applied to Stationary Mode and Head Movement conditions "without MI" and with "Screen point + VOR" for x and y axes, respectively.In the Table p values are shown.
Table 7
Evaluation results of median ± MAD angular error computed for X and Y axes in a 3D VR scenario
|
v3-fos-license
|
2023-10-15T15:03:59.661Z
|
2023-09-17T00:00:00.000
|
264112114
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://annals-csis.org/proceedings/2023/drp/pdf/2385.pdf",
"pdf_hash": "59d2a42ea038b62ded1cb8303fca88f2db170bea",
"pdf_src": "IEEE",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42357",
"s2fieldsofstudy": [
"Computer Science"
],
"sha1": "1f6153d82e81d5b976c1e634d187d67acc4a9dbc",
"year": 2023
}
|
pes2o/s2orc
|
Tackling Variable-Length Sequences with High-Cardinality Features in Cyber-Attack Detection
Internet of Things (IoT) based systems are vulnerable to various cyber-attacks and need advanced and smart techniques to achieve security. In the FedCSIS 2023 big-data competition, participants were asked to construct scoring models to detect whether anomalous operating systems were under attack, using logs from IoT devices. These log files are variable-length sequences with high-cardinality features. Through in-depth and detailed analysis, we identify concise and efficient methods to handle the huge volume, variety, and veracity of these data. On this basis, we create detection rules using fundamental mathematical statistics and train a gradient boosting machine (GBM) based classifier for attack detection. Experimental and competition results prove the effectiveness of our proposed methods. Our final AUC score is 0.9999 on the private leaderboard.
I. INTRODUCTION
Internet of Things (IoT) plays an essential role in remote monitoring and control operations. IoT based systems are widely used in the fields of environment, home automation, healthcare, smart grid, transportation, agriculture, military, surveillance, etc. In 2023, the number of devices connected to networks is expected to be 3 times higher than the global population [1]. With the IoT, sensors collect, communicate, analyze, and act on information. This offers new ways for technology, media and telecommunications businesses to create value, but it also creates new opportunities for that information to be compromised. IoT-connected systems, applications, data storage, and services become a new gateway for cyber-attacks, as they continuously offer services while lacking adequate security protection. In 2020, nearly 1.5 billion cyber-attacks on IoT devices were reported [1]. These attacks may steal important and sensitive information, causing economic and societal damage. To address critical challenges related to the authentication and secure communication of IoT, many researchers (such as Jarosz et al. [2]) have developed various authentication and key exchange protocols for IoT devices. But software piracy and malware attacks remain high risks that compromise the security of IoT. This brings with it a particular challenge: securing IoT based systems against cyber-attacks.
In the FedCSIS 2023 challenge, Cybersecurity Threat Detection in the Behavior of IoT Devices [3], participants are asked to construct scoring models to detect whether anomalous operating systems were under attack by using logs from IoT devices. This competition has important theoretical and practical value for increasing IoT cyber security. It provides rich and detailed data for participants to analyze cyber-attacks from various perspectives and to train and test their models. Thereby we can understand attackers' intent, learn their behavior, and track the tactics, techniques, and procedures that they utilize to achieve their goals. We believe that the predictive models thoughtfully and elaborately constructed by each participant will help to detect attacks as early as possible, determine the scope of a compromise rapidly, predict how attacks will progress, and eventually empower organizations to respond better to attacks.
In the past decade, traditional machine learning techniques (such as Support Vector Machines, Decision Trees, K-Nearest Neighbors, Random Forests, Naive Bayes, etc.) have been widely used by the cyber security community to automatically identify IoT attacks. Many papers (such as [4]) have provided reference implementations of state-of-the-art machine learning methods for data preprocessing, feature engineering, model fitting, and ensemble blending. Paper [5] discusses in detail the existing machine learning and deep learning solutions for addressing different security problems in IoT networks. However, with the continuous expansion and evolution of IoT applications, attacks on these applications continue to grow rapidly.
The complexity and quantity of attacks push for more efficient detection methods. In recent years, deep learning techniques have been used in an attempt to build more reliable systems. For example, Martin Kodys et al. proposed a novel solution that deployed two CNN architectures (ResNet-50 and EfficientNet-B0) on the same data to observe how their performance differs in detecting intrusion attacks against IoT devices [6]. Kumar Saurabh et al. developed Network Intrusion Detection System (NIDS) models based on variants of LSTMs (namely, stacked LSTM and bidirectional LSTM) and validated their performance [7]. Compared with traditional machine learning, deep learning brings an end-to-end approach combining feature selection and classification, which can speed up the defense response against fast-evolving cyber-attacks. However, some authors claim that deep learning methods are far better than traditional machine learning models in terms of accuracy and precision, with the ability to handle large amounts of data, and that the inability to scale to large data poses a major limitation to the extensive use of any conventional machine learning model [7]. This is not always the case.
In this competition, we apply basic data processing approaches and leverage the feature selection and model building methods mentioned in our ICME2023 paper [8], combined with fundamental knowledge of mathematical statistics, for cyber security threat detection. Our methods are fast and accurate, and achieve near-perfect prediction results. Our work provides an example of processing large-scale data and extracting effective features to obtain better detection accuracy at lower computational cost.
The paper is structured as follows: Section II introduces data analysis and processing methods.Section III applies basic knowledge of mathematical statistics to construct rules for attack detection.Section IV discusses how to perform feature selection and build binary classification models for threat prediction.Section V explains the experiment design and presents the results of the experiments.Section VI discusses the pros and cons of our proposed approaches and suggests future research directions.
II. DATA ANALYSIS AND PROCESSING
The available training data and test data in this competition contain 15027 and 5017 log files, respectively. Each log file includes 40 fields and contains 1 minute of logs of all related system calls. There are a total of 28,339,158 and 10,060,209 lines of records in the training and test sets, respectively, and the size of the data set is over 21.4 gigabytes. Therefore, one of the main tasks of this competition is to analyze and process these data efficiently and thereby construct effective features for attack detection.
In the training set, 522 files were identified as being under attack; the chance of cyber-attack is therefore 3.47375%. After the end of the competition, the organizer published the labels of the test set for the participants to do further research. There are 176 files under attack in the test set. It appears that the data set was divided in a "stratified K-Fold" manner so that the test set has the same proportion of the target variable as the entire data set.
Suppose "/proc/647524/stat" is an ordinary event, then the probability that "/proc/647524/stat" consecutively occurs 169 times in and only in the attacked files is 0.0347^169 = 0.According to the impossibility principle of small probability events, a small probability event is practically impossible to happen in a single trial.And once it does happen, we can reasonably reject the null hypothesis.In fact, it only needs five consecutive occurrences, then we can reasonably infer that an event has close relationship with cyber-attack.
By applying the above 6 rules, we are able to accurately detect 169 compromised files from the test set.
Furthermore, using the same method, we can confirm that 4003 files are secure (i.e., there are no attack events in these log files).
Applying these simple rules for threat prediction yields an AUC = 0.9985 on the test set.
IV. FEATURE SELECTION AND MODEL BUILDING
The aforementioned rule-based intrusion detection methods use only a small fraction of the data and cannot take advantage of the complex nonlinear relationships between features. In this section we apply the sequential floating forward and backward (SFFB) feature selection method [8] for feature selection, and train a binary classification model based on GBM for attack prediction.
When creating features, we use the target encoding method to replace categorical values with the mean of the target variable, and introduce a smoothing parameter to regularize towards the unconditional mean. We found this helpful in improving the predictive performance of the subsequent algorithms. We also find that the "K-fold target encoding" preferred by many practitioners does not mitigate overfitting risks; in fact, for high-cardinality features the "K-fold target encoding" method leads to serious data leakage. This can be easily verified.
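A minimal sketch of smoothed target encoding of the kind described above is shown below; the smoothing weight m, the target column name, and the usage lines are illustrative choices, not values taken from our actual pipeline.

```python
import pandas as pd

def smoothed_target_encode(train, col, target="is_attack", m=20.0):
    """Replace each category by a weighted mean of its observed attack rate
    and the global attack rate; m controls the pull towards the global mean."""
    global_mean = train[target].mean()
    stats = train.groupby(col)[target].agg(["mean", "count"])
    enc = (stats["count"] * stats["mean"] + m * global_mean) / (stats["count"] + m)
    return enc  # pandas Series: category -> encoded value

# usage sketch (column names are hypothetical)
# enc = smoothed_target_encode(train_df, "PROCESS_PATH")
# train_df["PROCESS_PATH_enc"] = train_df["PROCESS_PATH"].map(enc)
# test_df["PROCESS_PATH_enc"] = test_df["PROCESS_PATH"].map(enc).fillna(train_df["is_attack"].mean())
```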
After feature encoding, we calculate the maximum, minimum and average chance of being attacked for each field, and we also count the number of basic items each field contains. These features are then concatenated to form a fixed-length feature set, and the SFFB method is used to select features. The optimal subsets selected by the SFFB method are somewhat random; in most cases the selected subset contains only 10 features, such as: PROCESS_comm_count, PROCESS_exe_count, PROCESS_PATH_mean, CUSTOM_openFiles_max, CUSTOM_openFiles_min, SYSCALL_pid_min, SYSCALL_pid_mean, SYSCALL_pid_count, PROCESS_name_mean, PROCESS_name_count. Here *_max, *_min and *_mean denote the maximum, minimum and average attacked chance of the field, and *_count denotes the number of basic items in the field.
Training a GBM model with these 10-dimensional features leads to a classification result of AUC = 0.9997 on the test set. Figure 1 shows the gain contribution of these features.
V. EXPERIMENT DESIGN AND EXPERIMENT RESULTS
Cybersecurity threat detection is typically a majority-minority classification problem. Class imbalance in the dataset can dramatically skew the performance of classifiers; therefore a reliable cross-validation method is essential to train a good classifier.
In our experiments, we estimate the performance of the classifier using 3-fold cross-validation. At each fold, we completely hide the validation set when processing data and performing feature engineering. The average AUC score of the 3-fold cross-validation is 0.9997 in local tests. However, the classifiers trained in this way cannot achieve optimal scores on the public leaderboard; in fact, when the local CV score is greater than 0.998, its trend is no longer consistent with the trend of the public leaderboard. To address this problem, we randomly select 2/3 of the data from the training set at a time to train several classifiers, and then take a weighted average of the prediction results of each classifier. In this way, we try to eliminate the effects of class imbalance and sample bias.
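The following sketch illustrates this subsample-and-average scheme with a LightGBM classifier; the choice of LightGBM, the hyperparameters, and the equal weights are illustrative assumptions (the paper only states that a GBM-based classifier was trained and that the results were weight-averaged).

```python
import numpy as np
from lightgbm import LGBMClassifier

def bagged_gbm_predict(X, y, X_test, n_models=5, frac=2/3, seed=0):
    """Train several GBMs on random 2/3 subsamples of (X, y) and average their scores.
    X, y, X_test are numpy arrays; returns attack probabilities for X_test."""
    rng = np.random.default_rng(seed)
    preds = np.zeros(len(X_test))
    for _ in range(n_models):
        idx = rng.choice(len(X), size=int(frac * len(X)), replace=False)
        model = LGBMClassifier(n_estimators=500, learning_rate=0.05)
        model.fit(X[idx], y[idx])
        preds += model.predict_proba(X_test)[:, 1]
    return preds / n_models   # equally weighted average of the classifiers
```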
Finally, we ensemble the results obtained from the rule-based prediction with those predicted by the GBM model, and achieve an AUC score of 0.9999 on the private leaderboard. After the organizer published the labels of the test set, we found that by correctly ensembling the prediction results from Sections III and IV, we could obtain an AUC score of 0.99995 on the test set, equivalent to a total accuracy of up to 99.88%. The ensemble method is as follows (see also the sketch after this list):
1. If the rule-based prediction is equal to 1, then: ensemble result = 0.85 + 0.15 * GBM prediction.
2. If the rule-based prediction is equal to 0, then:
ensemble result = 0.15 * GBM prediction.
3. Otherwise, ensemble result = GBM prediction.
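A direct transcription of this rule into code might look as follows; the encoding of an inconclusive rule prediction as NaN is our assumption, not part of the original description.

```python
import numpy as np

def ensemble(rule_pred, gbm_pred):
    """Blend rule-based decisions with GBM probabilities as described above.

    rule_pred: array with 1 (attack), 0 (secure) or np.nan (rules inconclusive)
    gbm_pred:  array of GBM attack probabilities in [0, 1]
    """
    out = np.asarray(gbm_pred, dtype=float)                 # default: GBM prediction
    out = np.where(rule_pred == 1, 0.85 + 0.15 * out, out)  # rules say attack
    out = np.where(rule_pred == 0, 0.15 * out, out)         # rules say secure
    return out
```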
The total time (including data processing, feature construction, feature selection, classifier training, and target prediction) required to obtain this result on our i7-10700 desktop is less than 30 minutes.
VI. CONCLUSION AND FUTURE WORK
In this cyber security threat detection challenge, we applied only fundamental machine learning methods, yet achieved near-perfect detection results. Many big-data competition participants like to apply ready-to-use GBM or deep learning frameworks. They prefer end-to-end approaches that automate data processing, feature selection and classification, and expect to get good answers just by tuning parameters. But our experiments show that each algorithm has its own application scenarios.
In this competition, we conducted an in-depth, detailed analysis of the massive data and proposed concise and efficient methods to process it. (A significant portion of our work is C++ programming; to master the methodologies and techniques of contemporary C++ in the age of new technologies and challenges, one can start by reading paper [9].) Our proposed approaches are useful for solving variable-length, high-dimensional and high-cardinality problems.
However, our detection method still has an obvious limitation: it is good at detecting known attacks but may fail to detect attacks that have not been seen before. As more and more IoT devices are added, the potential for new and unknown threats grows exponentially. For this reason, an intelligent security framework for IoT networks must be developed that can identify such threats (e.g., detect any anomaly arising from a deviation from the normal behavior of the IoT network, or monitor network traffic to identify potential threats). In these research directions, conventional machine learning methods will still play an important role.
Table 1 .
Example of statistical analysis results of column 2
Table 2 .
Rules used for attack detection
|
v3-fos-license
|
2020-11-11T14:19:06.108Z
|
2020-12-14T00:00:00.000
|
230576759
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://jintensivecare.biomedcentral.com/track/pdf/10.1186/s40560-021-00538-8",
"pdf_hash": "a89b23984da99c56de7599daa03d60e652951741",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42358",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "92c8458534c5902da737c346eea7d0f617b7d176",
"year": 2021
}
|
pes2o/s2orc
|
Predictors of failure with high-flow nasal oxygen therapy in COVID-19 patients with acute respiratory failure: a multicenter observational study
Purpose We aimed to describe the use of high-flow nasal oxygen (HFNO) in patients with COVID-19 acute respiratory failure and factors associated with a shift to invasive mechanical ventilation. Methods This is a multicenter, observational study from a prospectively collected database of consecutive COVID-19 patients admitted to 36 Spanish and Andorran intensive care units (ICUs) who received HFNO on ICU admission during a 22-week period (March 12-August 13, 2020). Outcomes of interest were factors on the day of ICU admission associated with the need for endotracheal intubation. We used multivariable logistic regression and mixed effects models. A predictive model for endotracheal intubation in patients treated with HFNO was derived and internally validated. Results From a total of 259 patients initially treated with HFNO, 140 patients (54%) required invasive mechanical ventilation. Baseline non-respiratory Sequential Organ Failure Assessment (SOFA) score [odds ratio (OR) 1.78; 95% confidence interval (CI) 1.41-2.35], and the ROX index calculated as the ratio of partial pressure of arterial oxygen to inspired oxygen fraction divided by respiratory rate (OR 0.53; 95% CI: 0.37-0.72), and pH (OR 0.47; 95% CI: 0.24-0.86) were associated with intubation. Hospital site explained 1% of the variability in the likelihood of intubation after initial treatment with HFNO. A predictive model including non-respiratory SOFA score and the ROX index showed excellent performance (AUC 0.88, 95% CI 0.80-0.96). Conclusions Among adult critically ill patients with COVID-19 initially treated with HFNO, the SOFA score and the ROX index may help to identify patients with higher likelihood of intubation. Supplementary Information The online version contains supplementary material available at 10.1186/s40560-021-00538-8.
Background
The novel coronavirus 2019 infection has spread worldwide causing thousands of cases of acute respiratory failure with an associated high mortality rate [1,2]. Critically-ill patients with COVID-19 often have profound hypoxemia which may partially explain the extremely high use of invasive ventilatory support for long periods of time shown in these subjects [3,4]. This issue, combined with the sharp rise in the incidence of this disease, has led to an unprecedented pressure on many healthcare systems and hospitals worldwide [4][5][6][7].
High-flow nasal oxygen (HFNO) reduces the need for endotracheal intubation in patients with acute respiratory failure [8][9][10]. In the last few months, several studies have reported experiences with HFNO therapy in patients with COVID-19 [11,12]. Also, a recent publication suggested that HFNO compared to oxygen therapy could decrease the requirements for invasive mechanical ventilation in these patients [13]. If validated, the use of HFNO would not only be beneficial for individual patients treated noninvasively but also to those planned for invasive mechanical ventilation through the rational allocation of resources. Conversely, delaying intubation by choosing a non-invasive approach may be associated with worse outcomes in patients with the acute respiratory distress syndrome (ARDS) [3,[14][15][16]. Therefore, identifying those at higher risk of failure could be highly valuable for avoiding delays in choosing the best management approach.
In this study, we sought to describe the use of HFNO in adult patients with COVID-19 acute respiratory failure and to identify factors associated with a greater risk of intubation. We also aimed to derive a parsimonious predictive score for intubation as an aid in daily clinical decision-making.
Study design and setting
We conducted a prospective, multicenter, cohort study of consecutive patients with COVID-19 related acute respiratory failure admitted to 36 hospitals from Spain and Andorra (see Supplementary file) [17]. The study was approved by the referral Ethics Committee of Hospital Clínic, Barcelona, Spain (code #HCB/2020/0399) and was conducted according to the amended Declaration of Helsinki. This report follows the "Strengthening the Reporting of Observational Studies in Epidemiology (STROBE)" guidelines for observational cohort studies [18]. Gathering of data is ongoing and as of August 13, a total of 1129 patients were included.
Study population
For the present study, all consecutive patients included in the database from March 12 to August 13, 2020 that fulfilled the following inclusion criteria were analyzed: age ≥18 years, ICU admission with a diagnosis of COVID-19 related acute respiratory failure, positive confirmatory nasopharyngeal or pulmonary tract sample, and HFNO initiated on ICU admission day. Exclusion criteria were the use of oxygen therapy and non-invasive or invasive mechanical ventilation prior to HFNO or the absence of data regarding respiratory management on day 1 after ICU admission.
Data collection
Patients' characteristics were collected prospectively from electronic medical records by physicians trained in critical care according to a previously standardized consensus protocol. Each investigator had a personal username/password, and entered data into a specifically predesigned online data acquisition system (CoVid19.ubikare.io) endorsed and validated by the Spanish Society of Anesthesiology and Critical Care (SEDAR) [19]. Patient confidentiality was protected by assigning a de-identified code. Recorded data included demographics [age, gender, body mass index (BMI)], comorbidities and disease chronology [time from onset of symptoms and from hospital admission to initiation of respiratory support, ICU length of stay], vital signs [temperature, mean arterial pressure, heart rate], laboratory parameters (blood test, coagulation, biochemical), ratio of oxygen saturation to inspired oxygen fraction, divided by respiratory rate (ROX) index, and severity scores such as the Sequential Organ Failure Assessment (SOFA) and Acute Physiology and Chronic Health Evaluation II (APACHE II) scores. Data regarding physiological parameters was collected once daily. Site investigators collected what they considered to be the most representative data of each day from ICU admission to ICU discharge. After ICU discharge, patients were followed-up until hospital discharge.
Study outcomes
The primary outcome was the assessment of factors at ICU admission (ICU day 1) associated with the need for endotracheal intubation up to 28 days after HFNO initiation. The decision to intubate was made at the discretion of the attending physician at each participating site. Secondary goals were the development of a predictive model to estimate the probability of endotracheal intubation after HFNO and the assessment of between-center variability in the likelihood of receiving intubation after HFNO had been started.
Statistical analysis
We used descriptive statistics to summarize patients' baseline characteristics. We compared the baseline characteristics of patients who required intubation with those who did not require intubation. Specifically, continuous variables were compared with the T test with unequal variances or the Mann-Whitney U test, as appropriate. Categorical variables were compared using the chi-square tests or Fisher's exact test as appropriate. In order to identify factors associated with the likelihood of intubation, we fit a multivariable logistic regression model with endotracheal intubation as the dependent variable. A priori selected variables were those considered of clinical relevance as well as variables that were significantly associated with the outcome in the bivariate analysis (at a p value threshold of 0.2 or less). We report odds ratios (OR) with their associated 95% confidence intervals (CI).
Then, we sought to derive a parsimonious predictive model for intubation among patients treated with HFNO on the first day of ICU admission. Thus, we randomly split the full dataset in two parts: (1) a training dataset including 70% of the patients, and (2) a validation dataset including the remaining 30% of subjects. In the derivation step, all variables showing statistical significance with the outcome were chosen, and a final model based on the best accuracy was selected after performing tenfold cross-validation. The final model calibration was tested in the split validation cohort with the use of the Brier score. A receiver operating characteristic (ROC) curve was constructed to display the area under the curve (AUC) for the predictive model. The optimal cutoff was considered as the one showing the best accuracy. At this cutoff, the performance of the model is presented as sensitivity, specificity, positive and negative predictive values, and positive and negative likelihood ratios and their accompanying 95% CI. An online calculator is shown to estimate the likelihood of HFNO failure for each individual patient. Since validation datasets with few observations can provide imprecise estimates of performance, a sensitivity analysis to assess final model performance using enhanced bootstrapping was also carried out [20].
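The analysis itself was carried out in R (mice, lme4, caret, OptimalCutpoints, performance, pROC); purely as an illustration of the derivation-and-validation workflow described above, the sketch below reproduces the main steps (70/30 split, logistic model on the two retained predictors, discrimination and calibration on the held-out subset) in Python with hypothetical column names.

```python
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss

def derive_and_validate(df, predictors=("nonresp_sofa", "rox_index"), outcome="intubated"):
    """70/30 split, fit a logistic model on the selected predictors,
    report discrimination (AUC) and calibration (Brier score) on the validation split."""
    X_train, X_val, y_train, y_val = train_test_split(
        df[list(predictors)], df[outcome],
        test_size=0.30, random_state=42, stratify=df[outcome],
    )
    model = LogisticRegression().fit(X_train, y_train)
    p_val = model.predict_proba(X_val)[:, 1]
    return {
        "AUC": roc_auc_score(y_val, p_val),
        "Brier": brier_score_loss(y_val, p_val),
        "coefficients": dict(zip(predictors, model.coef_[0])),
    }
```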
Additionally, since one of the goals of the present study was to assess center-related variability regarding the clinical decision to intubate, a mixed-effects multivariable logistic regression was fit as a secondary analysis. We fit a logistic model with a random intercept (for each center that recruited more than 10 patients) to account for possible correlation and differences in the baseline risk of intubation based on practice variation between sites. The proportion of variance explained by all fixed factors is presented as the marginal R² and the proportion of variance explained by the whole model is presented as the conditional R² [21].
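A minimal sketch of this secondary analysis is given below, assuming a `center` identifier in the data; it uses the lme4 and performance packages listed in the Methods, and the fixed effects shown are placeholders.

library(lme4)
library(performance)

# Random intercept per recruiting center to absorb between-site differences in baseline risk
me_fit <- glmer(intubation ~ sofa_nonresp + rox + (1 | center),
                data = dat, family = binomial)

# Marginal R2 (fixed effects only) and conditional R2 (whole model)
r2_nakagawa(me_fit)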
To account for missing data, which occurred in 6% of the observations of interest, we performed multiple imputation based on Markov chain Monte Carlo methods [22]. Specifically, for the regression analysis, we removed subjects with extensive missing data (>50%). Briefly, for every missing value, we created 5 matrices, each one with 1000 imputations. Final imputed values for each missing observation were calculated as the median of all imputations. Imputation of the dependent variable (intubation) was not performed. We used a threshold of 0.05 for statistical significance and all reported tests were two-sided. For statistical analysis, we used the R software (R Foundation for Statistical Computing, Vienna, Austria) with the mice, lme4, caret, OptimalCutpoints, performance, and pROC packages.
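The imputation step could look roughly like the sketch below with the mice package. This shows a standard mice workflow rather than the exact five-matrix, median-pooling scheme described above; the variable selection is an assumption, and, as in the study, the outcome (intubation) is left out of the imputation.

library(mice)

imp_vars <- dat[, c("sofa_nonresp", "rox", "ph", "age")]   # candidate predictors only
imp <- mice(imp_vars, m = 5, maxit = 20, seed = 123, printFlag = FALSE)

# One completed dataset; in practice estimates are usually pooled across the m imputations
completed <- complete(imp, action = 1)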
Results
From March 12 to August 13, 2020, 259 critically ill patients with COVID-19 related acute respiratory failure were initially treated with HFNO and were included in the present study (Fig. 1). Of those, 140 (54.0%) patients were intubated and mechanically ventilated after ICU admission, of whom 74 patients (52.9%) were intubated on the ICU admission day. SOFA and APACHE II scores were higher in patients requiring intubation, while respiratory rate, PaO2/FiO2 ratio, and ROX index were lower (Table 1).
Associated factors and predictive model for intubation
After excluding 3 subjects for extensive missing data, 256 patients were included in the multivariable logistic regression analysis. Baseline non-respiratory SOFA score (OR 1.78; 95% CI 1.41-2.35), ROX index (OR 0.53; 95% CI 0.38-0.72), and pH (OR 0.47; 95% CI 0.24-0.86) were associated with the need for intubation (Table 2). A model including the non-respiratory SOFA, the ROX index and cancer showed the best accuracy in the training dataset (see Additional file 1, Table S1). However, given that cancer was a protective factor for intubation, which probably reflected limitation of treatment escalation, a simpler model including non-respiratory SOFA and the ROX index was selected. In the validation subset, this model had excellent calibration (Brier score of 0.14) and discrimination (AUC of 0.88, 95% CI 0.80-0.96) (see Table 3 and Fig. 2).
Additionally, 216 patients, enrolled in 7 centers with 10 or more cases, were included in a mixed-effects analysis (see Additional file 1, Table S2). Baseline non-respiratory SOFA score and ROX index remained independent predictors of intubation (see Additional file 1, Table S2). Overall, fixed effects explained 63% of the variability of the outcome, while individual centers explained an additional 1% (see Additional file 1, Table S3 and Figure S1). An online calculator to predict the likelihood of intubation given baseline non-respiratory SOFA score and ROX index was developed (see https://desbancar.shinyapps.io/DESBANCAR/).
Out-of-sample model performance using enhanced bootstrapping is shown in the supplementary file ("Further details on statistical analysis," "Results," and "Figure S2").
Discussion
In this multicenter cohort study of 259 critically ill adult patients with COVID-19 initially treated with HFNO, the need for intubation and invasive mechanical ventilation was frequent and occurred in more than 50% of patients. Non-respiratory SOFA and the ROX index were the main predictors of endotracheal intubation.
Unlike previous studies in non-COVID patients [9,23], poor oxygenation at baseline, as measured by PaO2/FiO2, was not a reliable predictor of intubation. While hypoxemia often appears uniform in this population, its mechanisms may be multifactorial and might change over time as the disease progresses [24]. Cressoni et al. described the distinction between anatomic and functional shunt in ARDS, and Gattinoni et al. have recently reported that the ratio of the shunt fraction to the gasless compartment in COVID-19 subjects is often higher than the values found in ARDS [25,26]. Recently, Chiumello et al. highlighted the differential radiologic pattern of COVID-19 patients as compared to non-COVID-19 ARDS [27]. Similar to previous studies in both non-COVID and COVID patients, our study supports that the ROX index, which encompasses information on both oxygenation and respiratory rate, is useful for predicting intubation [12,28]. In the absence of non-pulmonary involvement, a ROX index of 3.5 at admission conferred a 50% chance of intubation, which was 83% sensitive and 89% specific for HFNO failure. Of note, the present study differs from previous reports in the percentage of patients receiving HFNO among the total population of patients with COVID-19 related acute respiratory failure [5,6]. Specifically, the patient population in the present study comprised 24% of the whole database, potentially showing that clinicians seemed keener (compared to previously published reports) to use this non-invasive oxygenation strategy in this patient population. This in turn may also explain the lower PaO2/FiO2 ratios that were often observed [5,6] and, potentially, the lack of impact on the initial decision to switch from HFNO to invasive mechanical ventilation. Although high-quality evidence is needed to assess the effect of HFNO in COVID-19 patients, its use has increased since the start of the pandemic [29]. Moreover, recently published observational data suggest HFNO might increase ventilator-free days and decrease ICU length of stay without incurring excessive mortality [10].
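As a worked example with the figures quoted above (83% sensitivity and 89% specificity for HFNO failure at a ROX index of 3.5), the implied likelihood ratios can be computed directly; the short R snippet below uses only those published numbers.

sens <- 0.83
spec <- 0.89
lr_pos <- sens / (1 - spec)    # positive likelihood ratio, about 7.5
lr_neg <- (1 - sens) / spec    # negative likelihood ratio, about 0.19
c(positive_LR = lr_pos, negative_LR = lr_neg)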
Our parsimonious model to predict intubation among patients with COVID-19 treated with HFNO, which included non-respiratory SOFA and the ROX index, showed excellent discrimination and may be helpful in the decision-making process at the bedside. The model also has a strong clinical rationale. It is plausible that as lung mechanics deteriorated in some patients, respiratory drive increased, making the ROX index a valuable tool to predict HFNO failure. Likewise, pH was often lower and PaCO2 higher in subjects who were later intubated, suggesting fatigue or increased lung injury in failing subjects. Non-respiratory SOFA score was higher in intubated patients, and this was mostly related to hemodynamic impairment. Finally, our mixed-effects analysis showed that most of the variability in the need for invasive mechanical ventilation can be explained by baseline factors at admission, while differential "ICU culture" does not appear to play a major role in this decision. This needs to be analyzed in comparison to previous research showing fairly strong center effects, both in the care of patients with septic shock and in mechanically ventilated critically ill adults [30,31].
Fig. 1 Patient flowchart. Two hundred fifty-nine patients were included and followed up until ICU discharge or death. NIV, non-invasive ventilation; IMV, invasive mechanical ventilation
Our study has several strengths. First, data were collected prospectively in a nationwide project, one of whose main goals was to specifically study the relationship between respiratory treatment and outcome. Second, we were able to derive a parsimonious, potentially easy-to-use model that could aid in the identification of patients who may need intubation while being treated with HFNO. However, we acknowledge some limitations of our findings. First, observational studies, especially multicenter ones such as ours, are prone to misclassification of relevant covariates and potential predictors. Specifically, physiological parameters were collected once daily, and researchers were instructed to collect the most representative data over the study day. Although it is unlikely that researchers disregarded the values obtained during HFNO (since they were likely more abnormal than during mechanical ventilation), we cannot completely rule out that some patients who were intubated on day 1 had their data collected after mechanical ventilation had been started, which represents a potential source of bias in the estimation of the predictive model for HFNO failure. Second, missing data on candidate predictors were present in the final sample, rendering our reported associations subject to information bias and potentially decreasing the precision of our estimates. However, our results remained robust when using multiple imputation.
Conclusions
In conclusion, in this observational study of 259 adult critically ill patients with COVID-19 related acute respiratory failure receiving HFNO, approximately 1 in 2 patients was intubated during the subsequent ICU stay. Oxygenation at baseline was not a good predictor of HFNO failure, while non-respiratory SOFA, pH, and ROX index were independently associated with intubation. Little variation in the decision to intubate was observed across the included centers. Future studies should confirm our findings and evaluate the performance of our model in external cohorts.
Additional file 1: Table S1: Final logistic regression model in the training dataset. Table S2: Logistic regression in 216 patients from 7 centres with at least 10 cases. Table S3: Mixed model, using hospital number as a random variable, in 216 patients from 8 centres with at least 10 cases. Figure S1: Effect of centre in the probability of intubation after HFNO. The vertical line depicts the common intercept. Horizontal bars represent 95% confidence interval for each centre. Figure S2. Histogram depicting the optimism for each of the 500 models derived in the bootstrapped samples and later validated in the whole cohort.
|
v3-fos-license
|
2016-06-17T23:36:34.459Z
|
2014-05-28T00:00:00.000
|
15208283
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpls.2014.00224/pdf",
"pdf_hash": "63abd52a9d542cd01292742319f1fbbf5303187d",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42361",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"sha1": "63abd52a9d542cd01292742319f1fbbf5303187d",
"year": 2014
}
|
pes2o/s2orc
|
Physiological and genomic basis of mechanical-functional trade-off in plant vasculature
Some areas in plant abiotic stress research are not frequently addressed by genomic and molecular tools. One such area is the interplay of gravitational force with the upward capillary pull of water and the mechanical-functional trade-off in plant vasculature. Although frost, drought and flooding stress greatly impact these physiological processes and consequently plant performance, the genomic and molecular basis of this trade-off is only sporadically addressed, and so is its adaptive value. Embolism resistance is an important trait that opposes multiple stresses; it offers scope for critical insight into, and modification of, the input of living cells in the process, and biotechnological intervention here may be of great importance. Vascular plants employ different physiological strategies to cope with embolism, and variation is observed across the kingdom. Genomic resources in this area have started to emerge and open up possibilities for synthesis, validation and utilization of the new knowledge base. This review article assesses the research to date on this issue, discusses new possibilities for bridging plant physiology and genomics, and foresees its implementation in crop science.
INTRODUCTION
A green plant is unique in its hydraulic architecture. Hydraulic conductivity of the xylem is closely linked to the minimum leaf area that it must supply with water and nutrients for survival. Hydraulic conductivity, as quantified by Zimmermann (1974), is generally measured as leaf specific conductivity (flow rate per unit pressure gradient) divided by the leaf area supplied by the xylem pipeline segment. This measure is key to a quick evaluation of pressure gradients within a plant. Modeling the functional and natural architecture of the plant water-flow pipeline takes more traits into consideration than merely the physical attributes of a mechanical pump. The contribution of living cells and, more specifically, of genes and proteins to maintenance of the "green pump" remains largely unaddressed.
Several theories have been proposed to explain the ascent of sap. The operation of the green pump is simple yet elegant and is best described by the Cohesion-Tension Theory (CTT) (Dixon, 1914), which has also been synthesized from the work of many scientists over the last few decades. Besides physical explanations, the living parenchyma cells around the xylem were originally proposed to be of importance by Bose (1923) in his pulsation theory. Later, the living xylem parenchyma cells indeed proved to be of high importance for the continuous ascent of sap.
The major governing factors are the physical properties of the aqueous solution, the means of transport and the xylem anatomy, consideration of all of which makes the "sap conducting system" comparable to basic hydraulic systems such as household pumps and irrigation, or the human blood vasculature. Components of such a system are mainly (i) a driving force, (ii) a pipeline system, (iii) a reservoir and other regulating factors. To establish a soil-water-atmosphere continuum, an uninterrupted "water network" is necessary, which is built in the plant where transpirational evaporation is the driving force (Figure 1A). The evaporation of water from the porous green tissue surface creates a capillary pull in the water menisci (Figure 1Ai), and a curvature is induced in them that is sufficient to support a huge water column against gravity in the stem and root vascular cylinder (Figure 1Aii). The water reservoir is the soil, from which the root draws its supply (Figure 1Aiii). The empirical Jurin law says that a meniscus radius of 0.12 µm can support a column of 120 m (Zimmermann, 1983). The pull creates sub-atmospheric pressure in the xylem vessels. As the height of a plant increases, the water potential drops, and it is expected that leaves, twigs and upper extremities will display a 10-1000 times drop of pressure (Figure 1A, Tyree and Sperry, 1989). Sixty-five percent of the water potential drop occurs in tree trunk xylem, with a 20% contribution from the root and 14% from the leaves (Tyree and Sperry, 1989). This explains why big tree trunks can survive severe localized damage near the base.
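The quoted figure can be checked with a back-of-envelope calculation from Jurin's law, h = 2γ cosθ / (ρ g r); the surface tension, density and contact angle values used below are standard assumptions for water and full wetting, not values from the cited source.

gamma <- 0.0728     # surface tension of water at ~20 degrees C, N/m (assumed)
rho   <- 1000       # density of water, kg/m^3
g     <- 9.81       # gravitational acceleration, m/s^2
r     <- 0.12e-6    # meniscus radius from the text, m

h <- 2 * gamma / (rho * g * r)   # cos(theta) taken as 1 for a fully wetting surface
h                                # about 124 m, consistent with the ~120 m cited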
PLANT ARCHITECTURE AND THE GREEN PUMP
The architecture of a plant is defined by its height, girth, woodiness, root system design and shoot disposition. Such architecture varies across the plant kingdom, and the plants' hydraulic nature varies with it. Secondary thickening is a major player that governs the green pump. It has been shown that root pressure plays little or no part in maintenance of this column in woody plants. Severing the root may not hamper upward movement of water if there is a direct supply to the vessels; however, leaves are necessary. Even the best vacuum pump can pull water no higher than about 10.4 m, whereas a Sequoia tree may have to pull water up to 100 m. However, in the monocots, root pressure is considered to be a major contributor to sap pull.
FIGURE 1 | (A) The soil-plant-air continuum functioning in maintenance of the water transport column. The plant root takes up water from the soil, and the water column is maintained continuous along the xylem. The continuity across the xylem vessel is maintained by several intrinsic physical properties of water, input from the adjoining living cells and the transpirational pull. A rough estimate of pressure along the vascular cylinder is presented in the scale bar (image not to actual scale). (B) A schematic of xylogenesis, adapted and modified from Hertzberg et al., 2001. The two phases of xylem development (primary and secondary) and the tissues involved in the process are shown within respective dotted boxes. The biological processes involved (cell division, expansion, elongation, deposition of cell wall) are shown by black arrows under the corresponding tissue types. The cell wall materials deposited during xylogenesis are also shown under the corresponding tissue types. The order of differentiation may be traced from left to right in the figure, though the actual time frame may differ from species to species.

Considering the physical properties of the green pump, cavitation and embolism are major threats to the water column in the xylem and, subsequently, to survival across the kingdom. To transport water and minerals successfully from soil to leaf, the pressure in xylem conduits needs to remain sub-atmospheric (negative), in contrast to the animal system, where long-distance transport is actively driven under positive pressure. The molecular property of cohesion gives water high tensile strength. Ultrapure water confined to tubes of very small bore requires a tension comparable to the strength needed to break steel columns of the same diameter. Cohesion imparts strength comparable to that of solid wires to a water column. The vice is: once air is introduced into such a system, the column will snap apart. To prevent such snapping, xylem properties play an important role.
PHYSIOLOGY OF XYLOGENESIS: THE BIPHASIC DEVELOPMENT IN XYLEM
The biphasic development of xylem in plants is critical for understanding the hydraulic architecture as well as the air-water-soil continuum (Figure 1B). In the first phase, procambium develops into xylem precursor cells that eventually differentiate into xylem fiber cells, xylem parenchyma, and tracheary elements, consisting of vessels and tracheids. The second phase deposits secondary xylem walls onto the primary xylem walls (Fukuda, 1997; De Boer and Volkov, 2003), derived from the vascular cambium and made of cellulose microfibrils impregnated with lignin, structural proteins, hemicellulose and pectin (Figure 1B; Ye, 2002; Fukuda, 2004; Yokoyama and Nishitani, 2006). Prior to secondary development, the tracheary components elongate and, with the advent of secondary wall deposition, the cellular components in the living tracheid undergo programmed cell death (Fukuda, 2004), leaving only the hollow pipeline (Fukuda, 1997; Zhang et al., 2011) composed of vessels interconnected by pits (De Boer and Volkov, 2003; Choat and Pittermann, 2009). The paired pits are often bordered (Figure 1A): secondary deposition forms two overarching secondary walls, between which a fine pit membrane with small pores persists. Pit membranes are made up of meshes of polysaccharide (Tyree and Zimmermann, 2002; Pérez-Donoso et al., 2010) and allow axial passage of water and small molecules. They also act as a safety barrier against the spread of air seeds (Tyree and Zimmermann, 2002; De Boer and Volkov, 2003; Choat et al., 2008; Pérez-Donoso et al., 2010).
PHYSIOLOGY OF CAVITATION
The negative pressure in the xylem may descend low enough to make the water metastable. To achieve non-disrupted flow in such a system, water must remain liquid below its vapor pressure. This metastable state induces nucleation of vaporization, or cavitation. Cavitation is the introduction of air spaces into the continuous water column, and in this metastable state water readily forms air bubbles. Introduced into a xylem lumen, air cavities rupture the water column and, at worst, block the transport of water and minerals to the leaf. This blockage is known as "embolism" and may lead the plant to a lethal fate.
Cavitation is known to occur frequently in plants. Paradoxically, the occurrence of cavitation is the strongest support for the CTT. It is only natural to observe cavitation if water is under such negative pressure. The root vessels of field-grown, well-watered maize plants have been known to embolize daily and then refill. Vessels that were filled by dawn may embolize at mid-afternoon and by sunset they are again refilled (McCully et al., 1998). When the transpiration rate is high, trees display cavitation even when water scarcity is kept at bay, which means that embolism can well be induced by water stress. Large metaxylem vessels show a higher rate of embolism, and evidence suggests that water stress-induced embolism is the most frequent sort (Tyree and Sperry, 1989). It is a prerequisite for cavitation that some vessels are embolized to start with, a condition met by bubbles introduced into some of the vessels by mechanical damage, herbivory and insect attack.
STRESS-INDUCED EMBOLISM IN PLANTS
Both abiotic and biotic stresses can induce embolism in a plant. Drought and frost-induced embolisms are most prevalent, while mechanical stress and pathogen-induced damage are often the primary inducers.
Desert plants and dry-season crops are most threatened by drought-induced embolism. Air-seeding increases during drought as the sap pressure becomes increasingly negative due to high suction. Evaporation from the leaf surface increases, and the porous conduit wall may release air inside the functional conduits. These air seeds behave as nucleation centers and cause the sap pressure to rise toward atmospheric level. Each bubble is then likely to start an embolism that fills up the diameter of the conduit, as the surrounding water is pulled up by transpiration.
Interconduit pit membranes with nano-scale pores normally restrict the passage of air bubbles from affected to functional conduits, but at a high pressure difference they fail to stop the propagation. The rate of this propagation is important for measuring cavitation resistance in a plant.
Freezing is another cause of embolism, especially in woody temperate species. Freeze-thaw cycles may lead to 100% loss of water transport due to embolism in some species (Scholander et al., 1961). The primary governing factor in damage intensity seems to be the mean diameter of the conduits. Smaller vessel diameters are more vulnerable to damage.
Frost-induced air seeding is caused by segregation of gas by ice. There is a certain amount of salting out of solutes during freezing of the sap, and if the salts are not able to move through the walls, they raise the osmotic pressure of the remaining solution (Sevanto et al., 2012). This embolism can be more severe if functional drought prevails. Freezing-induced embolism is a primary stress in forests where seasonal freeze-thaw is observed. Herbaceous plants, on the other hand, hardly survive freezing and are mostly at threat from drought-induced embolism.
Vascular wilt pathogens can wipe out an entire crop. It is known that vascular pathogens induce water stress in their hosts; but can embolism be a cause of such stress? All vascular wilt pathogens break into the rigid secondary xylem walls to enter the vessels, as well as the pit membranes. Generally, vascular wilt pathogens or their spores and conidia are too large to pass through pit membrane pores (Mollenhauer and Hopkins, 1974; Choat et al., 2003, 2004; Qin et al., 2008). Even when they manage to break into the vessel, the milieu is not friendly. The microenvironment of the xylem pipeline is nutritionally very poor, and the pathogens surviving in the xylem niche are few in number. It is speculated that they prefer this environment to minimize competition. Nevertheless, fungal and bacterial pathogens can extract the small amount of ions and nutrients available in the xylem stream and are able to break through and digest secondary wood to draw nutrition from living cells. In doing so, they weaken the pressurized cell wall, and their infestation within the dead pipeline makes the water stream reactive and prone to cavitation. They may as well block the vessels and pit membranes, occluding parts of the functional conduit network.
There is also an internal mechanical stress associated with the ascent of sap. The high negative tension within the xylem pipeline causes an inward pull. Depending on the sapwood elasticity, there is a daily diameter change of the tree trunk correlated with transpiration and daylight. In Scots pine, Perämäki et al. (2001) described daily changes in the sapwood diameter. The pull exerts pressure on a stem surface element directed toward the center of the stem, and the tracheary structure resists the movement of the surface element. The mechanical strength and composition of the tracheary wall are, hence, important factors in maintaining normal xylem activity, as is the plasticity of pit membrane structure and composition.
VULNERABILITY OF XYLEM TO CAVITATION
Xylem seems to be vulnerable to cavitation in many different ways. This vulnerability can vary depending on the species, the season, and the availability, state and temperature of water. Broadly, the vulnerability of plants to cavitation is often plotted on xylem vulnerability curves, which describe the decline in xylem hydraulic conductivity as a function of increasingly negative xylem pressure. Such declines are typically expressed relative to the maximum decline possible as the Percentage Loss of Conductivity (PLC). Comparisons of the vulnerability to cavitation among species are made using the xylem pressure at 50% loss of conductivity (P50) read from the traditional vulnerability curve (Meinzer and McCulloh, 2013). There remain controversies related to the techniques used for measurement of vulnerability, described in detail elsewhere (McElrone et al., 2012; Cochard et al., 2013; Wheeler et al., 2013).
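To make the PLC/P50 description concrete, the sketch below fits a commonly used sigmoidal vulnerability curve (a Pammenter-type model, which is an assumption here, not a model prescribed by the cited studies) to hypothetical measurements in a data frame `vc` with columns `pressure` (MPa, negative) and `plc` (%).

# Fit PLC = 100 / (1 + exp(a * (P - P50))) and read off P50
fit_vc <- nls(plc ~ 100 / (1 + exp(a * (pressure - p50))),
              data  = vc,
              start = list(a = 1, p50 = -2))

coef(fit_vc)["p50"]   # xylem pressure at 50% loss of conductivity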
The vulnerability curve for a number of tree species, as put forward by Tyree et al. (1999), shows a typical exponential shape, indicating that sub-zero pressure is a direct inducer of cavitation. This makes cavitation a regular process and necessitates a resistance mechanism in plants. It has also been claimed that cavitation is rapidly repaired by a seemingly miraculous mechanism (Holbrook and Zwieniecki, 1999) known as "refilling." We can thus categorize cavitation resistance under two proposed mechanisms: one, removing air bubbles efficiently by refilling; and two, modulating pit membrane properties. The possible genetic controls of both are worthy of discussion.
CAVITATION RESISTANCE BY REFILLING: A QUESTIONABLE TRAIT
The removal of air seeds from the lumen to turn a non-functional vessel functional again is known as refilling. The phenomenon, though widely observed, was recently confronted with serious doubt voiced by plant hydraulics scientists. The long-established experimental procedure that has been followed to measure cavitation has been pronounced faulty (Sperry, 2013). It has been claimed that the standard procedure of xylem hydraulic conductivity measurement, in which the stem is excised under water to avoid air aspiration into the open conduits, is not a valid observation procedure. It has been suggested that in many species a significant amount of cavitation is introduced even when the stem is cut under water. The consequences of this artifact for previous datasets are significant, as it may be reflected in vulnerability-to-cavitation curves obtained across species over a long period of time and may perturb our analysis of refilled vessels.
However debatable the issue may be, recent high-resolution and real-time imaging studies (Holbrook et al., 2001; Windt et al., 2006; Scheenen et al., 2007; Brodersen et al., 2010) also satisfy the requirements of the hypothesis that plants have some kind of resistance strategy to protect themselves from embolism. It has been proposed that plants have an osmotically driven embolism repair mechanism and existing rehydration pathways through the xylem. The mechanisms were predicted to be largely of two types: (i) "novel" refilling, a refilling mechanism without "positive root pressures, even when xylem pressures are still substantially negative"; (ii) root pressure aiding the refilling of vessels by raising the pressure inside vessels to near atmospheric (Salleo et al., 1996; Holbrook and Zwieniecki, 1999; Tyree et al., 1999; Hacke and Sperry, 2003; Stiller et al., 2005). The first type is common among woody dicots, whereas evidence of the second type is common among annual herbaceous species.
GENETIC CONTROL OF REFILLING MECHANISM
The bay laurel, Laurus nobilis, is an aromatic shrub in which the mechanism of refilling is proposed to be linked to starch-to-sugar conversion. Reserve carbohydrate depletion from xylem parenchyma induces phloem unloading in a radial manner via ray parenchyma (Salleo et al., 2009; Nardini et al., 2011). Xylem-phloem solute exchange has been found to occur along both symplastic and apoplastic paths (Van Bel, 1990). It has been hypothesized that solutes might move radially along the ray cell walls, enter the embolized xylem conduits and increase the solute concentration of the residual water within them, thus promoting xylem refilling by altering the osmoticum. The role of xylem parenchyma in refilling is significant. Lianas, shrubs and vine fibers are often observed to have living protoplasts and starch granules (Fahn and Leshem, 1963; Brodersen et al., 2010). Repeated cycles of embolism and repair are correlated with cyclic depletion of starch in the xylem during drought (Salleo et al., 2009; Secchi et al., 2011). Debatably, repeated cycles of embolism formation and repair may disable the refilling mechanism and ultimately lead to carbon starvation (Sala et al., 2010, 2012; McDowell, 2011). How hydrolyzed starch moves out of the xylem is as yet unresolved.
Water-stressed Populus trichocarpa plants revealed an upregulation of ion transporters, aquaporins, and carbon metabolism-related genes (Secchi et al., 2011; Secchi and Zwieniecki, 2012). A putative sucrose-cation co-transporter may aid the refilling process, as suggested by chemical profiling of the vessel lumen. Refilling grapevine petioles show strong upregulation of carbon metabolism and aquaporin expression (Perrone et al., 2012).
A basic assumption is that in dicots, to enhance the refilling trait, one might target carbohydrate-metabolizing genes in a localized manner to improve sucrose release. Sucrose may be used as an osmoticum inside non-functional lumens or as energy currency. Localization of increased aquaporins (PIPs and TIPs) within the axial parenchyma surrounding conduits may prove important. Imaging studies (Brodersen et al., 2010) have now proved that living cells play a central role in refilling embolisms and restoring transport, and in further preventing air seeding and pathogen entry by sealing off conduits with tyloses. Further detailed work is needed to identify the stress signals that mediate crosstalk between xylem vessels and parenchyma.
In monocots, root pressure is the most important mechanism for refilling reported to date. Grasses exhibit root pressure more often, and basal root pressure increases with plant height (Cao et al., 2012). Monocots do not exhibit secondary thickening or ray cells; thus the osmoticum and sucrose transport theory does not apply to them (Andre, 1998). Selection for root pressure in these species solves the embolism repair problem and negates the need for carbohydrate transport along the pathway common in woody angiosperms (Brodersen et al., 2013). However, Stiller et al. (2005) showed the presence of "novel" refilling in rice in the presence of high negative pressure and suggested that in upland or low-rainfall rice this mechanism can operate side by side with positive root pressure. Root pressure may involve a stronger mechanical tissue, and whether or not any trade-off between safety and efficiency is involved is unclear. Study of more vascular-function mutants in monocot crops may help resolve the genes involved in this process.
GENOMIC PERSPECTIVE: GENES, PROTEINS AND MODELS IMPLICATED IN REFILLING
The battle with cavitation is fought either with efficient refilling or with fine structural modulation of the pit membrane and the strength of the vascular cylinder wall. The genomic, transcriptomic and proteomic studies may thus come under two broad sections: the genomic basis of refilling and the genomic basis of mechanical strength (Figure 2A).
GENOMIC BASIS OF REFILLING
The process of refilling, or repair of embolism, requires pumping water into an air-filled cavity. Physically this will require an empty or air-filled vessel, functional neighbor vessels, a source of energy to drive the refilling, and a source of water to refill. In the previous sections, the physical and physiological components of embolism repair have been discussed in detail. However, a reductionist biologist looks further for the possible identities of the molecular candidates that repair the non-functional vessel. It is hypothesized that refilling is a result of an intricate interaction of xylem parenchyma (possibly even phloem), vessel wall chemistry, and the composition and flexibility of pit membranes (Holbrook and Zwieniecki, 1999). The signals that are sensed when embolism occurs, and the cascades that follow the primary signal transduction event, involve interconnected molecular regulators; these have been the subject of several studies. The most recent model of refilling puts forward a role for sugar signaling in embolism sensing and the refilling mechanism, the involved gene families being aquaporins, sucrose transporters and enzymes related to starch breakdown, alpha- and beta-amylase (Secchi and Zwieniecki, 2010).
AQUAPORINS
Aquaporins have been consistently implicated in the refilling process of angiosperms and gymnosperms from the very beginning. The refilling of vessels in Populus trichocarpa is accompanied by selective upregulation of PIPs (Plasma Membrane Intrinsic Proteins). Secchi et al. (2011) proposed that the sensing of embolism and the accomplishment of refilling are mediated by sugar signals, specifically sucrose. According to their proposed model, when a vessel is filled with air, free passage of sucrose to the vessel lumen is hindered, and the sucrose molecules are deposited on the vessel wall. This, with a positive feedback loop, generates a cascade of high starch-to-sucrose conversion (Bucci et al., 2003; Salleo et al., 2004; Regier et al., 2009). The increased sucrose pool would be maintained by upregulation of amylases and sugar transporters. Secchi et al. (2011) showed a distinct upregulation of aquaporins and a sucrose transporter (PtSuc2.1) in air-injected or artificially high-osmoticum-treated vessels. PtSuc2.1 shows high homology to a walnut sucrose transporter which, on upregulation, is able to relieve freeze-thaw-induced embolism (Decourteix et al., 2006). The increased sucrose and the upregulation of aquaporins are correlated spatially and temporally, but connections are difficult to establish. The model hence proposed is schematically represented in Figure 2B. Almeida-Rodriguez et al. (2011) reported a gene expression profile of 33 aquaporins in fine roots of hybrid poplar saplings and compared light- and high-transpiration-induced vascular hydraulic physiology with respect to aquaporin expression. Dynamic changes were observed in the expression pattern of at least 11 aquaporins from poplar, and some of them were localized in the root tissue. In Arabidopsis, Postaire et al. (2010) showed that the hydraulic conductivity of excised rosettes and roots is correlated with the expression of aquaporins. AtPIP1;2, AtPIP2;1, and AtPIP2;6 are the most highly expressed PIP genes in the Arabidopsis rosette (Alexandersson et al., 2005), and under long nights AtPIP1;2 knockout plants lose 21% of hydraulic conductivity in the rosette (Postaire et al., 2010). This disturbed hydraulics phenotype is a genetic dissection of the direct relation between aquaporin expression and plant water transport, although components other than aquaporins may also serve an important role (Sack and Holbrook, 2006; Heinen et al., 2009). It has been shown in the hybrid poplar Populus trichocarpa × deltoides that increasing evaporation from the leaf surface and perturbed hydraulics are correlated with high aquaporin expression (Plavcová et al., 2013). In common grapevine, Vitis vinifera L. (cv Chardonnay), inhibitors of aquaporin-mediated transport greatly affect both leaf hydraulic conductance and stomatal conductance (Pou et al., 2013). Of the 23-28 aquaporin isoforms in grapevine, a subset including VvPIP2;2 and VvTIP1;1 plays an important role during early water stress, while VvPIP2;1, VvPIP2;3 and VvTIP2;1 are highly expressed during recovery (Pou et al., 2013). In maize roots, radial water transport is diurnally regulated by proteins from the PIP2 group (Lopez et al., 2003). It is evident, though, that not all aquaporins participate in the refilling process. The sugar signal initiation is one important component, as originally described by Secchi et al. (2011), and must induce embolism-related aquaporin isoforms.
FIGURE 2 | (A) Monocots often employ root pressure, while dicots employ a novel refilling mechanism and mechanical resistance to resist cavitation. There is no clear demarcation between the strategies employed by the two groups, and the strategies may overlap. (B) The sugar-sensing model of the embolism refilling process, modified from Secchi et al. (2011). For detailed explanation of the model, refer to the text and Secchi et al. (2011). Briefly, when vessels are filled and functional, a default "switch off" mode is active and sucrose is continuously transported from the accompanying xylem parenchyma cells into the vessels. Cavitation induces a "switch on" mode of sensing: when a vessel is filled with air, free passage of sucrose to the vessel lumen is hindered, the sucrose molecules are deposited on the vessel wall and, through a positive feedback loop, a cascade of high starch-to-sucrose conversion is generated (Bucci et al., 2003; Salleo et al., 2004; Regier et al., 2009).

The transcriptomic studies show that a very high number of carbohydrate metabolism-related genes were upregulated during embolism (Secchi et al., 2011). Upregulation of the disaccharide metabolism gene group was observed, along with downregulation of monosaccharide metabolism, which suggests accumulation of a sucrose pool on the vessel wall (Secchi et al., 2011). Further upregulation of ion transporters and downregulation of carbohydrate transporters build up an osmoticum inside the cell to facilitate efflux of water. Figure 2B (inset) shows a summary of the number of gene categories showing differential expression during embolism (Secchi et al., 2011). The energy required for the pumping comes from starch hydrolysis, and one can presume that xylem-specific isoforms of aquaporins, starch synthetase and sucrose transporters will be highly expressed during refilling in plants. For critical evaluation of the model parameters and their feasibility across the plant kingdom, we extracted all aquaporin gene sequences from Arabidopsis and the Arabidopsis homologs of the Populus trichocarpa sucrose transporters and amylases implicated in embolism (Secchi et al., 2009, 2011; Secchi and Zwieniecki, 2010, 2013). The accession numbers of the fetched Arabidopsis genes are presented in Tables 1A,B (gene ID data compiled from Secchi et al., 2011, and the TAIR and Phytozome public databases). We subjected the gene sequences to protein-protein interaction network analysis in the String software at Expasy, without suggested functional neighbors (Szklarczyk et al., 2010). The generated interaction network for the Arabidopsis gene subsets (listed in Table 1) clearly shows three interaction clusters connected to each other (Figure 3); the middle cluster (termed 'a' in Figure 3) shows an evidenced network of PIPs as well as RD28, a dehydration stress-related protein. Two other clusters (b and c in Figure 3) exhibit sucrose transporters and NIPs. Amylases form an unjoined node (d in Figure 3). We further localized the genes in the publicly available Arabidopsis transcriptome analysis database across different tissues and observed shared enrichment in root endodermis, cortex and stele using e-northern (Figure 4A; Toufighi et al., 2005). A co-expression profile (Figure 4B) was obtained using the String software, and the common n-mers present in the genes that may induce co-expression in certain tissues were analyzed using the Promomer tool (Figure 4C; Table 2, Supplementary Table 1; Toufighi et al., 2005). Many of the enriched cis-elements contribute to dehydration and sugar stress. Overall, the genomic, transcriptomic and candidate gene-based data emphasize the high probability of sugar sensing of embolism. Secchi and Zwieniecki (2014) also showed that in hybrid poplar, downregulation of PIP1 delimits the recovery of the plant from water stress-induced embolism, and thus probably manages the vulnerability of xylem under negative pressure in control conditions. The sugar content in the plant tissue strengthens this view further (Secchi and Zwieniecki, 2014).
TRANSCRIPTION FACTORS
The coregulation of sugar metabolism and water transport pathways requires a complex transcriptional switch. Indeed, a large number of transcription factors control the refilling process, and they may regulate the diurnal pattern, the temporal accuracy and the spatial distribution of the pathways involved. The role of TFs is shared; however, a look at the cis-elements of pathway components may elucidate the nature of such sharing. The transcription factors important for xylogenesis and probably embolism are: AP2/EREBP, bZIP, C3HHD-ZIPIII, NAC, MYB, bHLH, WRKY, AP2/ERF, HD, AUX/IAA, ARF, ZF, AP2 and MYC in Arabidopsis; HD-ZIPIII, MYB, MADS and LIM in Populus; MYB and Hap5a in pine; and HRT in Hordeum (Dharmawardhana et al., 2010). With the onset of genomic approaches, much more intensive analyses have been made possible. In a comprehensive genome-wide transcriptome analysis of P. trichocarpa, with snapshots from each elongating internode of a sapling (Internode 1 through Internode 11), a large number of differentially represented transcription factors were obtained (Dharmawardhana et al., 2010). No less than 1800 transcription factors were readily detectable in at least one growth phase, of which 439 are differentially regulated during xylogenesis (Dharmawardhana et al., 2010); some of these are represented in Table 3. Another study identified 588 differentially changed transcripts during shoot organogenesis in Populus (Bao et al., 2009, 2013). While the refilling process is governed mainly by sugar and dehydration signaling, the NAC and MYB TF families remain singularly important in both xylem maturation and lignin biosynthesis. Aspects of xylogenesis that may be linked with the mechanical-functional trade-off of the vascular bundle revolve around lignin. There have been studies on the genomics and transcriptomics of xylogenesis and secondary wood formation; however, the genes responsible for maintaining integrity of the vascular cylinder are not clearly known. Supplementary Table 2 provides a comparative snapshot of selected transcripts and the studies revealing the xylogenesis transcriptome in gymnosperms and angiosperms. Several recent studies address the genomics of xylogenesis excellently; some of them are summarized in Table 4.
CAVITATION RESISTANCE INTRODUCED BY PIT MEMBRANE
The major key to cavitation resistance is pit membrane adaptation. To survive, the ultrastructure of the pit membrane needs to balance minimizing vascular resistance against limiting invasion by pathogens and microbes. While the first is favored by a thin and highly porous membrane, the latter needs a thick membrane and narrower pores. This calls for a trade-off between water transport function and resistance to biotic invasion.
The thickness range of pit membranes in the angiosperms is very broad, almost 70-1900 nm, and so is the diameter of the pores (10-225 nm). Species with thicker pit membranes and smaller pores prevent air seeding and embolism more successfully and thus may represent the group of species with higher drought resistance.
Pit membrane porosity is not the only determinant of air bubble propagation among conduits. The other factor that serves an equally important role is the contact angle between the pit membrane and the air-water interface. This particular property is a direct function of pit membrane composition. The more hydrophobic the membranes, the larger the contact angle and, consequently, the lower the pressure needed for air-seeding. Additionally, high lignin content, though required for mechanical strength, interferes with the hydrogelling of pectins. Pectic substances can swell or shrink in the presence or absence of water and thus control the porosity of the membranes. Polygalacturonase mutants in Arabidopsis showed a higher P50 value (−2.25 MPa), suggesting a role for pectins in vulnerability to cavitation (Tixier et al., 2013). Mechanically stronger pit membranes may thus resist stretching and expansion of the membrane pores, indicating a compromise in function. Water stress has been reported to exhibit a direct relation to low lignin synthesis (Donaldson, 2002; Alvarez et al., 2008), although it is not known whether this low lignin helps water transport.
SUGGESTED GENETIC BASIS OF CAVITATION RESISTANCE BY PIT MEMBRANE MODULATION AND MECHANICAL SUPPORT
Identification of the genes and proteins behind the structural and mechanical controls of pit membrane formation has not progressed as far as it has for the embolism repair mechanism. Genetic aspects of plant hydraulics are little studied, since most xylem studies are done in woody trees and study of herbaceous crops is rather scant. It is hard to obtain mutants in trees, as the generation time is long and the study process is long and laborious. Also, hydraulics in plants is not a simple structural or functional trait but a complex physiological phenomenon. Figuring out the multi-trait control switch of this function is thus difficult.
CAN LIGNIN BIOSYNTHESIS BE CONSIDERED AS A CONTROL SWITCH?
Among the living cell processes that may take an active part in controlling hydraulics, lignin biosynthesis is a major and highly deciphered candidate. Chemically, it is a polymer of phenylpropanoid compounds synthesized through a complex biosynthetic route (Figure 5; Hertzberg et al., 2001; Vanholme et al., 2010). Luckily, the genes of the metabolic grid have been sequenced in plants like Arabidopsis and Populus, which helps in understanding their modulation under stress. To date, both biotic and abiotic stressors have been implicated in the modulation of lignin biosynthesis, as well as seasonal and developmental regulation (Zhong and Ye, 2009). Representing a large share of the non-fossil organic carbon in the biosphere, lignification provides mechanical support and defends the plant against pests and pathogens. The mechanical support, further, is mostly linked to xylem vessels and hydraulics. Lignin is made from monolignols (hydroxycinnamyl alcohols): sinapyl alcohol, coniferyl alcohol and, in smaller quantity, p-coumaryl alcohol. The complex metabolic grid and the transcriptional switches are described in detail elsewhere (Hertzberg et al., 2001). The major metabolic pathway channeling into this grid is the phenylpropanoid pathway through phenylalanine (Phe). Phe, synthesized in the plastid through the shikimic acid biosynthesis pathway, eventually generates p-coumaric acid through the activity of Phenylalanine Ammonia-Lyase (PAL) and Cinnamate 4-Hydroxylase (C4H). p-Coumaric acid empties into the lignin biosynthesis grid to yield three kinds of lignin units: guaiacyl (G), syringyl (S), and p-hydroxyphenyl (H) units. The gymnosperm lignin polymer is composed mainly of G and H units, angiosperms show G and S units, and H is elevated in compressed softwood and grasses (Boerjan et al., 2003).
There are stresses in nature that change plant lignin content. For example, the lignin amount in Picea abies is predicted to correlate positively with annual average temperature (Gindl et al., 2000). Temperate monocots likewise show an increase of lignin in response to increasing temperature (Ford et al., 1979). In Triticum aestivum, a 2 °C chilling stress decreases leaf lignin, but an increase in roots is observed (Olenichenko and Zagoskina, 2005). Curiously, some studies have shown that although no changes in the levels of lignin or its precursors were observed in plants maintained at low temperatures, there was an increase in related enzyme activities as well as an increase in gene expression. Cold acclimatization in Rhododendron shows upregulation of C3H, a cytochrome P450-dependent monooxygenase, without further functional characterization (El Kayal et al., 2006). It has been argued that expression of C3H could result in changes in the composition of lignin, altering the stiffness of the cell wall, albeit without definitive proof. The basal part of maize roots shows a growth reduction and low cell wall plasticity associated with upregulation of two genes of the lignin grid (Fan et al., 2006) in response to drought. The increase of free lignin precursors in the xylem sap and reduced anionic peroxidase activity in maize have been associated with low lignin synthesis in drought (Alvarez et al., 2008). It is possible that reducing lignin may directly affect the vascular tissue, encouraging water transport, lowering air seeding and increasing cavitation resistance; however, it is not known what share of reduced lignin actually accrues to stem vasculature, water column support and pit membrane plasticity.
FIGURE 3 | The protein-protein interaction network of Arabidopsis sucrose transporters, amylases and aquaporins, generated using the String database. Thicker lines indicate stronger interactions (Szklarczyk et al., 2010).
BIOTECHNOLOGICAL MODIFICATION OF LIGNIN METABOLISM
With the advancement of genomic data, it is now possible to map the genetic changes which may influence hydraulic architecture. However, the model systems are questionable. Among the woody plant species, the genome of poplar has been sequenced, and the lignin biosynthesis network is fully characterized in Arabidopsis and rice. It is expected that changes in lignin content may play out differently in herbaceous and woody plants. Controversial results have been obtained so far. In free-standing transgenic poplar trees, a 20-40% reduction in lignin content was associated with increased xylem vulnerability to embolism, shoot dieback and mortality (Voelker et al., 2011). Similarly, severe inhibition of cell wall lignification produced trees with a collapsed xylem phenotype, compromised vascular integrity, reduced hydraulic conductivity and a greater susceptibility to wall failure and cavitation (Coleman et al., 2008). A study of the xylem traits of 316 angiosperm trees in Yunnan and their correlations with climatic factors claimed that wood density and stem hydraulic traits are independent variables (Zhang et al., 2013).
A weak pipeline and reduced lignification compromise vascular integrity, as the above results show. On the other hand, low lignin helps increase the plasticity of the pit membrane pectin. Compromising lignin quantity may thus have a serious impact on the strength of the vascular cylinder; at the same time, it may increase the hydrophilicity of the pit membrane and offer resistance toward cavitation.
Lately, Arabidopsis has been taken up as a model for secondary tissue development, although it lacks formation of secondary wood. Tixier et al. (2013) argued that Arabidopsis might as well be considered a model of xylem hydraulics. They regarded the inflorescence stem of A. thaliana as a model for xylem hydraulics despite its herbaceous habit, as it has been shown previously that the inflorescence stem achieves secondary growth (Altamura et al., 2001; Ko et al., 2004), allows long-distance water transport from the roots to the aerial parts of the plant, and experiences gravity and other mechanical perturbations (Telewski, 2006). There are distinct similarities between woody dicots and Arabidopsis inflorescence stems with respect to vessel length and diameter as well as the presence of simple perforation plates and bordered pits (Hacke et al., 2006; Schweingruber, 2006; Wheeler et al., 2007; Christman and Sperry, 2010). It has the genetic potential to develop ray cells, and rayless wood is observed in juvenile trees (Carlquist, 2009; Dulin and Kirchoff, 2010). Having Arabidopsis as a foolproof model for woodiness may open numerous possibilities. The best among them is the study of environmental stresses on hydraulic characters. A number of mutants with deviant safety vs. efficiency phenotypes can be generated and screened in Arabidopsis with little effort. The Arabidopsis thaliana irregular xylem 4 mutant (irx4), defective in the cinnamoyl-CoA reductase 1 (CCR1) gene, has provided us with valuable insight into the role of lignin reduction and the associated phenotypic changes in the vasculature. As reported by Jones (2001), a near-half decrease of the lignin component with no associated change in cellulose or hemicellulose content gives the plant an aberrant vascular phenotype. Most of the cell interior is filled with expanded cell wall and the xylem vessels collapse. Abnormal lignin gives the cell wall a weak ultrastructure and less structural integrity (Patten et al., 2005). It has later been claimed that, by modulating the CCR gene, the irx4 mutant has obtained a delayed albeit normal lignification program (Laskar et al., 2006). It thus has to be borne in mind that not only the content but also the spatio-temporal pattern of lignin deposition may change the xylem ultrastructure and shift the safety-efficiency trade-off limit.
FIGURE 4 | (A) Localization of the genes from Tables 1, 2 in various Arabidopsis tissues, from public microarray databases and the e-northern tool at Botany Array Resource (Toufighi et al., 2005). (B) Co-expression profile of the genes in Arabidopsis (Szklarczyk et al., 2010). (C) Distribution of relevant n-mers in the promoters of the above genes that may induce shared expression. The results were generated using the String and Promomer tools in Botany Array Resource (Toufighi et al., 2005). A tabulated form of the results is presented in Supplementary Table 5.
TABLE 4 | Representative studies on the genomics of xylogenesis, embolism and lignin biosynthesis (column assignments not reproduced): Li et al., 2013; Secchi et al., 2011; Hertzberg et al., 2001; Carvalho et al., 2013; Zhong et al., 2011; Pesquet et al., 2005; Lu et al., 2005; Li et al., 2012; Schrader et al., 2004; Dharmawardhana et al., 2010; Karpinska et al., 2004; Bao et al., 2009; Rengel et al., 2009; Mishima et al., 2014; Plavcová et al., 2013.

There are a few transcriptional control switches in lignin production which can be used in modification of vascular conductance. Modulation of the coordinated expression of cellulose and lignin in rice is an important study regarding such transgene opportunities. Expression of the Arabidopsis SHN2 gene (Aharoni et al., 2004) under a constitutive promoter in rice alters its lignocellulosic properties along with introduction of drought resistance and enhanced water use efficiency (Karaba, 2007). The Arabidopsis SHINE/WAX INDUCER (SHN/WIN) transcription factor belongs to the AP2/ERF TF family and, besides wax regulation, controls drought tolerance in Arabidopsis (Aharoni et al., 2004; Broun et al., 2004; Kannangara et al., 2007). Expression analysis of cell wall biosynthetic genes and their putative transcriptional regulators shows a coordinated regulation of the cellulose and lignin pathways, which decreases lignin but compensates mechanical strength by increasing cellulose. All the processes ascribed to the master control switch SHN may be directed toward the evolution of land plants, from waxy cover to lignin synthesis for erect disposition and water transport. However, no xylem irregularities are seen in this mutant (Aharoni et al., 2004). As the best-studied pathway related to secondary cell wall formation, lignin biosynthesis should offer the best metabolic grid that can be tweaked in plants to genetically understand the mechanical-functional trade-off and resistance to cavitation. General reduction of PAL (phenylalanine ammonia-lyase, E.C. 4.3.1.5) activities in developing plants may be one possible point of interest. PAL is a "metabolic branch-point" where Phe is directed toward either lignins or proteins (Rubery and Fosket, 1969). However, according to Anterola et al. (1999) and other such studies, there are other pathways originating from pentose phosphate metabolism or glycolysis that may feed directly into lignin biosynthesis, and PAL may not serve as the rate-limiting step at all. Cinnamate 4-hydroxylase (C4H) is another candidate that has been downregulated with a decrease in overall lignin content, however with no effect on vascular integrity or function (Fahrendorf and Dixon, 1993; Nedelkina et al., 1999). p-Coumarate 3-hydroxylase (C3H) in Arabidopsis (CYP98A3) may be a necessary and rate-limiting step in the monolignol pathway (Schoch et al., 2001). Its expression is correlated with the onset of lignification, and a mutant line results in a dwarfed phenotype with reduced lignin (Schoch et al., 2001). Cinnamyl alcohol dehydrogenase (CAD) isoforms act downstream in monolignol formation, and their relation to vascular integrity is yet to be established, though phenotypes associated with their mutations include tall/dwarf stature, altered lignin composition, and reduced mechanical support. Conclusive data are yet to be obtained from these studies.
CONCLUSION
Hydraulic safety margin in a plant is clearly driven by its phylogenetic origin. Conifers have developed minimal hydraulic resistance, which is a necessity for water transport through short unicellular tracheids. The unique torus-margo anatomy of the conifer pit membrane lets them adaptively outperform the multicellular vessels of angiosperms in certain cases. Conifer stems are proposed to have larger hydraulic safety margins than most angiosperm stems (Meinzer et al., 2009; Choat et al., 2012; Johnson et al., 2012), although it is also suggested that they recover poorly from drought-induced embolism (Brodribb et al., 2010). Refilling mechanisms vary greatly between monocots and dicots and between herbaceous and woody plants. Resistance to cavitation is thus closely related to many factors, such as the nature of the mechanical tissue, the vasculature, the height of the plant, its systematic position, its developmental stage, and the stresses the plant must face. It can be further emphasized that, although a trade-off between water transport ability and mechanical strength (efficiency vs. safety) has been observed in certain dicots, the genomic factors that may control this trade-off have not yet been completely identified, and the observation is far from universal. The two major physiological phenomena that seem to be linked to embolism resistance are lignification and solute transport between xylem parenchyma, vessels, and phloem. The genes and proteins behind these physiological traits are many, and even the available transgenic plants and mutants have been only scantily characterized. The effects of the assembly of these components are poorly understood, and the models proposed do not address all plant families universally. Overall, although a phylogenetic trend is observed among plants for the evolutionary establishment of hydraulic safety margins, the underlying mechanisms are not yet understood well enough to predict their molecular basis and evolution at the genomic scale. However, the metabolic pathway best placed to offer advantageous biotechnological outputs appears to be the lignin synthesis network, which should be assessed by mutant screening as well as by tissue-specific overexpression studies. In the case of monocots, drought-induced, root-specific overexpression may be advantageous for generating better crops, as root pressure seems to be the major regulator. Crop biotechnology benefits greatly when the gene pool behind a biological process, and its interactions, are better known. Overexpressing aquaporins along with the sugar-sensing network under a dehydration-responsive promoter could be a formidable strategy to prevent embolism-induced wilting. An approach toward modulating the regulation of the lignin biosynthesis grid may yield better woody, or even herbaceous, crops. The growing knowledge emanating from transcriptomic and genomic studies builds the platform on which biologists can, in the near future, attempt crop modification for such complex traits as vascular integrity and water transport without, or only marginally, limiting other beneficial traits.

ACKNOWLEDGMENTS

support. Arun Lahiri Majumder is a Raja Ramanna Fellow of the Department of Atomic Energy, Government of India. We cordially thank Dr. Harald Keller, Senior Scientist, INRA, France, for his kind permission to reproduce the lignin biosynthetic pathway figure from his publication, appropriately cited. We further thank the reviewers for their valuable comments, which helped us to improve the manuscript.
|
v3-fos-license
|
2018-12-21T01:06:48.649Z
|
2017-01-01T00:00:00.000
|
62797492
|
{
"extfieldsofstudy": [
"Sociology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1080/23311886.2017.1361601",
"pdf_hash": "a05ef5a076d608561c414ee674146a4599ee4d3a",
"pdf_src": "TaylorAndFrancis",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42362",
"s2fieldsofstudy": [
"Political Science"
],
"sha1": "a05ef5a076d608561c414ee674146a4599ee4d3a",
"year": 2017
}
|
pes2o/s2orc
|
Skill formation, cultural policies, and institutional hybridity: Bridging the gap between politics and policies at federal and state levels in Brazil
Abstract This article analyses ideas, institutions and policy designs that aim at facilitating the resolution of social, economic and cultural problems through the direct involvement of civil society organizations and other parties. More specifically, it focuses on two hybrid designs of partnerships in Brazil for the delivery of public policies: the first sealed between the Brazilian government and paulista industrialists; the second between the state of São Paulo government and civil society organizations. From the former stems a set of non-state autonomous entities that, among other functions, provide apprenticeship; from the latter, Social Organizations (OS) are contracted (contratualizadas) to deliver policies of culture. This study concludes that the distinct ideas that opened windows of opportunity to design and implement hybrid partnerships for the delivery of public policies beyond the state led to similar strategies of institutional insulation. The article finishes by discussing possible implications of insulation for Brazilian democracy and suggests some questions for a new research agenda.
PUBLIC INTEREST STATEMENT
Public policies of education and culture are important, especially to enhance a convergent and plural society, based on common ideas and knowledge, and shared symbols and memories. This article discusses how political ideas were converted into policies in Brazil, especially in the skill formation policy-area at the national level, and in cultural policies at the state level. We demonstrate how political thoughts, such as national economic development and values associated with the maintenance of a democratic regime, may be embedded into political institutions. We end the article with the evaluation that the insulation of hybrid models of partnership between government and society does not necessarily result in better democracies.
Introduction
This article seeks to comprehend how ideas have been translated into policies, in Brazil, by comparing two hybrid institutional arrangements that emerged in the 1940s and 1990s to deliver skill formation and cultural policies. In the former policy-area, hybrid engineering took place between public and private sectors; in the latter, between public and non-state public sectors.
Considering the temporal gap, as well as the differences between the cases in relation to their engineering fit (Skocpol, 1992), the article provides historical and economic analyses to depict, in detail, what factors favored the opening of windows of opportunity (Kingdon, 1993) for the emergence of such trajectories. Also, the institutional differences that molded the partnerships' designs call attention to the political environment in which they were formed.
In the case of skill formation, the idea of a hybrid partnership ended up increasing bureaucratic insulation from the beginning (late 1930s), by concentrating decision-making within the realm of the executive (Nunes, 1984). This situation was aggravated on 10 November 1937, when Getúlio Vargas suspended democratic elections to impose a new constitution on the country, inaugurating his dictatorship (Estado Novo, 1937-1945). Moreover, Art.129 of the 1937 constitution stated that industrial firms and employer federations should take over the schools of apprenticeship for their employees' children and associates, while basic education (pre-professional education for the poor) was primarily the duty of the state. The normative assumption behind Art.129 was that a national development model for the Brazilian economy was constrained by the lack of capacity of the federal government to develop a wide national apprenticeship system (Assumpção-Rodrigues, 2013). Therefore, emerging from Vargas' alliance with the industrial patron unions, hybridization in this case delegated responsibility through a set of constitutional rules that aimed at launching the development of a regular and systematic apprenticeship service for workers of a national manufacturing park that had recently been implemented. Furthermore, by relinquishing its authority to industry corporations, the Brazilian state dismantled, from the start, any possibility of introducing effective intermediary skills policies with the cooperation of trade unions.
By the end of the 20th century, however, with the transition to democracy that culminated in the 1990s, a strong movement for Social Organizations (OS) to deliver public policies emerged in the country, reinforcing important aspects of the Brazilian state reform implemented during the first mandate of Fernando Henrique Cardoso (1995-1998). During democratization, the strategy of entering into contracts (contractualization 1 ) between federal and state governments and civil society organizations became an important institutional tool to facilitate the delivery of several public policies that the state, by itself, was not able to provide. In fact, contractualization, a phenomenon inspired by the New Public Management school (Rhodes, 1996), emerged in Brazil as a strategic instrument for the delivery of public policies in a context constrained by hyperinflation, poverty, and socioeconomic inequalities (Romão Netto, 2016).
In sum, considering that policies and institutions are not frozen residue of critical junctures, we have to pay more attention to the kind of problems (issues) that mobilized federal and state levels of the Brazilian government, as well as civil society organizations, in authoritarian and democratic periods. Also, assuming the importance of agenda-setting for decision-making processes, this article makes use of the concept of "window of opportunity" (Kingdon, 1993) associated to specific approaches of the new-institutionalism, such as policy reproduction/change, path-dependence, and the ideational perspective. These procedures may not only facilitate the identification of motives that lead specific issues (and not others) into the public agenda-including the mobilization of governments, citizens, and other actors (like NGOs) but, most importantly, help us draw comparisons between the cases in relation to: (i) the beliefs/ideas that led specific issues into the public agenda; (ii) the way gaps between the "windows of opportunity" and policy implementation were bridged; and (iii) the institutional constraints that supported the implementation, development, and maintenance of both partnerships along time.
Finally, considering the impossibility for controlling policy environment, the existence of a broad space for discretionary behaviour of agents, and the fact that programs do not mean broad public consensus demonstrate that policy uncertainty levels are as diverse as the cognitive limitations of the actors involved. In these terms, the examination of hybrid institutional engineering must depart from the proposition that policy analyses should be considered as a project of social experimentation (Alston, Melo, Pereira, & Mueller, 2009). The intention is to conform a model of investigation for the cases set up in this introduction by focusing on three types of schedules: the systemic (or nongovernmental), the governmental, and that related to decision and policy-making. The article concludes presenting a research agenda for further investigation.
Ideas, windows of opportunity and insulation
Once embedded in institutions, prime ideas may reinforce a process of "social learning" (Béland, 2009; Hall, 1993). This concept assumes that the presence of ideational consents as components of a specific policy process may lead to a reaction to the previous policy, starting a process of learning which occurs when policy makers respond to the failure of a past policy, drawing lessons and incorporating them into a new policy; this leads these experts to a more specialized position with regard to a specific policy area, working with some autonomy from politicians and social pressures (Béland, 2005; Schmidt, 2011). However, from a perspective of ideas as causal beliefs, this view leaves little or no space for the study of policy change; for that reason, some scholars introduced a more political perspective on the social learning process (Skocpol, 1992; Pierson, 1996; Béland, 2005). Hall (1993) has contributed greatly to relating ideas and institutional politics by introducing the concept of "policy paradigms", referring to "a framework of ideas and standards that specifies not only the goals of policy and kind of instruments that can be used to attain them, but also the very nature of the problems they are meant to be addressing" (1993, p. 279). In this regard, Kingdon's concept of "agenda-setting" helps to bridge and bond ideas and institutions in historical learning processes, since the policy agenda refers to problems that policy-makers perceive as significant at a specific moment in time, and it is usually attached to a public agenda, which generally refers to the interaction between public opinion and relevant issues in the media (John, 1995). Thus, considering a broader discussion on the relations between ideas and institutions in policy-making, and accepting Kingdon's argument that "agenda" (a cluster of topics nominated as the pressing problems in a specific moment) and "alternatives" (the policy options available to solve the appointed problems) are products of three autonomous streams (problem, policy, and political streams) through which social and political actors are mobilized, according to specific issues or policy options, we assume that political actors frame their alternatives in specific ideational paradigms.
In this sense, ideological frames appear in the public pronouncements of the policy-makers and their staff, like speeches, press releases, interviews etc. (Campbell, 1998), and these frames also appear at the legislative activity of the elected members of the parliament in their formal speeches and legislative activities as the proposition of new legislation. Following Beland (2005, p. 12), "the ability to frame a policy programme in a politically-and culturally-acceptable and desirable manner is a key factor that can help explain why some policy alternatives triumph over others and why elected officials decide to 'do something' in the first place." Finally, the important notion to be addressed in this section relates to the bureaucratic insulation of public organizations in Brazil. To put it into perspective, it is worth noting that this concept (bureaucratic insulation) aimed at disaggregating the older conceptions of "two Brazils" in its various forms (such as developed vs. underdeveloped; urban vs. rural; modern vs. traditional). It resulted from Edson Nunes' analysis (1984), which looked at the problem of the articulation of capitalism in a state-building context from a political perspective. This approach proposed a new interpretative framework for understanding the relations between formal political institutions and society in contemporary Brazil, claiming that at least four different patterns of state-society links account for the articulation of the Brazilian society with politics and economy. From this perspective, sets of possible relations between mode of production, patterns of social action, and formal political institutions are bureaucratic insulation, clientelism, corporatism, and procedural universalism. Of these four institutionalized patterns of relationships, only the procedural universalism clearly reflects the logic of the modern capitalist market. The other grammars (bureaucratic insulation, clientelism, and corporatism) have characterized periods of most noteworthy tensions in contemporary Brazil, in which the "balance" among the patterns of relationships has been compromised by governments that place excessive emphasis on one or two particular grammars, in order to do politics.
Several scholars have emphasized that, in contexts where ingredients like the subjection to the law and/or to public interest are less likely to occur, "particularistic practices-such as clientelism and corporatism-are more likely to emerge" (DaMatta, 1987, 1995; Mainwaring, 1991; Nunes, 1984; O'Donnell, 2007; Schmitter, 1971). Taking into account the policy-areas studied in this article, one may consider that, from this situation, some sort of paradox may be brought about. On the one hand, the insulation of key agencies' bureaucrats (from clientelism, for instance), in the case of skill formation, may have helped to secure the necessary resources to maintain the incentive structure needed for agencies to attempt to carry out the president's promises. In these terms, insulation did not imply failure to respond to popular demands; quite the opposite (Geddes, 1994). On the other hand, the insulation of administrative agencies (combined with clientelism, for example) could also be interpreted as posing important problems to democracy, since it creates the possibility that unelected officials can not only decisively impact policy-making (Dunn, 1999) but, most importantly, enjoy substantial advantages over elected officials and civic organizations, in terms of information and capabilities. In these terms-and considering that insulated bureaucratic agencies may also have "their own" agendas rooted in organizational needs or professional habits and discourse-they may potentially disregard public preferences (Fung, 2006, p. 679). In any case, these aspects became important obstacles for the reinforcement of a more accountable political order in the country.
In this sense, it is especially intriguing that bureaucratic insulation, as a specific grammar, helped to organize not only the Brazilian state-building process of the 1930s, but also the relations between formal political institutions and society during the democratization of the 1990s. As we will attempt to demonstrate, one difference between the two cases studied here (the skill formation and cultural policy-areas) refers to the degree of bureaucratic insulation: extreme in the former case, and moderate in the latter.
Research design and methods
Ideational research focuses its analysis on institutional change from an agency-centred perspective. In this sense, what change occurs, and how, is a result of people's choices in response to the circumstances they face in their ordinary life. Politics arises when several such agents interact with each other and/or with specific institutions. Thus, change may be fast or slow, radical or incremental, and it mainly results from people's choices. As Béland and Cox (2011, p. 11) state, "the unique claim of ideational scholars is that these choices are shaped by the ideas people hold and debate with others. These ideas, in turn, are based on interpretations people have of the world and of those around them. There is a material reality, but it lends itself to many interpretations that open endless options for human agency. For this reason, the outcomes of any process of change are contingent. They are not predetermined and cannot be predicted".
Ideas and preferences, when aggregated and captured by a representative, a government, a coalition or a political party are shaped into specific legislative processes and take diverse institutionalized formats in the sense to be orally communicated, such as speeches, Propositions, Laws, Acts, Constitutions etc. Once these ideas are formally expressed, it is possible to specifically identify and categorize the intentional connection of these inputs with an output (a policy) or a throughput (a managerial process). In this sense, the legislative process may be perceived as an "institutional talk", representing ideas, social interactions, the diversity of identities and the institutional incentives itself (Heritage & Clayman, 2011).
As an exploratory investigation, and aiming to identify the ideas embedded in this institutionalized dialogue, we have applied "qualitative content analysis", which consists of a deductive category of analysis whose goal is to validate a conceptual framework or theory by evidencing specific ideas. Thus, after identifying all legislation (national and regional) related to our objects of study, we selected descriptive evidence to demonstrate the argument by quoting key excerpts from the data (Curtis et al., 2001; Hsieh & Shannon, 2005). As an exploratory study, the main intention is not to elaborate a "cognitive map" from a categorization of the evidence, but rather to highlight important ideational aspects in different arguments that may illuminate how these initial ideas conducted a process of "institutional learning", promoting the adaptation of specific ideas into institutional constraints, and focusing on how the stiffening of ideas into norms may lead to the insulation of some public organizations through the institutional adaptation of their governance arrangements.
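To make the deductive coding step concrete, a minimal organizational sketch in Python is given below. It is purely illustrative: the category labels, keywords, excerpt texts, and function names are hypothetical choices of ours, not the coding scheme actually used in this study, which relied on qualitative reading of the legislation.

from collections import defaultdict

# Hypothetical deductive categories and the keywords assumed to signal them.
CATEGORIES = {
    "national_economic_development": ["industrial", "development", "production"],
    "democratic_values": ["participation", "citizens", "democracy"],
}

def code_excerpt(excerpt):
    """Return every category whose keywords appear in the excerpt (case-insensitive)."""
    text = excerpt.lower()
    return [cat for cat, keywords in CATEGORIES.items() if any(kw in text for kw in keywords)]

def group_by_category(excerpts):
    """Group legislation identifiers by the ideational categories their excerpts evidence."""
    grouped = defaultdict(list)
    for law_id, excerpt in excerpts.items():
        for cat in code_excerpt(excerpt):
            grouped[cat].append(law_id)
    return dict(grouped)

# Hypothetical excerpts standing in for quoted passages of decrees and laws.
sample = {
    "decree-law A": "creates an industrial apprenticeship service to support national production",
    "law B": "qualifies civil society organizations to deliver services with citizens' participation",
}
print(group_by_category(sample))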
Skill formation policies & institutions in Brazil
The answer to the question where the Brazilian skill institutions come from must depart from the fact that Brazil was the last country in the world to abolish slavery, in 1888. In contrast to the cases of Germany and Japan-where the state policy actively organized the artisanal sector, strengthening the role of unions-or of the United States and Britain-where traditional corporate associations had been destroyed through liberalization (Thelen, 2007, p. 279)-the coalitional alignment among the three key groups (employers in skill-intensive industries, traditional artisans, and early trade unions) was absent in the nineteenth century Brazil. In this case, artisans (and emergent trade unions) were deliberately excluded from decision-making, especially during their formative years. Academically oriented education, in turn, was designed mostly "to fulfill the expectations of upper-class youth" (Teixeira, 1968, p. 50), "to train personnel for the governmental bureaucracy" (Silva, 1977, p. 3), or "to train doctors, engineers, and lawyers to serve the upper class" (Ribeiro, 1962, p. 11). All this increased the gap between the wealthy and the poor, at the same time it facilitated the insulation of the welfare within the hands of the haves.
In fact, the Brazilian industrial training policy gained momentum only in the 1940s, when Roberto Simonsen-a heavyweight participant in the design of the Estado Novo (1937-1945) economic policies, who considered vocational training central to the overall process of economic development-came up with the idea of creating skill formation policies and institutions to face the transition from the country's heirloom of the slavery-based economy into a skill-intense industrial system. Thus, on 30 January 1934, as a deputy representing the industrialists within the national constituent assembly, Simonsen delivered a speech stating that: The economic order should be organized according to principles of justice and with the aim of establishing a standard of living compatible with human dignity (Simonsen, 1934, p. 9).
Moreover, conceding that the Brazilian state could stimulate and defend production, [and] protect labour, the deputy specifically deplored the lack of vocational training among Brazilian workers, citing his own experience in this regard: In my work as an engineer, I have regretfully verified that the most productive and best-paid positions, that is, those for skilled workers, are mainly occupied by foreign labourers, with national workers relegated to performing the heaviest and most thankless tasks due to their ignorance of specialized trades (Simonsen, 1934, p. 27).
One year before Simonsen's speech in parliament (1933), Getúlio Vargas had already delivered his own speech to declare that the education which we need to develop to the extreme limits of our possibilities is the vocational and technical kind. Without it, organized work is impossible, especially in an age characterized by the predominance of the machine (Vargas, 1938, Vol.1: p. 25, Vol.2: pp. 121-122). Therefore, both Vargas' and Simonsen's political discourses advocated the idea that the Brazilian industrialization process required a well-prepared workforce with skilled professionals, but not, necessarily, under a democratic regime.
In 1937, when the State of São Paulo's Industries Federation-FIESP membership elected Roberto Simonsen president (and of the National Confederation of Industry-CNI), Vargas cancelled the upcoming elections to declare himself dictator of a New State (Estado Novo). Soon after, Simonsen moved into Vargas camp, and his growing association with the Executive opened a window of opportunity for the creation of a partnership between federal government and the paulista industrialists in the skill formation policy-area. This window, however, did not include the industry employees in the design of the new policy. In fact, as we have seen, the emergence of Vargas' authoritarian regime aggravated the insulation (Nunes, 1984) of labour unions 2 from policy-making, especially with the imposition of a new constitution on the country. On this matter, as mentioned, Art.129 of the 1937 constitution 3 stated, for instance, that industrial firms and employer federations should take over the schools of apprenticeship for their employees' children and associates, while basic education (pre-professional education) for the poor was primarily the duty of the state.
In order to make that article effective, Gustavo Capanema, Vargas' minister of Education and Health, after some advances and setbacks with the CNI and FIESP, formed an inter-ministerial commission to issue a decree-law 4 based on the argument that the federal government could not afford to implement skill formation policies without the industrialists' economic support.
Nevertheless, annoyed at the government's formulation of such an important decree without prior consultation with FIESP, Simonsen stated that both employers′ confederations (CNI and FIESP) would simply bypass the decree-law n.1.238 if the burden for funding vocational training was not shared by the state, workers, and industrialists. Moreover, in the name of FIESP, he also called for funding of training schools only in industrialized areas, in which the skills taught would be only those in greatest demand (Carone, 1977, pp. 273-284). Also, assuring the commission that all industrialists recognize the necessity for and the advantages of expanding vocational instruction, Simonsen argued that expanded vocational education would be useless unless incoming workers had better basic skills (FIESP, 1940). The inter-ministerial commission, in turn, was highly responsive to Simonsen's criticisms on behalf of FIESP 5 .
In late 1939, the commission presented to the ministries of Education and Labour a report that not only dismissed the distinction mentioned in decree-law n.1.238 between large and small industries, and called for tripartite funding (government, employers, and workers), but, most importantly, endorsed the FIESP view that only 10 to 15% of the industrial workforce performed tasks that required an extended period of apprenticeship. Most industrial workers, it noted, were manipulators, unskilled or semiskilled operatives who performed repetitive and easily mastered tasks (FGV-CPDOC, 1939).
While the industrialists' representatives were vociferous and systematic in their criticisms of the Vargas experts responsible for designing decree-law n.1.238, 6 the subservient character of most labour unions was demonstrated by their lack of contestation of the definitions of skill or appropriate instruction formulated by the government technicians.
Despite the fact that this distinction between skilled and semiskilled workers served only to define more clearly the parameters of apprenticeship, the question raised by the commission was what to do with about the 85% of the workforce that would not receive systematic vocational instruction? What would be the parameters of creating a vocational education policy, in late 1939, for an increasingly mechanized industrial sector in a society where the average urban worker still had less than two years of schooling in contrast to an average of eight or nine years in the United States and Germany? (Weinstein, 1997, p. 94).
In the report presented to the ministries of Education and Labour, the commission mentioned that reading, writing, and elementary arithmetic would be useful instruments to such workers. As for who would provide this instruction, the commission agreed with Simonsen that primary responsibility should lie with the federal government. Employers would be expected to provide the minimal manual instruction necessary for the performance of semiskilled tasks. Beyond that, at most, employers should be encouraged to offer literacy courses or retraining programs on a voluntary basis.
In relation to the 10-15% of skilled workers, the commission presented three recommendations. First, that apprentices would study at existing professional schools and complete their education with a six-month internship in industrial enterprises. Second, it also suggested a procedure that involved rational selection of primary school graduates who would enter factories as apprentices. Third, where concentrations of factories requiring similar skills existed, a common apprenticeship centre would be set up to serve firms in the area, with regional councils overseeing both internship and apprenticeship courses.
While the inter-ministerial commission prepared a proposal of a decree-law, which included the creation of an organization to implement the new vocational education and training policy, the direct intervention of the industrialists in decision-making came, again, from Simonsen.
Only this time, expressing his general support for the commission's proposal, a FIESP report suggested specific modifications to improve the interaction between industry and training centres, including the expansion of the industrialists' administrative control, the reduction of the role of federal officials in management, and the elimination of workers' participation in the skill formation policy-area. Also, in somewhat of a turnaround, Simonsen proposed that employers should assume full responsibility for funding the new training program, even though this would constitute an onerous burden for the industrialist class (FGV-CPDOC, 1941).
In fact, the reason why the industry leaders decided to accept the levy scheme may be related to the argument that such mechanisms tend to be more easily accepted by employers if they are targeted (sectorial or regionally), rather than universal, and if the levy is managed either locally or by corporatist federations (Smith & Billet, 2005). This argument fits in with the VET funding and financing models implemented in Brazil: it is sectorial and regionally targeted and managed by employers' corporations. Such scheme has worked for the last seventy years as an alternative to a national (centralized) funding model, ensuring a reliable budget that is independent of public resources and guaranteeing to the most industrialized areas of the country (especially the Southeast region) that they were the greatest beneficiaries of the program.
Nearly all conclusions of Simonsen's report were incorporated into a proposal of a decree-law presented by the inter-ministerial commission in December 1941, which included the creation of an organization to implement the new vocational training policy in the country, and of a levy scheme to fund it.
Thus, on 22 January 1942, the Executive issued decree-law n.4.048 7 to create the National Industrial Apprenticeship Service-SENAI. According to it, all industrial companies should pay a compulsory contribution of 2.000 réis per employee per month (Art.4, §1), in order to finance the new VET institution. 8 Conversely, on 30 January 1942, Vargas also signed the Organic Law of Industrial Training (decree-law n.4.073/1942), 9 which brought the new institution (SENAI) under the umbrella of the Ministry of Education.
Therefore, while the SENAI decree-law (n.4.048/1942) was a call to action with a levy scheme to fund it, with the Organic Law (decree-law n.4.073/1942) the Executive made clear that the private sector was responsible for providing VET for the workforce outside the regular education system provided by the state. Moreover, by later including the Apprenticeship Law (decree-law n.4.481/1942 10 ) in the legal landmark on which SENAI's activities are based, the federal government made it mandatory for industrial firms to employ 5% of all their employees as apprentices (from 14 years of age), enrolling them in one of SENAI's VET courses.
Thus, the creation of the Brazilian vocational education and training system by decrees implied an important concession from Vargas' dictatorship to the industrialists: the emergence of a decentralized structure, in opposition to the Estado Novo centralism. It also revealed how the industrialists intervened in the formulation of social legislation under the Estado Novo by excluding workers and labour unions from decision-making. Since unions were not considered a key pillar of social partnership, the creation of SENAI allowed the industrialists to take credit for an initiative widely regarded as serving the nation's interests, while eliminating any formal participation by labour unions in the training process.
In these terms, SENAI's foundation was a clear-cut victory for industrialists. It created a training program specifically geared to the needs and interests of industry, virtually free of state intervention.
Therefore, the answer to the question where skill institutions come from in Brazil relies precisely on a combination of the authoritarian state's capacity for coercion with the private sector's preference for autonomy.
Skill formation institutional evolution
In order to address the question how skill institutions have evolved in Brazil, it is worth ascertaining the degree of their institutional continuity (and change) over the last decades.
We have seen that the creation of skill institutions took place under Vargas' dictatorship, which ended with the Second World War. At that time, not only were industry employees absent from the design of the vocational training policy, but unions were also not considered a key pillar of social partnership. For that reason, the skill formation system was not shaped by the way workers defined their interests.
During the military dictatorship , however, the system gained more strength-which increased with the new democratic regime (1985)-and the political economy of skills of that period promoted both institutional continuity and change. On the one hand, the coalitional alignment that supported it over the years (which included industry employers and their confederation) did not really promote significant institutional changes, in terms of the functions of the skill formation system. However, on the other hand, the 1966 public administration reform drove institutional changes in the way the levy scheme was being managed, when the military created the National Institute for Social Security (INPS).
In fact, the new Institute represented an important adaptation to changes in the political and economic environment in which the military government was embedded, as its creation aimed-among other functions-to control, manage, and allocate funds collected by the payroll levy. Therefore, from 1966 to 1990-when the National Institute for Social Security (INSS) was created (by decree-law n.99/1990)-the INPS helped the military to oversee, distribute and spend the financial resources of the system. From another perspective, the idea of transforming Brazil into an industrial power led the military to regard skill formation as an indispensable policy for promoting technological innovation. In this sense, as an attempt to reinforce it, the federal government decided to implement, on 11 August 1971, a new strategy (Law n.5.692, art.5/6) to make professional training a compulsory part of secondary education. However, pressures for a more wide-ranging education focused instead on university entrance examinations (vestibular) led the federal government to issue a decree-law (n.7.004) in 1982, which represented a step backward in the attempt to integrate vocational training within the Brazilian educational system. In these terms, the institutional arrangement of the skill formation system turned out to be remarkably insulated and resilient in the face of the changes that the military aimed to introduce. 11 The democratization of the political regime, in turn, brought about important debates related to skill institutions and policies, though the framework of the system remained the same. On the first occasion, deputies of the Constituent National Assembly, while discussing what came to be art.149 of the new Constitution, 12 attempted, in 1987, to transform the scheme of levy on payroll into a levy on firms' invoicing. Industry leaders and their federations, facing the threat of losing a reliable budget independent of public resources (the levy on payroll), reacted almost immediately, collecting 1.6 million signatures to reverse the content of the piece of legislation that proposed a new collection method. As a result of the Constituent Assembly, the institutional reproduction of the system was, again, preserved, as it continued to be considered (and managed) as a private organization. Only this time, the bureaucratic commitment of the system's original founders gave way to more intense political disputes over the financial resources that came from the levy scheme (Cunha, 2000).
Just after the promulgation of the 1988 Constitution, the Central Workers Union Confederation (CUT), the most important Brazilian group of trade unions, engaged in the congressional debate over the design of a new education law (LDB). In relation to the skill formation system, CUT claimed, in 1989, that all levy funds for vocational training should be treated as public money and, as such, should be managed with full participation of workers. "Today," CUT's document stated, "we have an unsustainable situation in which the 1% payroll levy imposed on all industrial enterprises is administered by private organizations. These resources are public and should be managed as such" (1st ABC Metalworkers Congress, 1989, cited in Cunha, 2000). This same proposal was presented again in 1992 on the lower Chamber's floor, as an attempt to include it in the text of the new education law (LDB). In 1996, it was approved with no reference to the vocational education and training issue. In these terms, skill formation remained untouched and bureaucratically insulated.
However, Fernando Collor's administration reform of 1990 brought with it a significant change in the way the system's levy scheme was managed. By merging the National Institute for Social Welfare ( In any case, the argument that funds collected by the system's levy scheme are public (and should be managed as such) is based on the fact that they have been managed by public institutions, since 1966-in spite of the fact that its financial resources have been used (in terms of distribution and spending) in an insulated mode (see Table 1).
Cultural policies & institutions in São Paulo-historical evolution
What ideas facilitated the hybridization between NGOs and the state of São Paulo government to deliver policies of culture during the 1990s? What institutional mechanisms have contributed to the maintenance of such a hybrid model of management (contractualization) over time? These are important questions to be answered, especially in a country where culture has not been considered an object of consensus by the political elite, in spite of the fact that actions in this policy-area date back to the arrival of the Portuguese Court in Brazil, in 1808. Unlike other policy-areas, such as public security, education, and health, whose policy relevance has been widely recognized, culture has not received a systematic or continuous treatment over time. For that reason, this segment has been traditionally characterized by its permanent struggle for funding and for institutions able to ensure a minimal development of its programs.
Indeed, experiences of recent democratic administrations evidence that actions within this policyarea are usually deemed by public officers as volatile or transitory. One exception in this panorama took place between the 1930s and the 1960s, when an intense institutionalization of state-controlled culture bodies was witnessed (Miceli, 1984).
Table 1. Ideas, decrees, and laws in the skill formation policy-area in Brazil, 1933-2011
Source: Authors.
Act | Abstract
Getúlio Vargas' speech, 1933 | Vocational training organizing industrial labour
Roberto Simonsen's speech, 1934 | Industrial production with labour protection

The conditions for the private sector to participate in cultural investments emerged in Brazil only with the transition to democracy, with the creation of the Ministry of Culture-MinC (91.144/85), 14 in 1985, and of the Sarney Act (7.505/86), 15 in 1986. Nevertheless, during the Fernando Collor administration (1990-1992), a minimalist vision of the state prevailed, and MinC was downsized to a Secretariat of the President's Office; with it, again, public funds decreased enormously. A relative recovery of this situation occurred when funding systems were modernized, first with the Rouanet Act (8.313/91), 16 during the Collor administration, and then with the Audiovisual Act (8.685/93), 17 during the Itamar Franco administration (1992-1995). Both laws focused on the production of movies and audiovisual content, which had been reduced to nearly zero when Embrafilme was closed down (Moisés & Botelho, 1997; Brasil Ministério da Cultura, 2007). They became the cornerstones for the resumption of cultural activities. Other improvements in this policy-area also took place during the Lula administrations (2003-2011), with increasing state investment producing public policies that strengthened the channels for cultural activities of minorities (Rubim, 2007).
In fact, the idea of the Brazilian "State Reform" of the 1990s subordinated cultural policies and institutions to a managerial logic about the organization of the state. Implemented by an administrative reform performed by the Ministry of Federal Administration and State Reform-MARE (Bresser Pereira, 1998), the reform had economic, political, fiscal, and managerial intentions to increase public careers' and officers' professional qualifications, as well as the efficiency of public resources' administration. Based on the argument that Brazil was in an enormous fiscal, administrative and political crisis, and that a reform was necessary to enhance the democratization process (Bresser Pereira, 1998; Romão Netto, 2016), several strategies were carried out: the creation of new civil servant careers; the privatization of economic activities managed by the market (like telephone services); the establishment of regulatory agencies; and the public planning and funding of social and scientific services which, not being exclusive to the state, were contractualized with civil society (Bresser Pereira, 1998). In this context, not-for-profit foundations and NGOs that were qualified by the federal state as Social Organizations, in 1998, became responsible not only for performing activities that were considered socially relevant (Law 9.637/98), 18 but also for the competences of civil servants of extinguished public institutions. These Social Organizations operate with public funds, according to the Management Agreement's terms (Costin, 2005; Oliveira & Romão, 2006), which are designed not only to assist the administrative and tax organization of the state, but most importantly as an attempt to make public policies responsible, efficient, and more effective (Bresser Pereira, 1998; Bresser Pereira & Grau, 1999; Romão Netto, 2016). 19 Moreover, since each state of the Brazilian federation can regulate such Management Agreements according to its own autonomous law processes, the state of São Paulo government, almost in a parallel historic movement, enacted Law n.846/98, 20 qualifying its first Social Organization in the health policy-area. In the cultural field, in turn, contractualization started in the 2000s, when the first partnership between the state government and not-for-profit civil organizations (Social Organizations of Culture) took place within the realm of the São Paulo State Secretariat of Culture.
According to Costin (2005, p. 114), by adopting a previously tested hybrid management model in the cultural policy-area, São Paulo innovated by recognizing its potential to provide administrative reasonability, and by considering this instrument an alternative to "bureaucratic chains of direct administration bodies," which become a "serious problem when you deal with activities requiring the creativity, flexibility and promptness like the artistic activities." In this sense, Social Organizations helped, for example, the Secretariat of Culture of São Paulo to eliminate, in 2004, the precarization of the work of 4,500 employees, ensuring them stable labour relations in terms of formal contracts and security benefits, according to the norms of the Ministry of Labour. 21 Under these circumstances, the claim that hybridization was responsible for limiting expenses with personnel and for establishing criteria to manage the public debt became robust. With it, flexibility in contracting qualified personnel (like musicians), and the simplicity that this model provided for dealing with the huge state bureaucracy, became other favourable arguments that reinforced the model (Costin, 2005; Romão Netto, 2015). Also, considering hybridization as a managerial idea, one could argue that the budget transfers between Social Organizations 22 illustrate the fact that the paulista local government has supported such a hybrid model of partnership all these years because the transaction costs of re-assimilating new alternative models, and/or restructuring a whole group of professionals (human resources), would be much higher than keeping it. Perhaps for that reason, from 2004 to 2015, the number of Social Organizations rose from 2 (2004) to 27 (2015) and, with it, the budget distributed annually by the Secretariat of Culture to Social Organizations increased tremendously: from 1.8% of the total annual budget of the Secretariat, in 2004 (US$517,000), to 80%, in 2014 (US$147 million). 23 From the same perspective, others could argue that such a hybrid model has been supported by a state of the Brazilian federation that has been governed, since 1994, not only by the same party, the PSDB (Brazilian Party of Social Democracy), but also, since 2001, by the same governor (Geraldo Alckmin). However, considering the fact that the PSDB is a centre-right party, others could also argue that, if the State of São Paulo were governed by a party located on the left side of the ideological spectrum, the support for this hybrid model of partnerships would not exist.
Indeed, in 1998, during the Cardoso administration, the Workers' Party and the Democratic Labour Party (PDT) introduced a petition (ADIN n.1.923/DF) in the Brazilian Supreme Court arguing for the unconstitutionality of the federal law that created the Social Organizations (n.9.637/1998). The core of the argument presented was that, by contracting Social Organizations, the state was not only restricting citizens' participation, but most importantly ensuring the private sector the right to provide non-exclusive public services. As a result of this situation, the parties argued, the state was privatizing public services and masking a situation of non-compliance in relation to workers' labour rights. The vote of the Supreme Court against the petition came only in 2015. It stated that governments had the right to transfer the operation of non-exclusive policies through the proper public bidding procedures. 24 Meanwhile, in 2002, the Workers' Party won the presidential election, as well as elections in important municipalities (like São Paulo) and states (including Minas Gerais and Bahia), and, instead of deconstructing the model of hybridization, PT supported and improved it in many ways. For example, during the mandates of governor Jacques Wagner (2007-2015), PT implemented the model of hybrid partnership in the cultural policy-area in the state of Bahia; from 2002 to 2016, governments qualified diverse new Social Organizations, both at sub-national and national levels, 25 and, in 2013, the Brazilian Association of the Social Organizations for Culture-ABRAOSC was created. 26 Nevertheless, this model of management also created a "managerial mirror" through the insulation of the Social Organizations. As noted below, civil organizations qualified as Social Organizations became more of an appendix of the public structure of governance; from the possibility of the Managerial Contracts in 1998 (even though in the cultural field they were established only in 2004), the efforts of the governmental bureaucracy were directed at making the civil organizations qualified as Social Organizations more "state-like", as well as at reorganizing the governmental structure of the Secretariat of Culture to absorb and institutionalize managerial ideas and tools, aiming to construct a political discourse of "modernization" of the state. The last adjustment was made due to a judicial process that prevented the re-contracting of the OS Pensarte Institute, which led the Accounting Court of the State of São Paulo to investigate not only the suspicions surrounding the public call to recontract the OS, but also the mechanisms through which, without another public bidding, the resources to manage the policies delivered by the Pensarte Institute were transferred to another contracted Social Organization, the Sisters Marcelina. The Secretariat's solution for the future, as seen below, was to publish a law permitting this financial movement once the Secretary publishes an authorization (see Table 2).
Final remarks
This paper has demonstrated that ideas related to economic development and political democracy forged hybrid designs of partnerships to deliver public policies beyond the Brazilian state.
In the case of skill formation, hybridity promoted an extreme insulation of vocational education and training policies and institutions, not only because industrialists would rather take sole responsibility for training a portion of the skilled labour force, but especially because they did not consider the democratic outcome in their equation. Moreover, the insulation of the Brazilian vocational education and training regime from the economic environment (and crises), as well as from the regular public educational system, is precisely the source of its resilience. The piece of legislation currently being discussed within the Senate, proposing to allocate 30% of the levy funds to finance Social Security, does not seem likely to succeed, especially considering the current contours of the economic and political crises the country is going through.
|
v3-fos-license
|
2020-04-30T09:11:46.513Z
|
2020-04-29T00:00:00.000
|
219027457
|
{
"extfieldsofstudy": [
"Medicine",
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsbiomaterials.0c00191",
"pdf_hash": "b7040993132162004bfb867f079f4163b4a42d80",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42365",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"sha1": "fd0eda3bc9d25f3a54164c8e1fd579a425d55e95",
"year": 2020
}
|
pes2o/s2orc
|
Bioengineering Vascular Networks to Study Angiogenesis and Vascularization of Physiologically Relevant Tissue Models in Vitro
Angiogenesis assays are essential for studying aspects of neovascularization and angiogenesis and investigating drugs that stimulate or inhibit angiogenesis. To date, there are several in vitro and in vivo angiogenesis assays that are used for studying different aspects of angiogenesis. Although in vivo assays are the most representative of native angiogenesis, they raise ethical questions, require considerable technical skills, and are expensive. In vitro assays are inexpensive and easier to perform, but the majority of them are only two-dimensional cell monolayers which lack the physiological relevance of three-dimensional structures. Thus, it is important to look for alternative platforms to study angiogenesis under more physiologically relevant conditions in vitro. Accordingly, in this study, we developed polymeric vascular networks to be used to study angiogenesis and vascularization of a 3D human skin model in vitro. Our results showed that this platform allowed the study of more than one aspect of angiogenesis, endothelial migration and tube formation, in vitro when combined with Matrigel. We successfully reconstructed a human skin model, as a representative of a physiologically relevant and complex structure, and assessed the suitability of the developed in vitro platform for studying endothelialization of the tissue-engineered skin model.
INTRODUCTION
Angiogenesis is a sophisticated process regulated by a complex web of interactions of endothelial cells (ECs) with their extracellular matrix (ECM) and with biochemical and mechanical factors. 1 The delayed neovascularization of tissueengineered (TE) constructs postimplantation can cause them to fail clinically. 2 Thus, investigating the factors that regulate angiogenesis is particularly important to understand how they are involved in this complex process.
Angiogenesis assays are powerful tools to study aspects of angiogenesis and can be grouped into three main categories: (i) in vitro, (ii) ex vivo, and (iii) in vivo. 3 In vivo assays are the most representative of native angiogenesis, but since healthy animals are used to perform these assays, they are ethically questionable, require considerable technical skills, and are expensive. 4 In contrast, in vitro assays are inexpensive and relatively easy to perform. However, the majority of them are based on two-dimensional (2D) cell culture systems which lack the physiological relevance that three-dimensional (3D) structures can provide. 5 Thus, it is important to develop better in vitro platforms that enable the study of angiogenesis under more physiologically relevant conditions. Several in vitro angiogenesis models fabricated by combining methods including Bio-MEMS, 6 3D printing and porogen leaching, 7,8 and 3D printing and electrospinning 9,10 have previously been reported. However, most of them rely on the use of natural gels and allow the evaluation of angiogenesis at only the cellular level. Although these natural gels are biologically preferred by endothelial cells in terms of providing improved cell attachment, proliferation, and sprouting, 11 the use of natural materials limits control over degradability, formability, and mechanical properties.
Skin is the largest organ in the body and acts as a physical barrier between the body and the external environment. It is composed of three histologically definable main layers: the epidermis, the dermis, and the hypodermis. At the cellular level, keratinocytes are the most common type of cells located in the epidermal layer of the skin, and they form the different layers of the epidermis, each with different tasks. Fibroblasts, the second most common type of cells in skin, are located in the dermal layer and provide the physical strength as well as the elasticity of skin. 12,13 Skin tissue engineering has gained great momentum over the years. However, developing biologically relevant in vitro tissue models as alternatives to animal models, or as physiologically relevant tissue substitutes for clinical use, is always open to improvement.
Several in vitro skin models have been developed by many groups and companies over the years to study different subjects, such as alternatives to animal testing, wound healing, pigmentation, contraction, tumor invasion, barrier function, and bacterial infection. 12,14 Facy et al. created a reconstructed epidermis model with Langerhans cells and used this model to test the reactivity of these cells to known allergens and UV. 15 Kandarova et al. studied skin irritation using two reconstructed human skin equivalents as an alternative to animal testing. 16 To study pigmentation, Bessous et al. developed an in vitro reconstructed epidermis using autologous keratinocytes and melanocytes. 17 Meier et al. developed a human skin equivalent to study melanoma progression, and they reported a close correspondence between the growth of melanoma into the engineered skin construct and in vivo. 18 Admane et al. reported the direct 3D bioprinting of full-thickness skin constructs that mimic the signaling pathways of skin. 19 Similarly, Kim et al. developed a 3D printed skin model with perfusable vascular channels to create a vascularized skin model. 20 Kolesky et al. developed a platform using a multimaterial 3D bioprinting method, which enables researchers to create thick tissue models with an engineered matrix and embedded vasculature. 21 Recently, John et al. demonstrated the regeneration of a TE skin substitute on human amniotic membrane. 22 Our laboratory has previously reported a 14-day protocol for the reconstruction of a 3D human skin model that is suitable for clinical use 23 and has previously explored adding human dermal microvascular endothelial cells (HDMECs) to the TE skin model with very little success: the cells struggled to enter the TE skin and showed no signs of being organized when they did enter. 24 Although TE skin has long been studied for use as a skin substitute in the clinic or as an in vitro model for research, the main challenge remains the same: studying and improving the angiogenesis/vascularization of TE skin, whether for its translation to the clinic or for research on the basic principles of skin vascularization. Either for implantation or for in vitro laboratory research, developing a vascularized 3D human skin model is highly important for the successful take of a TE skin substitute after implantation and for studying the effect of chemical, mechanical, and environmental factors on the neovascularization of skin. Thus, there is a need to develop new platforms that enable the study of vascularization of complex tissues such as skin.
Accordingly, in this study, we fabricated synthetic vascular networks (SVNs) made of poly-3-hydroxybutyrate-co-3-hydroxyvalerate (PHBV), a biocompatible and biodegradable polyester, by combining electrospinning and 3D printing techniques to study angiogenesis in a physiologically more relevant environment and to investigate the vascularization of a reconstructed human skin model. The main aim of this study was to create a unique in vitro platform that enables researchers to study more than one aspect of angiogenesis at both the cellular and tissue levels. PHBV channels were used as physical support and a structural guide for ECs to create a preformed endothelium-like structure. This endothelium-like structure was then used to study the migratory response and tube-forming capability of ECs in response to proangiogenic agents in vitro and to explore how synthetic channels can be used as a model for vascularization studies at the tissue level. The chick chorioallantoic membrane (CAM) assay was used for the first time as a surrogate for a well-vascularized wound bed, providing the source of blood vessels to grow into the 3D human skin, as a positive control to the PHBV SVN vascularization studies.

2.1. Materials. Adenine, AlamarBlue cell metabolic activity assay, alginic acid sodium salt, amphotericin B, anti-CD31 (PECAM-1) antibody produced in mouse, bovine serum albumin (BSA), calcium chloride dihydrate, chlorotoxin, collagenase A, D-glucose, dimethyl sulfoxide (DMSO), Dulbecco's modified Eagle's medium (DMEM), ethylenediaminetetraacetic acid (EDTA), eosin Y solution, ethanol, F-12 HAM nutrient mixture, fetal calf serum (FCS), fibrinogen from human plasma, glutaraldehyde (25%), glycerol, hematoxylin solution, hydrocortisone, insulin (human recombinant), L-glutamine, methylene blue, penicillin/streptomycin, phalloidin−fluorescein isothiocyanate (FITC), phalloidin−tetramethylrhodamine isothiocyanate (TRITC), sodium hydroxide pellets, Trypan blue, trypsin EDTA, Tween20, vascular endothelial growth factor (VEGF), and sodium chloride (NaCl) were purchased from Sigma-Aldrich. Dichloromethane (DCM), DPX mounting medium, industrial methylated spirit (IMS), methanol, Triton X-100, and xylene were purchased from Fisher Scientific. Human dermal microvascular endothelial cells (HDMECs), endothelial cell growth medium MV (EC GM), and the EC GM microvascular (MV) supplement pack were purchased from PromoCell. CellTracker Green, CellTracker Red, and Alexa Fluor 546 Goat anti-Human IgG (H+L) cross-adsorbed secondary antibody were purchased from ThermoFisher. Poly-3-hydroxybutyrate-co-3-hydroxyvalerate (12%) (PHBV) was purchased from GoodFellow. Matrigel (growth factor reduced) was purchased from Corning. Thrombin (human) was purchased from Cayman Chemical. Epidermal growth factor (EGF) was purchased from R&D systems. Optimum cutting temperature tissue freezing medium (OCT-TFM) was purchased from Leica Biosystems.
2.2. Methods. 2.2.1. Manufacturing of the SVN Made of PHBV. The channels of the SVN were designed using computer-aided design (CAD) software (SolidWorks 2012, Waltham, MA). Following the 3D design of the SVN channels, scaffolds were manufactured via a four-step process, as shown in Figure 1.
First, a layer of PHBV was electrospun onto an aluminum foil collector (Section 2.2.1.1). Alginate was then used as a sacrificial substrate and 3D printed onto the PHBV using a 3D bioprinter (BioBots, Philadelphia, PA). Following that, another layer of PHBV was electrospun on top of the alginate using the same parameters. Finally, the alginate was removed with EDTA solution.
2.2.1.1. Electrospinning PHBV. First, PHBV (10% (w/w)) granules were dissolved in DCM:methanol (90:10 w/w) solvent blend in a fume hood. PHBV polymer solution (∼5 mL) was loaded into 5 mL syringes fitted with 0.6 mm inner diameter syringe tips. Syringes were then placed in a syringe pump (GenieTMPlus, KentScientific, Torrington, CT). Aluminum foil was used as the collector and placed at a distance of 17 cm from the needle tips. The pump was set to 40 μL/min, and 17 kV voltage was applied to both the collector and the tips. The polymer was electrospun on the collector with the parameters given above for 1 h.
2.2.1.2. 3D Printing of Alginate as a Sacrificial Material. A 1.5% alginate paste was produced by dissolving 0.2 g of calcium chloride dihydrate (CaCl 2 ·2H 2 O) in 72.7 g of distilled water (dH 2 O) while continuously stirring with a magnetic stirrer. The solution was then heated to approximately 60°C before adding 1.5 g of alginic acid sodium salt while continuously stirring on a hot plate magnetic stirrer. Once the alginate was fully dissolved and hydrated, 24.25 g of glycerol was added and stirred until a smooth viscous paste was obtained.
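As a quick plausibility check (not part of the original protocol, and assuming negligible water loss during the heating step), the nominal alginate content of this paste follows directly from the listed masses:

$$\frac{1.5\ \mathrm{g\ alginate}}{0.2\ \mathrm{g} + 72.7\ \mathrm{g} + 1.5\ \mathrm{g} + 24.25\ \mathrm{g}} \times 100\% = \frac{1.5}{98.65} \times 100\% \approx 1.5\%\ \mathrm{(w/w)}$$

which is consistent with the stated 1.5% figure.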
Prior to 3D printing, the desired number of 3D models were oriented and sliced using g-code generator software (Repetier-Host, Willich, Germany). The model was then exported as g-code using the following parameters: 0.4 mm layer height, 0.4 mm nozzle diameter, and 2 mm/s speed. The alginate paste was transferred into a 10 mL syringe with a 0.4 mm blunt tip, and the syringe was inserted into the extruder of the 3D bioprinter. The aluminum foil carrying the electrospun PHBV layer was placed onto the lid of a 6-well plate and fixed using adhesive paper tape. The g-code was then uploaded to the 3D printing software (Bioprint, Philadelphia, PA), and the pressure was adjusted between 11 and 20 psi. Finally, the extruders were calibrated, and the alginate was 3D printed onto the PHBV electrospun sheet. Following the 3D printing process, the electrospinning process was repeated using the same parameters to create synthetic vascular channels between the two layers of PHBV.
2.2.1.3. Removal of Alginate. 0.5 M EDTA solution was prepared in dH 2 O. The pH was then adjusted to 8.0 by adding sodium hydroxide (NaOH) beads while stirring continuously.
The scaffolds were submerged in 0.5 M EDTA solution overnight on a shaker (Fisher Scientific, Waltham, MA) set to 70 oscillations/min to remove the alginate and create hollow channels between the two layers of PHBV sheets. The two ends of the scaffolds were cut to allow the alginate to escape prior to submerging them in the EDTA solution.
2.2.2. Characterization of the PHBV SVN. 2.2.2.1. Scanning Electron Microscopy (SEM). The surface morphology and the cross sections of the PHBV SVN were observed under SEM (Philips/FEI XL-20 SEM; Cambridge, UK). The samples were coated with gold using a sputter coater (Edwards sputter coater S150B, Crawley, England) prior to imaging. Average fiber diameter and pore size were measured using ImageJ software (Wayne Rasband, National Institutes of Health) as described previously. 25

2.2.2.2. Biomechanical Testing of PHBV SVN. Tensile testing was carried out for the dry and wet scaffolds using a uniaxial mechanical testing machine (BOSE Electroforce Test Instruments, Eden Prairie, MN) equipped with a 22 N load cell. For wet testing, scaffolds were submerged in PBS for 1 h before testing. The clamps of the device were positioned 15 mm apart, and the width and thickness of each scaffold were measured. Test samples, either dry or wet, were clamped between the two grips of the tensiometer. Tensile tests were performed on each sample at a rate of 0.1 mm/s until the sample failed. The raw test data were tabulated and converted into stress−strain curves. Stress and strain values were calculated using eqs 1 and 2, the standard engineering stress and strain definitions:

σ = F/A (1)

ε = ΔL/L0 (2)

where F is the applied force, A is the initial cross-sectional area of the scaffold (width × thickness), ΔL is the extension, and L0 is the initial distance between the clamps. Ultimate tensile strength (UTS), yield strength (YS), and stiffness parameters were calculated from the stress (σ) and strain (ε) curves of each sample.
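The following short Python sketch illustrates how the reported parameters can be derived from raw force−displacement data via eqs 1 and 2; the data arrays, specimen dimensions, and the 2% cutoff for the linear region are hypothetical assumptions for illustration only and are not values from this study.

```python
import numpy as np

# Hypothetical force-displacement data from a tensile test (not from the study)
force_N = np.array([0.0, 0.5, 1.0, 1.5, 1.8, 1.9])        # load cell readings (N)
extension_mm = np.array([0.0, 0.1, 0.2, 0.3, 0.45, 0.6])   # crosshead displacement (mm)

gauge_length_mm = 15.0               # distance between the clamps, as in the protocol
width_mm, thickness_mm = 5.0, 0.3    # hypothetical specimen dimensions
area_mm2 = width_mm * thickness_mm   # initial cross-sectional area

# Eq 1: engineering stress (N/mm^2 = MPa); eq 2: engineering strain (dimensionless)
stress_MPa = force_N / area_mm2
strain = extension_mm / gauge_length_mm

# Ultimate tensile strength: maximum stress reached during the test
uts_MPa = stress_MPa.max()

# Stiffness estimate: slope of the initial (assumed linear) region of the curve
linear = strain <= 0.02
slope_MPa, _intercept = np.polyfit(strain[linear], stress_MPa[linear], 1)

print(f"UTS = {uts_MPa:.2f} MPa, stiffness = {slope_MPa:.1f} MPa")
```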
Suture retention tests were performed based on BS EN ISO 7198:2017, the standard for testing vascular grafts and patches. Before clamping the samples in the uniaxial testing device, scaffolds were sutured 2 mm from the upper end with a suture (Ethicon, Bridgewater, NJ) intended for use in general soft tissues. The distance between the clamps was then adjusted, and the tests were conducted at a rate of 0.1 mm/s until the samples failed. Suture retention strength was calculated using eq 3:
suture retention (MPa) = suture retention force (N)/(suture diameter (mm) × thickness (mm)) (3)
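To illustrate the units in eq 3 (with purely hypothetical numbers, not measurements from this study), a pull-out force of 0.5 N recorded for a 0.4 mm diameter suture in a 0.7 mm thick scaffold wall corresponds to

$$\frac{0.5\ \mathrm{N}}{0.4\ \mathrm{mm} \times 0.7\ \mathrm{mm}} \approx 1.8\ \mathrm{N/mm^2} = 1.8\ \mathrm{MPa},$$

showing that the force divided by the product of suture diameter and wall thickness reduces directly to MPa.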
2.2.3. Cannulation of the PHBV SVN to Test the Channel Structure and Patency. Prior to cell seeding into the PHBV SVN, channels were cannulated with a 25 G cannula by perfusing PBS into the channels under a dissection microscope (Wild Heerbrugg, Heerbrugg, Switzerland). Methylene blue was then injected into the channels with a 25 G cannula to visualize the channel structure and patency, and images of the channels were obtained under the dissection microscope.
2.2.4. Cellularization of Synthetic Scaffolds. For cellularization of the PHBV SVN, two different procedures were assessed: (i) HDMECs were seeded into the channels in isolation, and (ii) HDMECs were seeded into the channels while human dermal fibroblasts (HDFs) were seeded onto the outer surfaces of the channels.
2.2.4.1. Cellularization of the PHBV Channels with HDMECs in Isolation. The PHBV SVNs were disinfected by submerging them in 70% ethanol for 45 min, washed three times with PBS prior to cell seeding, and transferred to Petri dishes. HDMECs were used between passages 2−4. Once they reached 80−90% confluency, 0.5 × 10 6 HDMECs were resuspended in 0.25 mL of EC GM (supplemented with 2% FCS, 0.4% EC growth supplement, 10 ng/mL EGF, 90 μg/mL heparin, 1 μg/mL hydrocortisone) and then perfused into the SVN using a 1 mL syringe with a 25 G cannula. Before adding culture medium, scaffolds were returned to the incubator for 1 h to allow HDMECs to attach to the inside of the channels. Then, 10 mL of HDMEC culture medium was added to each Petri dish, and the dishes were incubated at 37°C overnight. On the following day, scaffolds were flipped over, and the same seeding process was repeated in order to cellularize the other side of the channels. Scaffolds were kept in culture for 7 days, with the culture medium changed every 2−3 days. The PHBV SVN scaffolds were fixed in 3.7% FA. Fixed scaffolds were then embedded in freezing medium and frozen in liquid nitrogen for 3 min. Sections were cut 5−10 μm thick using a cryostat (Leica Biosystems Nussloch, Germany) and stained with hematoxylin and eosin (H&E) as described previously. 26,27 Briefly, the slides were stained with hematoxylin for 5 min and eosin for 90 s prior to dehydration with serial alcohol washes. The slides were then mounted with DPX mountant and examined under a light microscope (Motic BA210).
2.2.4.2. Cellularization of the PHBV Channels with HDMECs and HDFs. HDMECs were used between passages 2−4 and HDFs between passages 2−6, once the cells reached 80−90% confluency. The PHBV SVN was disinfected by submerging it in 70% ethanol for 45 min and then washed three times with PBS prior to cell seeding. To be able to image them separately under a fluorescent microscope, each cell type was labeled using CellTracker fluorescent probes. To label the HDMECs, 50 μg of CellTracker Red dry powder was dissolved in 7.3 μL of DMSO. Then, 3 mL of serum-free HDMEC culture medium was added to prepare a ∼25 μM working dye solution. The prewarmed dye solution was then added gently to the T75 flask, and HDMECs were incubated for 1 h under growth conditions. To label the HDFs, 50 μg of CellTracker Green dry powder was dissolved in 10.75 μL of DMSO. Then, 4.3 mL of serum-free HDF culture medium was added to prepare a ∼25 μM working dye solution. The prewarmed dye solution was then added gently to the T75 flask, and HDFs were incubated for 1 h under growth conditions. Following the labeling of the cells, sterile scaffolds were transferred to Petri dishes. 0.5 × 10 6 HDMECs were trypsinized, centrifuged, and resuspended in 0.25 mL of culture medium and then perfused into the synthetic vascular channels using a 1 mL syringe with a 25 G cannula. Following that, 0.5 × 10 6 HDFs were trypsinized, centrifuged, resuspended in 200 μL of HDMEC growth medium, and pipetted onto the outer surface of the channels. Before submerging the scaffolds in HDMEC culture medium, scaffolds were incubated at 37°C for up to 2 h to allow the HDFs to attach to the outer surface. Then, 10 mL of HDMEC culture medium was added to each Petri dish, and the dishes were incubated at 37°C overnight. Scaffolds were flipped over, and the same CellTracker labeling and seeding protocol was followed on the following day in order to cellularize the other side of the channels. Scaffolds were kept in culture for 7 days, with the culture medium changed every 3 days.
In order to verify the presence and distribution of the HDMECs within the PHBV vascular channels prior to further experiments, scaffolds were fixed in 3.7% FA after the 7 day culture of HDMECs and HDFs in the PHBV SVN, and 5−10 μm thick sections were taken as described in Section 2.2.4.1, immunostained for the expression of CD31, and counterstained with DAPI. Briefly, a hydrophobic barrier pen was used to draw circles around each sample on the slide in order to create a water-repellent barrier, which forms a reservoir on the sections for the staining reagents. Cells were permeabilized by incubating in 0.1% Triton-X 100 for 20 min at room temperature (RT) and then in 7.5% BSA at room temperature for 1 h to block unspecific binding of the antibodies. This step was followed by washing once with 1% BSA, and the samples were incubated with the appropriate primary antibodies diluted in 1% BSA (a 1:50 dilution was used for the anti-CD31 primary antibody) at 4°C overnight. The next day, samples were washed 3 times with PBS before incubating with the appropriate secondary antibodies diluted in 1% BSA (a 1:500 dilution was used for the AlexaFluor546 conjugated secondary antibody) at RT for 1 h and washing three times with PBS. Samples were counterstained with DAPI solution by incubating for 20 min at RT. Slides were then washed three times with PBS and imaged using a fluorescent microscope (Olympus IX3, Tokyo, Japan).
2.2.5. Fluorescent Staining. For PHBV SVN recellularized with HDMECs in isolation, the scaffolds were fixed in 3.7% FA for 1 h and sectioned using a cryostat as described in Section 2.2.4.1. For analyzing the cells in the PHBV SVN, the sections were stained with phalloidin-TRITC (1:500 diluted in PBS) (or phalloidin-FITC (1:500 diluted in PBS) in some of the experiments) to stain the cytoskeleton. Sections were then stained with DAPI (1:1000 diluted in PBS) to stain the cell nuclei. Briefly, 0.1% (v/v) Triton X-100 (in PBS) was added to the samples, and the samples were incubated for 20−30 min at room temperature. After washing three times with PBS, the phalloidin solution was added to the cells and incubated for 30 min at RT in the dark. Sections were then washed three times with PBS. DAPI solution was then added and incubated for 10−15 min at RT in the dark, and the cells were then washed 3 times with PBS. Finally, DPX mountant was pipetted onto the samples, and the samples were covered with a coverslip. Cells were then examined under a fluorescent microscope.
2.2.6. Direct Imaging of Prelabeled Cells. While investigating HDMECs in coculture with HDFs, cells were prelabeled using CellTracker fluorescent probes with the intent of distinguishing them during fluorescent imaging. Use of fluorescent probes prior to cultivating cells in the scaffolds enabled us to image HDMECs and HDFs directly under a fluorescent microscope following the sectioning step.
2.2.7. Development of a 3D TE Skin Model. 2.2.7.1. Isolation of Human Foreskin Keratinocytes and HDFs from Skin Grafts. Skin grafts were obtained from patients who were informed of the use of their skin for research purposes according to a protocol approved by the Sheffield University Hospitals NHS Trust Ethics Committee. Fibroblasts and keratinocytes were isolated from the skin, as described by Ghosh et al. 28 Briefly, skin samples were cut into 0.5 cm 2 pieces and incubated overnight in Difco-trypsin (0.1% (w/v) trypsin, 0.1% (w/v) D-glucose in PBS, pH 7.45) before being washed and maintained in PBS.
For isolating keratinocytes, skin samples were taken from the solution and transferred into a Petri dish filled with growth media. The epidermis was peeled off, and the surface of the epidermis (papillary surface) was gently scraped; basal keratinocytes were collected into the growth media. Cells were then harvested by centrifuging at 1000 rpm for 5 min, resuspended and seeded into 75 cm 2 tissue culture flasks in the presence of a feeder layer (irradiated mouse 3T3 (i3T3) cells), and cultured in Green's media (66% insulin, 0.5% adenine, 0.1% T/T, 0.1% chlorotoxin, 0.016% hydrocortisone, 0.01% EGF, 100 IU/mL penicillin, 100 μg/mL streptomycin, 2 mM L-glutamine, and 0.625 μg/mL amphotericin B).
HDFs were isolated by mincing the dermis into 10 mm 2 pieces. The pieces were then incubated overnight at 37°C in 0.5% (w/v) collagenase A solution. The suspension of fibroblasts was centrifuged at 1000 rpm for 5 min and resuspended in DMEM containing 10% (v/v) FBS, 100 IU/mL penicillin, 100 μg/mL streptomycin, 2 mM L-glutamine, and 0.625 μg/mL amphotericin B.
2.2.7.2. Preparation of Acellular De-Epidermized Dermis (DED). DED was prepared from skin grafts according to a modified method described by Chakrabarty et al. 29 Briefly, the skin graft was treated in 1 M NaCl solution for 24 h at 37°C and then washed with PBS for 40 min. The epidermis was removed by peeling off or gentle scraping (if the epidermal layer remained and cells had not been harvested previously). DED was kept in Green's media at 37°C for 2 days to check its sterility.

A 3D human skin model was reconstructed in vitro to study vascularization of the skin using a well-established protocol. 23 Briefly, 1 cm 2 pieces were cut from DED, and a stainless-steel ring (0.79 cm 2 ) was placed onto the papillary side. HDFs were trypsinized and centrifuged at 1000 rpm for 5 min before being resuspended in DMEM. HDFs (1 × 10 5 ) were seeded into the stainless-steel ring and kept at 37°C while the keratinocytes were prepared for seeding. The i3T3 feeder layer was removed first using 5 mL of 0.5 M sterile EDTA solution with 3−5 min incubation at 37°C. After removal of the feeder layer, keratinocytes were trypsinized, centrifuged at 1000 rpm for 5 min, and resuspended in Green's media. Keratinocytes (3 × 10 5 ) were then seeded into the stainless-steel ring as a coculture with the HDFs. TE skin models were incubated overnight at 37°C before removing the ring and adding Green's media. The 3D skin models were incubated in Green's media for another day (2 days in total), then raised to the air−liquid interface using a sterile stainless-steel grid, and cultured for a total of 14 days in order to ensure differentiation of the layers of the epidermis.

PHBV SVN scaffolds cellularized with HDMECs were incubated for 7 days as described in Section 2.2.4.1. Once a uniform monolayer of HDMECs was obtained, escape holes were pierced in the channels. The piercing procedure was performed as described previously. 30 Approximately 100 equally spaced holes per scaffold were created in random orientations (from the top and sides) to cover the surface of all channels as evenly as possible using a sterile 30 G syringe needle. The final concentrations of VEGF and 2dDR within the Matrigel were 80 ng/mL and 1.34 μg/mL, respectively. 100 μL of Matrigel was pipetted into the hexagonal wells formed by the synthetic channels and 200 μL into the well between two hexagonal wells. Scaffolds were then returned to the incubator at 37°C for 15 min to allow the Matrigel to set. The PHBV scaffolds were then submerged in HDMEC culture medium and cultured for 7 days.
For analyzing the HDMEC outgrowth through the Matrigel after culturing HDMECs in the synthetic PHBV vascular scaffolds, the scaffolds were fixed and stained with phalloidin-TRITC and DAPI as described previously. As the scaffolds were opaque and too thick to image directly, they interfered with the visualization of the cellular outgrowth and tube formation while the Matrigel was in place. Thus, the Matrigel was peeled off the surface of the PHBV SVN, and fluorescent images were taken within the Matrigel close to the edges of the PHBV SVN to investigate the tube formation and branching. Tube formation can be defined as the gradual formation of capillary-like tubular structures by the ECs in response to proangiogenic stimulants; capillary-like tubes connected to each other form a meshlike structure within the gel. These meshlike structures are maintained for approximately 24 h. Each closed loop (mostly pentagonal or hexagonal) formed by ECs within the gel is defined as a tubelike structure, and the branch sites/nodes are defined as branching points. The numbers of tubes and branch points are two widely used measures of in vitro angiogenesis when conventional tube formation assays are used. 31 Accordingly, we quantified the total number of tubes and branch points per field of view as described previously using the Angiogenesis Analyzer plugin of ImageJ 32 and AngioTool software. 30

PHBV SVN scaffolds cellularized with HDMECs and HDFs were incubated for 7 days as described in Section 2.2.4.2, and the cellularized scaffolds were then transferred into 6-well plates in a class II biological safety cabinet. Using a sterile 30 G syringe needle, holes were pierced in the channels. TE skin models were prepared as described in Section 2.2.7 and cut into circles at day 7 prior to implantation. Fibrin glue was used to attach the TE skin to the PHBV SVN; the use of fibrin glue in skin grafts and TE skin replacements has previously been reported. 33 Fibrin glue was prepared by mixing fibrinogen from human plasma (20 mg/mL in 0.9% NaCl solution in dH 2 O) and human thrombin (25 units/mL in 0.1% BSA). Briefly, 50 μL of fibrinogen was pipetted over the surface of the PHBV SVN channels. Then, 50 μL of thrombin was pipetted over the fibrinogen, and the TE skin models were glued immediately onto the channels. PHBV scaffolds with TE skin models on them were then submerged in EC GM either supplemented with 80 ng/mL VEGF or nonsupplemented (control) and cultured for a further 7 days at the air−liquid interface; the respective EC GM (nonsupplemented or VEGF-supplemented) was used throughout the experiment.

For the investigation of the HDMEC outgrowth through the reconstructed TE skin models, scaffolds with TE skin on top of them were fixed with 3.7% FA. The fixed PHBV scaffolds with TE skin models were then embedded in OCT freezing medium and frozen in liquid nitrogen for 3 min. The scaffolds were sectioned 5−10 μm thick using a cryostat (Leica Biosystems Nussloch, Germany) at −20°C and permeabilized with 0.1% Triton-X100 for 30 min. The sections were then immunostained for CD31 and counterstained with DAPI as described in Section 2.2.4.2. The sections were further investigated histologically by staining with hematoxylin for 1.5 min and eosin for 5 min. The outgrowth distance of HDMECs was determined using ImageJ software, and the results were then statistically analyzed using GraphPad Prism software.
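The tube and branch-point counts for the Matrigel outgrowth described above were obtained with the ImageJ Angiogenesis Analyzer plugin and AngioTool. Purely to illustrate the underlying idea (closed loops counted as tubelike structures, nodes where three or more segments meet counted as branch points), the hypothetical Python sketch below performs both counts on a toy graph representation of a skeletonized network; it is not the analysis pipeline used in this study.

```python
import networkx as nx

# Hypothetical skeleton of an EC network: nodes are junctions/endpoints,
# edges are tube segments traced from a fluorescence image (not real data).
G = nx.Graph()
G.add_edges_from([
    (1, 2), (2, 3), (3, 4), (4, 1),   # one closed loop (a "tube")
    (2, 5), (5, 6), (6, 3),           # a second loop sharing part of its path
    (5, 7),                           # a dangling sprout
])

# Branch points: junctions where three or more segments meet
branch_points = [n for n, deg in G.degree() if deg >= 3]

# Closed loops: number of independent cycles (meshes) in the network,
# given by the cyclomatic number E - N + number_of_connected_components
num_loops = (G.number_of_edges() - G.number_of_nodes()
             + nx.number_connected_components(G))

print(f"branch points: {len(branch_points)}, closed loops (tubes): {num_loops}")
```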
2.2.9. Investigating the Vascularization of the TE 3D Skin Equivalent Using the ex Ovo CAM Assay. The ex ovo CAM assay was used to evaluate the vascularization of the TE skin model as a positive control. A video protocol of the ex ovo CAM assay has been reported previously by our group. 34 Briefly, fertilized chicken eggs (Gallus domesticus) were purchased from Henry Stewart & Co. MedEggs (Norwich, UK) and cleaned with 20% IMS solution. Eggs were incubated at 37.5°C for 3 days in a rocking egg incubator (RCOM King SURO, P&T Poultry, Powys, Wales). On day 3, the embryos were transferred gently into sterile Petri dishes and incubated at 38°C in a cell culture incubator (Binder, Tuttlingen, Germany). The CAM assay was conducted in accordance with the guidelines of the Home Office, UK. On day 7, reconstructed human skin equivalents (cultured for 14 days) were cut into circles (8 mm diameter) using a biopsy punch and implanted onto the CAMs for a further 7 days. In order to study the effect of the proangiogenic drugs, VEGF and 2dDR were added dropwise twice a day throughout the assay duration. The doses were 80 ng/day/embryo and 200 μg/day/embryo for VEGF and 2dDR, respectively.
Macroimages of the reconstructed skin equivalents implanted on CAM were taken using a digital USB microscope at embryonic development day 14. Embryos were then euthanized, and the skins were cut with a rim of surrounding CAM tissue and fixed in 3.7% FA solution. Angiogenesis was quantified by counting all blood vessels growing toward the scaffolds in a spoke wheel pattern, as described previously. 25 Histological analysis of the samples was performed with H&E staining as described previously in Section 2.2.4.1.
2.2.10. Statistical Analysis. Statistical analysis was carried out using either one-way or two-way analysis of variance (ANOVA) using statistical analysis software (GraphPad Prism, San Diego, CA). Where relevant, n values are given in figure captions. Error bars indicate standard deviations in the graphs unless otherwise stated.
3.1. Results of the Characterization of the PHBV SVN.
3.1.1. Macrostructure and Microstructure of the PHBV SVN and the Confirmation of the Channel Patency. The combination of electrospinning and 3D printing allowed the production of a number of replicate scaffolds in a short period of time (less than 2 h). The SEM images of the PHBV SVN showed that it was possible to obtain a connected network of hollow channels after removal of the alginate. The PHBV SVN scaffolds used in this study were approximately 30 mm long and 18 mm wide, with elongated hexagonal shapes. For each production batch, approximately 12 scaffolds were produced, and 100% of these were used. The macrostructure and the microstructure of the PHBV SVN scaffolds before alginate removal are given in Figure 3A. The removal of the alginate was confirmed by cannulation of the PHBV SVN with methylene blue dye and by SEM imaging of the channel cross sections.

Antihuman CD31 (red) stained sections of the PHBV SVN recellularized with HDMECs inside the channels and HDFs on the outer surface, cultured over 7 days, are given in Figure 5B. The immunostaining showed an evenly distributed HDMEC monolayer within the channels on both curved and flat surfaces, whereas the HDFs covering the outer surface of the scaffolds were stained only with DAPI (blue).
3.2. Use of the PHBV SVN to Study Angiogenesis in Vitro: Results of HDMEC Outgrowth from PHBV SVN to Matrigel. The Matrigel outgrowth experiments showed that HDMECs grew out from the pierced synthetic PHBV channels and formed interconnected tubelike structures within the Matrigel close to the channel edges by day 7 (Figure 6). Inclusion of the proangiogenic agents 2dDR and VEGF increased the formation of tubelike structures. However, the tubelike structures were more obvious and well-organized in the VEGF loaded Matrigel groups than in the 2dDR loaded and control groups. Although these experiments were repeated 3 times with 5 replicates per repeat, it is important to note that the formation of tubelike structures was observed in only 20% of the experiments for the VEGF loaded Matrigel and 13.3% for the 2dDR loaded and control groups.
The quantification of the fluorescent images showed that inclusion of 2dDR and VEGF in the Matrigel increased the number of tubes formed per field within the gel to 3.5 ± 1.1 and 8.2 ± 4.0, respectively, whereas the number of tubes per field was 1.0 ± 0.9 in the control group. Similarly, the average number of branch points increased from 3.1 ± 1.9 (control) to 12.3 ± 4.4 and 27.6 ± 8.2 when 2dDR and VEGF, respectively, were loaded into the Matrigel. VEGF at 80 ng/mL was found to be significantly more effective than 100 μM 2dDR at stimulating tube formation and increasing the number of branch points.
3.3. Use of the PHBV SVN to Study Vascularization of a TE Skin Model. 3.3.1. General Appearance and Histological Evaluation of the 3D TE Skin Models. The macroevaluation of the developed skin model showed that the color of the circular area seeded with HDFs and keratinocytes changed to a yellowish color, which indicates the formation of a new epithelium on the DED. The histological evaluation of the reconstructed TE skin models showed that the developed TE skin model achieved a normal-looking gross skin morphology in 14 days. A multilayered epithelium was formed and found to be well attached to the dermis (Figure 7).
3.3.2. Results of the Endothelial Outgrowth from PHBV Channels to 3D TE Skin Model. Following the encouraging results of HDMEC outgrowth through Matrigel, PHBV scaffolds populated with HDMECs and HDFs were investigated for HDMEC outgrowth through the reconstructed TE skin equivalent model. Immunostained (antihuman CD31) sections showed that HDMECs were evenly distributed within the channels and formed a monolayer, and the outer surface of the PHBV SVN was covered with HDFs ( Figure 8). High-magnification images of the immunostained sections revealed that the outgrowing cells from the PHBV channels toward the reconstructed skin models were CD31 positive HDMECs. The outgrowth of HDMECs was mostly observed from the connection edges of two separate electrospun sheets.
The results of the H&E and anti-CD31 staining showed that the addition of VEGF to the growth media significantly increased the outgrowth distance of HDMECs toward the reconstructed TE skin model. The migration distance reached 121.7 ± 6.3 μm in the VEGF group, compared to 27.9 ± 11.9 μm in the nonsupplemented controls. However, no cellular infiltration into the dermal layer of the implanted skin models was observed in any of the groups.
3.3.3. Results of the Vascularization Study of the TE Skin Model on CAM. In order to assess the effect of the presence of cells and proangiogenic factors on vascularization of TE skin equivalents, DEDs and the developed skin models were assessed using the ex ovo CAM assay. The results showed that the mean number of blood vessels was highest in the VEGF-treated TE skin equivalents, whereas the fewest blood vessels were observed in the DED group. The presence of dermal cells and the addition of both proangiogenic agents significantly increased the mean number of vessels growing toward the samples (Figure 9). Mean vessel counts for TE skin models when no proangiogenic agent was added, when administered with 2dDR, and when administered with VEGF were 27.0 ± 1.3, 34.4 ± 1.9, and 45.6 ± 2.0, respectively, whereas the mean vessel count was 19.2 ± 1.5 for the control DED group. None of the implanted groups affected the embryo survival rate, which was over 70% for each group.
Although complete integration was not achieved in any of the groups, the DED-only group was completely separable from the CAM, whereas the TE skin samples administered with either VEGF or 2dDR were better attached to the CAMs, although without apparent tissue infiltration.
DISCUSSION
PHBV is a biocompatible polymer which is widely used in tissue engineering applications, 35 and it has previously been reported as a suitable biomaterial for fabricating tissue engineering scaffolds using electrospinning. 36 PHBV was chosen for the production of the vascular scaffolds not only because of the previous experience of our research group 37−39 but also because PHBV has previously been reported as a suitable host for supporting ECs to attach, proliferate, and form a monolayer. 40 Both electrospinning and 3D printing techniques have various advantages and are frequently used in tissue engineering applications. The 3D printing technique allows a large number of scaffolds with exactly the same geometries to be produced in a short time, while electrospinning enables the fabrication of scaffolds with a wide range of properties in terms of material composition, fiber diameter, thickness, porosity, and degradation rate. 41−44 Accordingly, in this work, PHBV nanofibers were successfully manufactured via electrospinning, and alginate, a natural and biocompatible polysaccharide that is widely preferred for biomaterial applications, 45,46 was used as a sacrificial substrate to create temporary support for the interconnected channel networks. The perfusion of the channels with methylene blue dye showed that the channels were interconnected, and no leakage was observed either between the two layers of electrospun PHBV or through the small pores between fibers. The average fiber diameter and pore size were 0.76 ± 0.22 and 2.73 ± 1.47 μm, respectively. PHBV fibers of these diameters have been shown to allow the transport of nutrients through the fiber mesh while preventing cells from escaping through it for up to 6 weeks. 38,39 The suture retention test results demonstrated that the PHBV SVN was suitable for suturing tissue models onto the scaffold. The scaffolds resisted suture pull-out up to 1.70 and 0.89 MPa under dry and wet conditions, respectively, without any tearing. DuRaine et al. reported the suitability of their TE constructs with a suture retention strength of 1.45 MPa for in vivo implantation by suturing them in place. 47 Selders et al. demonstrated that the suture retention strength of their polymer templates was between 0.40 and 1.20 MPa under dry conditions. 48 Similarly, Syedain et al. showed that acellular vascular grafts with a suture retention of approximately 0.15 MPa (reported as 175 g for a 12.1 mm 2 graft area) were suitable for suturing in vivo as pulmonary artery replacements. 49 Nanofibers have been shown to provide better surface properties than microfibers for ECs to adhere to and proliferate on. 50−52 This is likely because nanofibers are structurally similar to the ECM of native tissue, with their submicron-scale topography and highly packed morphology. 50,53 Furthermore, PHBV nanofibers have previously been shown to be a suitable environment for ECs to form an endothelial monolayer. 40 However, nanofibers also create a physical barrier for cells, which limits infiltration. 54 Thus, prior to the outgrowth experiments, holes had to be pierced in the channels of the scaffolds.

Figure 8. H&E and immunostained (CD31 positive cells shown in red) sections show that HDMECs were outgrowing from the PHBV channels toward the TE skin models. "e", "d", and "p" indicate the epidermis, dermis, and PHBV SVN layers, respectively. The outgrowth was mostly observed from the connection edges of two separate electrospun sheets. Inclusion of VEGF in the growth medium enhanced the outgrowth distance of the HDMECs. The graph shows the quantification of the HDMEC outgrowth distance from the PHBV SVN to the TE skin models when the growth medium was supplemented with VEGF or nonsupplemented as the control group (*p ≤ 0.05, n = 6).
This four-step manufacturing route allowed the production of a large number of identical vascular scaffolds in less than 2 h. Similar fabrication routes for vascular scaffolds combining electrospinning and 3D printing have previously been reported. Jeffries et al. used 3D printed poly(vinyl alcohol) channels as a template in electrospun polydioxanone scaffolds intended for use as a prevascularized implantable construct in the future. 55 Dew et al. previously reported the use of alginate as a sacrificial material in electrospun scaffolds and showed the successful endothelialization of these scaffolds to study the factors that affect neovascularization. 37,40 However, none of these studies went beyond studying a single aspect of angiogenesis at the cellular level, and none assessed the potential of these scaffolds for use with biologically relevant tissue models. In this study, we aimed to evaluate, for the first time, the potential of these bioengineered vascular channels for studying angiogenesis in vitro and with complex tissue models, in comparison with the CAM as a well-vascularized wound bed analogue.

Figure 9. Representative macroimages in the top row show the angiogenic activity of DED, TE skin only, and TE skin with daily addition of 2dDR and VEGF. The histological appearance of the samples can be seen in the middle row. Black, red, green, and blue arrows indicate the CAM, dermal layer, epidermal layer, and blood vessels, respectively. The graph in the bottom row shows the quantification of blood vessels growing toward the samples. Scale bars for macroimages and histological images represent 3 mm and 200 μm, respectively (***p ≤ 0.001, **p ≤ 0.01, n = 4).
The results suggested that it was possible to cellularize the PHBV channels either with HDMECs in isolation or with HDMECs in the presence of HDFs, which slightly improved the coverage of the channels with HDMECs, as expected from previously reported studies by other groups as well as our own. 37,56 Although the coverage of the channels was not investigated quantitatively, qualitative visualization of the cell distribution within the channels showed almost full coverage of the channels with HDMECs (Figure 5A,B). However, the CellTracker-labeled cellularization results showed a more intermittent layer within the channel (Figure 5C). The most probable explanation for this is that the CellTracker dye binds intracellular targets and is distributed equally between daughter cells after cell division. 57 Although it is a very simple and rapid way of identifying different cell types within a construct, we found that it is not highly effective for estimating cellular confluency or for investigating the distribution of cells within the channels.
Fibroblasts have previously been reported to play a key role in the angiogenic process by producing considerable amounts of ECM molecules (e.g., collagen, fibronectin, and other molecules) as well as growth and proangiogenic factors that control the shape and density of blood vessels. 58,59 Although fibroblasts secrete some VEGF, the main role of these cells is to create an ECM in which endothelial cells can be embedded to form tubules. This ECM structure is rich in collagen I and fibronectin. 60,61 The PHBV SVN was found to provide a suitable environment for HDMECs to form a monolayer either in the presence or absence of HDFs as helper cells. The use of HDFs was found to be desirable depending on the intended use of the PHBV SVN.
For the Matrigel experiments, the PHBV SVN was cellularized with HDMECs in isolation, and the outgrowth of HDMECs into the Matrigel was investigated. The results showed that HDMECs grew out and formed interconnected tubelike structures within the Matrigel (loaded with either VEGF or 2dDR) close to the edges of the pierced synthetic PHBV channels. The tubelike structures formed were more obvious and well-organized in the VEGF loaded Matrigel group than in the 2dDR loaded and control groups. VEGF is an effective and well-established proangiogenic factor 62 which has been proven to be a regulator of EC proliferation, migration, and survival. 63,64 2dDR, a small sugar that occurs naturally in the body as a result of the enzymatic degradation of thymidine to thymine, 65 has recently been reported to have the potential to induce angiogenesis in vitro, 30 in the ex ovo CAM assay, 25 and in diabetic rats. 66 The formation of the tubelike structures was very similar to that observed in conventional Matrigel tube formation assays. In vivo, endothelial cells are in direct contact with a basement membrane which is specific and biologically functional for enabling endothelial cells to form tube structures. 67 Matrigel, a biologically active protein mixture, is an excellent candidate for mimicking the native basement membrane of endothelial cells in vitro and promotes endothelial cells to form tubelike capillary structures. 68 Kubota et al. seeded endothelial cells on a mimicked basement membrane and reported that the endothelial cells could attach and form tubelike capillary structures within 2−3 h. 69 Our observations were in line with the literature, where VEGF has been reported to regulate the outgrowth of ECs. 70−72 Loading of 2dDR into the Matrigel also stimulated ECs to form tubelike structures, as we have previously reported for the promotion of tube formation in the Matrigel assay. 30 The proposed platform can be used to study more than one aspect of angiogenesis in vitro when combined with Matrigel. However, several factors should be considered when using the developed model for the study of angiogenesis: (i) Matrigel is a protein gel mixture which is rich in ECM proteins such as laminin, collagen, heparin sulfate proteoglycans, etc. However, the exact concentrations of the ingredients are not clearly defined, and it shows high batch-to-batch variation. 73 (ii) The thickness of the Matrigel should be considered, as gel thickness has previously been shown to have a negative impact on the survival of ECs and HDFs. 74 (iii) HDMECs are very sensitive to culture conditions and show batch-to-batch variations. 75 These variations in ECs have previously been shown to be a cause of poor reproducibility in in vitro angiogenesis models. 76 (iv) The holes pierced in the SVN channels were randomly oriented, and their positions and diameters might have an impact on the variations in the outgrowth of HDMECs.
Following the Matrigel experiments, a more physiologically relevant tissue model, the TE skin model, was used with the PHBV SVN to study the vascularization of a reconstructed human skin model. The TE skin model was successfully developed using a well-established protocol. 28 The air−liquid interface has previously been confirmed to provide a stimulus for the gradual differentiation of keratinocytes. 23 The histological evaluation of the reconstructed TE skin models showed that the developed TE skin model achieved a normal-looking gross skin morphology in 14 days. A multilayered epithelium was formed and found to be well attached to the dermis. Following the reconstruction of the TE skin and 7 days of culture at the air−liquid interface, the TE skin equivalent was attached to the top surface of the PHBV SVN and cultured for a further 7 days at the air−liquid interface.
The outgrowth of HDMECs toward the TE skin model was mostly observed from the connection edges of two separate electrospun sheets, and the inclusion of VEGF in the growth media significantly increased the outgrowth distance of HDMECs, by approximately 4.4-fold when compared to controls. However, cells were not found to invade the dermal layer of the developed skin models, whether supplemented with VEGF or not. Santos et al. previously demonstrated that starch-based scaffolds combined with growth factors and fibrin sealant (fibrinogen 75−115 mg/mL, thrombin 4 IU/mL) were capable of promoting vascular infiltration into newly formed tissue in vivo. 77 In addition, the concentration of fibrin glue used in this study is approximately 3−4 times lower than that of some commercially available skin graft sealant fibrin glues. 78 We have previously demonstrated that fibrin glue with a fibrinogen concentration of 18.75 mg/mL, similar to the concentration used in this study, did not hinder cell outgrowth from tissue explants. 80 Thus, the concentration of fibrin glue does not seem to be the major cause of the prevention of cell penetration. The most probable explanations are that the outgrowth direction of HDMECs was against gravity and that the rate of outgrowth of HDMECs from the PHBV channels was low. Furthermore, the orientations, positions, and diameters of the manually pierced random holes might also have negatively affected the outgrowth of HDMECs. Our group has previously explored the endothelialization of a TE skin model and reported that the cells struggled to enter the TE skin and showed no signs of being organized when they did enter. 24

CAM is a well-vascularized membrane, and we hypothesized that it might represent a very well-vascularized wound bed. Thus, as a positive control experiment, we implanted the TE skin models onto the CAM to assess their vascularization. The results of the ex ovo CAM assay were consistent with the results obtained from the PHBV SVN studies. Although the CAM is a highly vascularized and dynamic environment with fast-proliferating embryonic cells, 81 the results showed that there was no sign of blood vessel or tissue integration into the dermal layer of the reconstructed skin substitutes. However, the presence of dermal cells (fibroblasts and keratinocytes) significantly improved the vascularization in the area of implantation (toward the implanted TE skin) in comparison with DED (with no cells). In addition, the administration of VEGF and 2dDR further increased the angiogenic activity. Although the major function of fibroblasts is to synthesize and maintain the ECM structure, they have been reported to produce collagen, fibronectin, proteoglycans, and connective tissue growth factors, especially in response to wounding. 82,83 They have also been reported to produce soluble angiogenic growth factors such as VEGF, 84 transforming growth factor-beta (TGF-β), 85 and platelet-derived growth factor (PDGF). 86 Furthermore, keratinocytes have previously been reported to improve the proliferation of endothelial cells and to express VEGF. 87 Recently, the presence of cells and in vitro generated ECM has also been shown to improve angiogenesis in the ex ovo CAM assay. 88,89 The enhanced angiogenic properties of TE skin over DED on the CAM are supported by the studies cited above.
While increased angiogenic activity was observed when cells and drugs were presented to the CAMs, the histological evaluation of the implanted TE skin models showed that there was no tissue infiltration or vascularization through the dermal layer of the reconstructed TE skin models. Although no vascularization was observed in any of the implants, one important observation was that the inclusion of dermal cells (fibroblasts and keratinocytes) and proangiogenic agents (VEGF and 2dDR) improved the "take" of the TE skin model by the CAM when compared to DED with no cells. The attachment of the TE skin model to the CAM (whether supplemented with proangiogenic agents or not) was stronger, whereas the DED showed no integration with the CAM and was easily separable from the surface of the membrane after the implantation period.
The developed platform showed encouraging results for use as an in vitro platform to study angiogenesis at either the cellular or tissue level. Future studies are needed to improve the reliability of the proposed in vitro platform and to standardize the methodology for seeding the cells, loading Matrigel into the synthetic vascular scaffolds, piercing holes, and assessing angiogenesis. Within the scope of this study, only one tissue model was developed and assessed on the PHBV SVN. However, the promising results indicate that, with further improvements, the PHBV SVN can offer a valuable platform for studying the in vitro vascularization of tissue models.
CONCLUSION
Herein, we demonstrated the development of a polymeric vascular network to be used as an in vitro platform to study angiogenesis and to investigate the vascularization of complex tissue models. The nanofibrous channels were found to provide a suitable environment for HDMECs to form a monolayer in either the presence or absence of HDFs. The indirect coculture with HDFs was shown to be a desirable approach depending on the intended use of the PHBV SVN. The developed in vitro platform enabled the study of more than one aspect of angiogenesis (migration and tube formation) when combined with Matrigel. In addition, the PHBV SVN provided a convenient platform to study the vascularization of a reconstructed human skin model as a physiologically more relevant and complex structure. Taken together, these results demonstrate that, with further development, the PHBV SVN could offer a promising platform for studying angiogenesis in vitro.
We thank the Engineering and Physical Sciences Research Council (Grant EP/I007695/1) and the Medical Research Council (Grant MR/L012669/1) for funding the equipment used in this study. We thank Dr. Anthony Bullock for his guidance on the reconstruction of the human skin model.
|
v3-fos-license
|
2021-12-08T16:04:50.702Z
|
2021-12-01T00:00:00.000
|
244933428
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2304-8158/10/12/2992/pdf",
"pdf_hash": "fec71693b7ab72d5f60b7436a34fcdffcae2957c",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42366",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"sha1": "260932712a11db88493cc259a6ffff03d35d9487",
"year": 2021
}
|
pes2o/s2orc
|
Effect of Kernel Size and Its Potential Interaction with Genotype on Key Quality Traits of Durum Wheat
This study was conducted to evaluate the influence of kernel size and its potential interaction with genotype on durum wheat quality with emphases on kernel physical characteristics, milling performance, and color-related quality parameters. Wheat samples of seven genotypes, selected from the 2018 Canadian durum variety registration trial, were segregated into large (LK), medium (MK), and small-sized kernels (SK). In general, the kernel size greatly affected the durum wheat milling performance. Within a given size fraction, a strong impact of genotype was shown on the test weight of SK and the milling yields of MK and LK. Particularly, the MK fraction, segregated from the genotypes with superior milling quality, had a higher semolina yield than LK from the genotypes of inferior milling quality, inferring the importance of intrinsic physicochemical properties of durum kernels in affecting milling quality. SK exhibited inferior milling quality regardless of the genotypes selected. A strong impact of genotype was shown for the total yellow pigment (TYP) content and yellowness of semolina, while the kernel size had a significant impact on the brightness and redness of the semolina and pasta. Despite SK possessing much higher TYP, the semolina and pasta prepared from SK were lower in brightness and yellowness but with elevated redness.
Introduction
The physical properties of durum wheat are very important in determining its commercial value. Strong associations have been reported between kernel physical characteristics and durum wheat milling performance, semolina composition, and pasta processing quality [1][2][3][4][5][6]. Emphasis has been on unveiling the relationship between test weight (TWT) and durum wheat milling potential by evaluating samples with a wide range of TWT, protein content, and kernel size distribution (KSD) [3][4][5]. A recent study in our laboratory showed that kernel size is more effective than TWT in predicting the milling performance of durum wheat, based on an assessment of Canadian durum samples with a wide range of TWT and KSD [5].
In general, as kernel size decreased from large to medium, the semolina and total milling yields of durum wheat decreased gradually. A drastic decline in milling quality was observed for small kernels passing through the no. 6 slotted sieve (2.38 mm aperture) [4,5], with much lower milling yields coupled with elevated ash content. Baasandorj, Ohm, Manthey, and Simsek (2015) studied the impact of kernel size and mill type on the milling and baking quality of hard red spring wheat [7]. Compared with large-sized kernels, the small-sized kernels had a much lower flour yield because of the lower proportion of starchy endosperm to bran.
The kernel size of durum wheat can significantly affect not only the milling performance but also the semolina and pasta quality [3,5]. Semolina milled from SK exhibited higher protein content, finer granulation, and higher TYP, but was less bright in color and had elevated ash [3,5]. Cooked pasta made from durum samples with a high proportion of SK had higher firmness but was duller in color.
While the impact of kernel size on semolina and pasta quality is well-documented, limited information is available on the response of genotype to the general relationships between kernel size and the key durum wheat quality parameters. Due to the variation in intrinsic quality, the degree of impact of kernel size on quality could be genotype dependent. Using milling performance as an example, it is not clear if the genotypes with superior milling quality would be less susceptible to kernel size variation than those of inferior milling quality, or vice versa. Genotypes with different intrinsic quality could respond differently to variations in kernel size.
On the other hand, differences in quality among genotypes could be affected by the variation in kernel size. Although thousand kernel weight (TKW) was shown to be highly correlated with semolina yield across four different durum varieties (R2 = 0.92) evaluated by Wang and Fu (2020), greater variation in semolina yield was seen for larger kernels than for smaller ones [5]. The fact that the genotypic variation in durum milling performance was related to kernel size suggests a potentially greater role of genotype in the milling quality of large kernels than in that of small ones.
With the prevalence of hot and dry growing conditions on the Canadian prairies in the last few years, some durum samples, although graded as No. 1 or No. 2 Canada Western Amber Durum (CWAD), showed a relatively wide range of KSD and milling quality [5]. To optimize the commercial value of durum wheat of different KSD and understand how quality parameters respond to kernel size variations, a thorough investigation is required to further elucidate the combined effect of kernel size and genotype on key durum wheat quality parameters.
Therefore, the objective of this study was to evaluate the influence of kernel size and its potential interaction with genotype on key durum wheat quality traits with emphases on the wheat physical properties, milling performance, and color-related quality attributes.
Wheat Samples
Seven genotypes were selected from the 2018 Canadian durum wheat variety registration trial based on their intrinsic differences in milling and color-related quality parameters. A composite of each genotype was prepared from wheat samples grown at nine locations across western Canada. Based on the availability and grading information of wheat samples from the nine locations, a recipe was developed for the preparation of the wheat composites. All composites were graded as No. 1 CWAD. Each of these variety composites was segregated into three size fractions using a Carter dockage tester (Simon-Day Ltd., Winnipeg, MB, USA) equipped with no. 6 (2.38 mm × 19.05 mm) and no. 7 (2.78 mm × 19.05 mm) slotted sieves. The segregated kernel size fractions were categorized as small-sized kernels (SK, passing through the no. 6 slotted sieve), medium-sized kernels (MK, passing through the no. 7 but remaining above the no. 6 slotted sieve), and large-sized kernels (LK, remaining above the no. 7 slotted sieve).
Wheat Physical Properties
To accommodate the small sample size, the test weight (TWT) was measured using a 0.5 L container equipped with a Cox funnel following the standard procedure described by the Canadian Grain Commission [8]. The value in grams per half liter was converted to kg per hectoliter using the test weight conversion chart for amber durum wheat. TKW was determined with an electronic seed counter (Model 750, The Old Mill Company, Savage, Maryland) using a 20 g sample of wheat from which all broken kernels had been manually removed. KSD was determined on a series of slotted sieves (i.e., no. 6, 7, and 8). One hundred grams of wheat was subsampled and manually shaken for 30 s, after which the four fractions separated by the sieves were collected and weighed individually. All wheat physical tests were conducted in duplicate.
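For orientation, the hypothetical Python snippet below illustrates the arithmetic behind the TKW calculation and a naive test weight conversion; the numbers are invented, and the simple linear g/0.5 L to kg/hL scaling is only an approximation of the official amber durum conversion chart actually used.

```python
def thousand_kernel_weight(sample_mass_g: float, kernel_count: int) -> float:
    """Thousand kernel weight (g) from a counted subsample, e.g. a 20 g sample."""
    return sample_mass_g / kernel_count * 1000.0

def test_weight_kg_per_hl(grams_per_half_litre: float) -> float:
    """Naive volumetric conversion from g/0.5 L to kg/hL.

    The study used the official amber durum conversion chart, which differs
    slightly from this simple scaling; this is only an approximation.
    """
    return grams_per_half_litre * 2.0 / 1000.0 * 100.0  # g/0.5 L -> kg/hL

# Hypothetical example: 20 g of wheat containing 450 kernels, 405 g per half litre
print(thousand_kernel_weight(20.0, 450))   # ~44.4 g
print(test_weight_kg_per_hl(405.0))        # ~81.0 kg/hL
```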
Standard Durum Milling Procedure
Following the mill flow previously described by Dexter et al. (1990) [9], original unsorted wheat samples were milled into semolina in duplicate 2.3 kg lots with a four-stand Allis-Chalmers laboratory mill (West Allis, WI, USA) in conjunction with a laboratory purifier. The mill room was controlled at 21 °C and 60% relative humidity. Semolina is defined as having less than 3% pass through a 149 µm sieve. The total milling yield is the combination of semolina and flour. Both the total and semolina yields are reported as a percentage of the cleaned wheat on a constant moisture basis. Semolina granules were prepared by adding the most refined flour stream(s) to semolina until 70% extraction was reached for quality analysis.
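As an illustration of what reporting yields "as a percentage of the cleaned wheat on a constant moisture basis" involves, the sketch below applies a standard dry-matter correction; the moisture values and masses are hypothetical, and the exact correction convention used in the laboratory may differ.

```python
def yield_constant_moisture(product_mass_g: float, product_moisture_pct: float,
                            wheat_mass_g: float, wheat_moisture_pct: float) -> float:
    """Milling yield (%) with both the product and the cleaned wheat expressed on
    the same (dry-matter) basis, removing the effect of differing moisture contents."""
    product_dm = product_mass_g * (100.0 - product_moisture_pct) / 100.0
    wheat_dm = wheat_mass_g * (100.0 - wheat_moisture_pct) / 100.0
    return product_dm / wheat_dm * 100.0

# Hypothetical example: 1495 g of semolina at 14.5% moisture from 2300 g of
# cleaned wheat tempered to 16% moisture
print(round(yield_constant_moisture(1495, 14.5, 2300, 16.0), 1))  # ~66.2 %
```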
Micro-Milling and Purification Protocol
Wheat samples of various size fractions were milled to predict semolina and total milling yields following the micro-milling procedure previously developed by Wang et al. (2019) [10]. After tempering to a moisture content of 16% overnight, 200 g of wheat sample was ground with a Quadrumat Junior (QJ)-II-G mill, semolina version (C.W. Brabender Instruments, Inc., South Hackensack, NJ, USA), with the original sifter removed. The resulting wholemeal was sifted through a universal laboratory sifter (Bühler MLUA GM sieve, Bühler AG) equipped with a bottom screen of 180 µm to remove the flour and a top screen of 630 µm to retain the bran-rich fraction. The unpurified semolina fraction (SY1) between the two screens was collected. Based on the prediction models developed by Wang et al. (2019) [10], the semolina yield and total milling yield were calculated from the amounts of SY1 and the bran-rich fraction. Formulas (1) and (2) are as follows:

Semolina Yield (%) = 1.02 × Bran-rich fraction + 1.80 × SY1 − 73.17 (1)

Total Milling Yield (%) = 0.62 × SY1 + 39.42 (2)

To prepare refined semolina for analysis and pasta processing, the original purification steps described by Dexter et al. [9] were modified to accommodate the small semolina sample size with three purification and two sizing passages. A detailed description of the micro-milling and purification steps is illustrated in Figure 1. In a typical experiment, SY1 obtained from the QJ semolina mill was passed over a laboratory purifier (Namad, Rome, Italy) equipped with four different sizing sieves (335, 425, 570, and 670 µm). After the first purification (P1), large semolina granules collected in trays 4 and 5 were reduced with the first sizing roll (S1). The reduced semolina was sifted through a box sifter equipped with a 180 µm sieve for 30 s to remove the flour. The resulting fraction retained above the 180 µm sieve, together with the semolina collected in tray 3 at P1, was subjected to a second purification (P2). After P2, the semolina granules which remained in trays 4 and 5 were subjected to a second sizing step (S2). The reduced fraction was sifted with a box sifter for 30 s to remove bran/shorts (>425 µm) and flour (<180 µm). The semolina fraction between 180 and 425 µm was combined with the semolina collected in tray 3 at P2 and transferred to the third purification (P3). Refined semolina was collected as trays 1 and 2 in P1, trays 1 and 2 in P2, and trays 1, 2, and 3 in P3. Trays 4 and 5 in P3 were defined as Feeds.
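A minimal sketch of how the predicted yields follow from Formulas (1) and (2); the function names are ours, and the input values are hypothetical fractions (expressed as percentages of the 200 g milled sample), not data from this study.

```python
def predicted_semolina_yield(bran_rich_pct: float, sy1_pct: float) -> float:
    """Formula (1): predicted semolina yield (%) from the micro-milling fractions."""
    return 1.02 * bran_rich_pct + 1.80 * sy1_pct - 73.17

def predicted_total_milling_yield(sy1_pct: float) -> float:
    """Formula (2): predicted total milling yield (%) from the unpurified semolina fraction."""
    return 0.62 * sy1_pct + 39.42

# Hypothetical micro-milling result: 25% bran-rich fraction, 55% SY1
sem = predicted_semolina_yield(25.0, 55.0)
tot = predicted_total_milling_yield(55.0)
print(f"predicted semolina yield: {sem:.1f}%, total milling yield: {tot:.1f}%")
```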
Semolina Quality Testing
The protein content of the whole wheat and semolina were measured following the method previously described by Williams et al. [11] with a LECO Truspec N CNA (combustion nitrogen analysis) analyzer (Saint Joseph, MI). Ground wheat meal was prepared using a Retsch ZM 200 mill (Retsch GmbH, Haan, Germany) equipped with a 0.5 mm screen (Trapezoid holes) at a speed of 14,000 rpm. Ash content, wet gluten, and gluten index were determined using AACC International approved methods 76-31.01 and 38-12.02, respectively [12]. Semolina color was measured with a Minolta colorimeter CR-410 (Konica Minolta Sensing, Inc., Tokyo, Japan) with a D65 illuminant. Color readings are expressed on the CIELAB color space system with L*, a* and b* parameters representing brightness, redness, and yellowness values, respectively. A micro scale rapid extraction procedure as described by Fu et al. [13] was used for the determination of the total yellow pigment (TYP) content of the semolina.
Spaghetti Processing and Color Measurement
Spaghetti were produced from semolina using a customized micro-extruder (Randcastle Extrusion Systems Inc., Cedar Grove, NJ, USA) following the method of Fu et al. [6]. Semolina was first mixed with water in a high-speed asymmetric centrifugal mixer (DAC 400 FVZ SpeedMixer, FlackTek, Landrum, SC, USA) at a water absorption of 31-32% to maintain a constant extrusion pressure of about 100 psi. Vacuum was applied to eliminate the introduction of air bubbles and minimize oxidative degradation of the yellow pigment, after which the dough crumbs were extruded through a four-hole Teflon-coated spaghetti die (1.8 mm). The fresh pasta was subsequently dried in a pilot pasta dryer (Bühler, Uzwil, Switzerland) with a 325 min drying cycle and a maximum temperature of 85 °C. To measure spaghetti color, 6.5 cm bands of spaghetti strands were mounted on a white mat board with minimum interspace. Spaghetti color was determined using a Minolta colorimeter (CR-410) as described above.
Statistical Analysis
All data were analyzed with Microsoft Excel and SAS 9.4 software (SAS Institute Inc., Cary, NC, USA). A 3 × 7 factorial design was applied to evaluate the impact of kernel size and genotype on key durum wheat quality characteristics, with 3 levels of kernel size (small, medium, and large) and 7 genotypes (A to G) representing the major sources of variation. Each segregated kernel size fraction from a selected genotype was treated as an independent sample. The significance of each factor, as indicated by F values, and the percentage of variability assignable to each factor, measured as the ratio of its sum of squares to the total sum of squares, were calculated. Tukey's test, following the analysis of variance, was used to identify significant differences at p < 0.05.
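For readers who wish to reproduce this analysis outside SAS, the following sketch shows an equivalent two-way ANOVA with sum-of-squares partitioning; the data file and column names are hypothetical placeholders, not files supplied with this study.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format table: one row per duplicate measurement, with
# columns 'size' (small/medium/large), 'genotype' (A-G) and a response,
# e.g. 'semolina_yield'.
df = pd.read_csv("durum_quality.csv")  # placeholder file name

model = ols("semolina_yield ~ C(size) * C(genotype)", data=df).fit()
anova = sm.stats.anova_lm(model, typ=1)  # sequential sums of squares

# Percentage of variability assignable to each factor = SS_factor / SS_total
anova["pct_of_total_SS"] = 100 * anova["sum_sq"] / anova["sum_sq"].sum()
print(anova[["df", "sum_sq", "F", "PR(>F)", "pct_of_total_SS"]])
```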
Influence of Kernel Size and Genotype on Physical Properties of Durum Wheat
To understand the impact of kernel size, genotype, and their interactions on major durum wheat quality parameters, seven durum genotypes with variation in milling and color related quality attributes were segregated into three kernel size fractions using a Carter dockage tester. The wheat and semolina quality parameters of the unsorted samples are summarized in Table 1. The selected genotypes differed greatly in semolina and total milling yields, TYP, and gluten index, but with less variation in wheat physical properties (i.e., HVK, TWT, TKW, KSD), wheat protein, and ash contents. The semolina and total milling yields from the micro-milling procedure were comparable to those of standard laboratory milling except genotype D which showed higher semolina and total milling yields in the micro-milling process.
The significance of kernel size, genotype, and their interactions on major durum wheat quality parameters, as measured by the F value and percentage of variability assignable to each factor and their interactions, are summarized in Table 2. Significant impact was found for kernel size, genotype, and their interactions on all wheat quality parameters examined (p < 0.001). In terms of wheat physical properties, kernel size accounted for more than 80% of the variability in TWT and TKW with minor influences shown for genotypes and their interactions. Table 3 summarizes the impact of genotype on key quality parameters in relation to kernel size. TKW reduced drastically from 51.0 ± 1.8 g of LK to 36.1 ± 0.9 g of MK, but was only accompanied by a small decrease of TWT from 83.7 ± 0.7 kg/hL to 82.2 ± 0.6 kg/hL. Further decrease of kernel size from MK to SK led to a much greater reduction in average TWT from 82.2 kg/hL to 77.6 kg/hL, suggesting SK (TKW of 23.9 ± 0.4 g) was much less dense than the corresponding larger ones. A similar decrease in TKW and TWT was reported when a bulk CWAD cargo aggregate was fractioned into five different kernel sizes [5]. Wang and Fu reported that TWT is less effective than TKW in distinguishing the difference in kernel size [5].
Interestingly, the impact of genotype on TWT was greater for SK than for both MK and LK (Table 3). Although there was no significant difference in TKW of the SK fractions, SK possessed much greater variability in TWT, ranging from 74.5 to 80.6 kg/hL (F value = 465.6, p < 0.001) as compared to MK (81.1-82.6 kg/hL, F value = 73.3, p < 0.001) and LK (82.3 to 84.5 kg/hL, F value = 130, p < 0.001). On the other hand, greater variation in TKW among genotypes was shown for LK (48.3 to 52.8 g, F value = 23.81, p < 0.001) in comparison to MK (34.8-37.0 g, F value = 4.5, p < 0.05) and SK (23.2 to 24.4 g, F value = 1.9, ns). TWT can be affected by wheat moisture, kernel density, kernel shape, and packing factors, which were not directly associated with milling yield [14][15][16][17][18]. Simmons and Meredith attributed the difference in TWT to bran surface roughness, distribution of kernel size, shape, volume, and kernel density [19]. Troccoli and di Fonzo found that kernel shape such as rectangular aspect ratio (kernel width/kernel length) and circularity shape factor (4π × area/perimeter 2 ) were positively related to TWT [20]. More recently, Wang and Fu reported that durum wheat with a high proportion of SK could exhibit TWT comparable to the wheat samples of larger kernel size but exhibited much lower milling yields [5]. The relationship appears to be genotype dependent. The great variation in TWT of the SK fraction could likely be attributable to large differences in kernel shape and packing density. Due to the potential strong impact of genotype, TWT can vary widely for small-sized kernels. Therefore, TWT may not be reliable as a direct indicator of the milling potential of durum wheat when SK is predominantly present. It is critical to monitor the KSD when a larger proportion of SK is present. Wang and Fu (2020) demonstrated that by accounting for the difference in KSD, greater relationships were found for TKW (R 2 > 0.91, p < 0.001) or the proportion of kernels passing the no.6 slotted sieve with milling yields than TWT alone (R 2 = 0.75, p < 0.001) by studying 21 wheat composites of four major CWAD varieties [5].
Influence of Kernel Size and Genotype on Milling Quality of Durum Wheat
From Table 2, a significant impact of kernel size, genotype, and their interactions was found on durum milling performance (semolina and total milling yields and semolina ash content). Based on the ANOVA test, more than 80% of the variation in milling yields was attributed to kernel size alone, with a greater impact of kernel size being noted for semolina yield than for total milling yield (F value: 13177.7 vs. 7392.8). Figure 2 demonstrates the semolina and total milling yields in relation to TKW and TWT as affected by kernel size. Regardless of the genotype selected, a decrease of kernel size significantly reduced semolina and total milling yields. A drastic reduction of milling yields was evident for kernels passing the no. 6 slotted sieve (Table 3). On average, LK (68.0 ± 0.9%) had a 1.3% higher semolina yield than MK (66.7 ± 0.7%), and the latter was about 3.1% higher in semolina yield than SK (63.6 ± 0.7%). Kernel size is clearly a better indicator of average milling yields for SK than TWT (Figure 2). For LK and MK, however, both TWT and TKW provided a strong indication of average milling quality. A similar adverse effect of SK on durum milling quality was reported by Wang and Fu (2020) and Dexter et al. (2007), who examined durum composites with a wide variation in kernel sizes [4,5].
From Figure 2, considering the response of genotype to the relationship between kernel size and milling quality, genotypes A and B appeared to be more susceptible to kernel size variations showing a greater decrease (~4.9%) in semolina yield from 68.9 to 64.0% than those of the inferior ones (e.g., G) from 66.2 to 62.9% (vs. 3.3%). A similar trend was found for total milling yield (3.5% vs. 2.7%). There were significant differences in semolina and total milling yields among the genotypes at all three kernel size fractions (Figure 2a,b). The difference in milling yields was greater for LK (2.7%) than MK (1.8%) and SK (1.3%) among the selected genotypes (Table 3).
When comparing the milling quality of all kernel size fractions (Figure 2), the semolina and total milling yields of MK segregated from genotypes with superior milling quality (A and C) were comparable or superior to those of LK from genotypes of inferior or moderate milling quality (E, F, and G), despite the TKW of those MK (34.8 to 37.0 g) being significantly lower than that of their LK counterparts (48.3 to 52.8 g). In addition, LK from genotypes with inferior milling quality showed lower milling yields. SK exhibited inferior milling quality to both MK and LK regardless of the genotype selected (Table 3). SK is very detrimental to the overall milling quality but usually represents only a small proportion of commercial durum shipments. Analysis of variance excluding SK revealed that genotype accounted for 52.0% of the variation in semolina yield, followed by kernel size (44.3%) and their interaction (3.4%). These results strongly suggest that intrinsic kernel properties could play an important role in determining the milling quality of durum wheat. Selection of genotypes with superior milling quality could compensate for the negative impact of SK, which is usually present at a higher percentage in dry and hot growing seasons. When a large proportion of small kernels is present, however, milling quality could be poor regardless of the genotype selected.
In addition to the milling yields, ash content is an important part of overall milling quality. The ash contents of wheat and semolina increased with the decrease of kernel size (Table 3). Coupled with the lower semolina yield of SK, its high semolina ash could further decrease the wheat milling potential when a constant degree of semolina refinement is required.
Milling quality of durum wheat is a complicated trait [10]. From Figure 2, a cooperative effect between kernel size and genotype on durum milling quality was evident when considering both MK and LK. The average milling yields of SK were lower and the impact of genotype was much less (Table 3). While the impact of some common kernel physical parameters (e.g., vitreousness, TWT, and KSD) on milling quality has been extensively investigated, the work on the intrinsic properties that contribute to varietal differences in milling quality of durum wheat are scarce [19,[21][22][23][24]. Both kernel morphological parameters (e.g., length, width, thickness, size, shape, etc.) and kernel physical properties (e.g., hardness, vitreousness, TWT) could affect milling quality. Simmons and Meredith (1979) summarized three major factors that contribute to the difference in milling quality: the amount of endosperm contained in the grain (endosperm-to-bran ratio); the separability of the endosperm from the aleurone and bran layers (structure dissociates on fracture and milling); and endosperm hardness, which determines how the kernel fragments during the milling process [19]. Novaro et al. (2001) reported ellipsoidal volume was the best predictor of semolina yield among other grain morphological parameters evaluated [25]. Haraszi et al. found that the rheological phenotype phases of an average crush response profile obtained from a single kernel characterization system provided good predictions of the laboratory milling potential of durum wheats [26].
Due to the relatively large kernel size of the original unsorted samples (Table 1) and the similar TKW of the segregated kernel fractions (Table 3), the varietal differences in milling quality among selected genotypes could be attributed to their intrinsic kernel properties. Information on hardness, endosperm-to-bran ratio, and kernel fracture behavior could shed some light on the genotypic variation in milling quality. A study is currently being conducted in our laboratory to investigate the underlying factors, which could affect the milling quality of durum genotypes with a similar size of wheat kernels.
Influence of Kernel Size and Genotype on Semolina and Pasta Color Parameters
Both genotype and kernel size significantly affected semolina TYP (Table 2). Figure 3 presents the semolina TYP of the three kernel size fractions segregated from the selected genotypes. The decrease of kernel size led to a significant increase in semolina TYP for all genotypes. Alvarez et al. (1999) reported a similar negative relationship between kernel weight and yellow pigment concentration [27]. A greater difference in TYP was shown between MK and LK (1.0-1.6 ppm) than between small and medium ones (0.2-0.9 ppm). The degree of increase in semolina TYP as shown in Figure 3 was comparable to the level previously reported by Wang and Fu, who found that semolina TYP of SK was about 1.5 ppm higher than that of LK segregated from a bulk CWAD cargo composite [5]. Large genetic variations in semolina TYP from 2.3 to 3.0 ppm were noted for the genotypes used in this study across the three different kernel sizes. The color of semolina and pasta made from the size fractions is summarized in Figure 4. Brightness and redness of semolina were greatly influenced by kernel size, while the genotype had a large impact on semolina yellowness (Table 2). In general, semolina prepared from MK and LK was much brighter (Figure 4a) and less dull (Figure 4c) compared to that prepared from SK. Much greater variation in brightness and redness was also shown for SK fractions than for MK and LK ones (Figure 2 and Table 3).
With the decrease of kernel size from LK to MK, significant increases in semolina TYP and yellowness were shown (Figure 4e). However, except for genotypes D and G, reduction of kernel size from MK to SK did not lead to further increase in semolina yellowness despite the TYP being significantly higher in SK. The drastic decrease in semolina brightness and increase in redness for small kernels might mask semolina yellowness. Table 2 showed a large impact of kernel size on pasta color. The decrease in kernel size led to a significant reduction in pasta brightness (Figure 4b) and an increase in pasta redness (Figure 4d). Superior yellowness was seen for pasta prepared from medium and large kernel fractions. However, a drastic decrease in pasta yellowness of about 7 units was noticed for SK despite its semolina TYP being significantly higher (Figure 4f). By plotting semolina yellowness against TYP for three different kernel size fractions of the selected genotypes, it was shown that semolina b* linearly increased about 1.2 units with each ppm increase in TYP (Figure 5a). The degree of increase in semolina yellowness in relation to TYP was similar for all three size fractions. For a given TYP, however, semolina prepared from LK and MK consistently showed superior yellowness than that of SK, inferring the negative impact of SK on semolina yellowness. This negative impact was much more profound for pasta yellowness (Figure 5b). As far as SK fraction is concerned, the increase in semolina TYP resulted in little increase in pasta yellowness. This is in contrast to the MK and LK fractions evaluated in this study.
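The slope quoted above can be estimated with a simple linear fit; the (TYP, b*) pairs below are hypothetical values chosen only to illustrate a slope of roughly 1.2 units per ppm, not data taken from Figure 5a.

```python
import numpy as np

# Hypothetical (TYP in ppm, semolina b*) pairs for one kernel size fraction
typ = np.array([2.3, 2.5, 2.7, 2.9, 3.1])
b_star = np.array([22.0, 22.3, 22.5, 22.7, 23.0])

slope, intercept = np.polyfit(typ, b_star, deg=1)
print(f"b* increases by about {slope:.2f} units per ppm TYP (intercept {intercept:.1f})")
```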
Pasta brightness and yellowness decrease with the increase of semolina ash content [28,29]. Although SK have lower semolina and total milling yields, the higher ash content suggests inclusion of a greater proportion of external tissues, which could lead to pasta browning due to high enzymatic activities [28]. Maillard reaction between amino acid and reducing sugars could lead to the undesirable reddish color of pasta dried at high temperature [30,31]. Although the protein content was not significantly higher for SK as compared with MK and LK, pasta prepared from SK was much redder (6.2-7.3 in a*) than that made from LK (2.7-3.7 in a*), suggesting other underlying factors such as amino acid composition or reducing sugar content may favor the development of the reddish coloration of pasta prepared from small kernels. Joubert et al. revisited the role of particle size, ash, and protein on pasta color and viscoelasticity [32]. By combining the milling fractions of five durum wheat patches, a series of formulated mixes of semolina/flour were prepared so that the effect of protein, ash, and particle size distribution (PSD) could be evaluated in an unbiased manner. The authors found that pasta brightness and yellowness decreased while redness increased with the increase of semolina ash content regardless of protein content and PSD. The authors attributed the increase in pasta redness to the elevation of reducing sugars accompanied by the high ash content in the semolina. A significant correlation was found between the ash content and total arabinoxylans in semolina, which were known to concentrate in the outer layers of the grain [33]. The extrusion process can significantly increase the reducing sugars due to shearing stress [34]. It is likely that the SK contains a high level of arabinoxylan, which could result in a high level of reducing sugar during extrusion and increase the potential of Maillard reactions [32]. The elevated redness/brownness and decrease in pasta brightness could subsequently mask pasta yellowness. Wang and Fu proposed that the drastic elevation in pasta redness due to the Maillard reaction under high-temperature (85 • C) drying conditions could adversely impact pasta yellowness regardless of the level of TYP [5].
Conclusions
By segregating durum samples of selected genotypes into three kernel size fractions, the impact and relative importance of kernel size, genotype, and their interaction on major quality parameters were characterized in this study. For LK and MK fractions, TWT and kernel size are closely related. However, a greater influence of genotype on the TWT of SK was evident. Regardless of the genotype, the SK fraction is detrimental to durum milling performance, as shown by low semolina yield, high semolina ash content, and poor semolina color. The degree of impact of genotype on durum milling performance appears to be related to kernel size: a greater impact was shown for LK than for MK and SK, based on the seven genotypes evaluated in this study. When the SK fraction is excluded, the genotype, or the intrinsic properties of the durum kernel, plays an important role in contributing to overall milling quality. Genotype is a dominant factor in determining semolina TYP and yellowness, even though TYP increases with decreasing kernel size. Semolina and pasta prepared from MK and LK fractions were much brighter and less dull than those made from SK. Regardless of the genotype, the SK fraction exerted a strong detrimental effect on pasta yellowness, despite the higher level of TYP in SK. To meet the milling and end-product quality expectations of domestic and international durum buyers, it is critical to monitor the presence of SK (through a no. 6 slotted sieve) in commercial durum samples, particularly in hot and dry growing seasons. More research is needed to confirm the potential interactions between genotype and kernel size and their effects on durum quality by using wheat samples from various genotypes and different growing conditions.
|
v3-fos-license
|
2020-07-30T02:06:17.741Z
|
2020-07-28T00:00:00.000
|
220853106
|
{
"extfieldsofstudy": [
"Medicine",
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://doi.org/10.1002/jlcr.3874",
"pdf_hash": "91e8d98fba96cee51b8742992424c4fa7717180c",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42367",
"s2fieldsofstudy": [
"Chemistry"
],
"sha1": "e4316a5ddad2f1dd2573276e59b70c5d6af68cb1",
"year": 2020
}
|
pes2o/s2orc
|
Enantioselective synthesis of tri-deuterated ( – )-geosmin to be used as internal standard in quantitation assays
For the accurate and sensitive quantitation of the off-flavor compound geosmin, particularly in complex matrices, a stable isotopologue as internal standard is highly advantageous. In this work, we present a versatile synthetic strategy leading from (4a R )-1,4a-dimethyl-4,4a,5,6,7,8-hexahydronaphthalen-2(3 H )-one to tri-deuterated ( – )-geosmin ((4 S ,4a S ,8a R )-4,8a-dimethyl(3,3,4- 2 H 3 ) octahydronaphthalen-4a(2 H )-ol). The starting material was readily accessible from inexpensive 2-methylcyclohexan-1-one using previously published procedures.
| INTRODUCTION
(-)-Geosmin ( Figure 1) is a highly odorous molecule with a characteristic musty and earthy smell and a low odor detection threshold value in the range of 1-20 ng/kg. 1,2 Its name is derived from the ancient Greek words "geo" meaning earth and "osme" meaning odor.
In nature, geosmin is produced as a secondary metabolite by several types of microorganisms, including actinomycetes, cyanobacteria, myxobacteria, and fungi. [3][4][5] The biosynthesis involves a Mg²⁺-dependent sesquiterpene synthase, which converts farnesyl diphosphate (FPP) to a mixture of sesquiterpenoids including geosmin. 3 The compound can cause a musty and earthy off-flavor in foods and beverages such as drinking water, wine, fish, and cereals. [6][7][8] In the worst case, the off-flavor may lead to consumer rejection and significant economic loss. Recently, we identified geosmin in fermented cocoa and demonstrated that it may be transferred in odor-active amounts to chocolate. To avoid this, an accurate and sensitive method for its detection and quantitation in fermented cocoa is essential. In gas chromatography-mass spectrometry (GC-MS) and in liquid chromatography-mass spectrometry (LC-MS), the use of a stable isotopically substituted analog of the target compound as internal standard is currently considered the best approach. 9,10 Although racemic deuterated geosmin is available from chemical companies, it is highly expensive (20 000 € for 150 mg). Therefore, we attempted to find a convenient synthetic route to deuterated geosmin as an alternative to the commercial product.
Enantiopure (4aR)-1,4a-dimethyl-4,4a,5,6,7,8-hexahydronaphthalen-2(3H)-one 1 was converted to trideuterated geosmin by a sequence of four synthetic steps as depicted in Scheme 1. The first step was the epoxidation of the double bond in 1. Based on the work of Gosselin et al., 13 mCPBA was chosen as the oxidizing agent. For the synthesis of isotopically unmodified geosmin, they compared the suitability of m-chloroperbenzoic acid and hydrogen peroxide for the epoxidation of 1. The use of the peroxy acid afforded a 96:4 mixture of the α- and β-epoxyketones and an overall yield of 80% of the α-epimer, whereas such a high stereoselectivity could not be achieved with hydrogen peroxide. Gosselin et al. 13 concluded that the steric hindrance induced by the angular methyl group over the β-face in 1 accounts for the preferential attack of the bulky aromatic peroxy acid molecule on the α-face. We adopted the approach of Gosselin et al. for the epoxidation of 1 but applied NaHCO3 as an additional base because preliminary experiments had revealed that this slightly increased the yield (data not shown). This was to be expected, as the acidity of m-chloroperbenzoic acid can lead to side products. 18 Protonation of the double bond in the educt could lead to the formation of an alcohol, whereas protonation of the epoxide would lead to the formation of a diol. The epoxidation step proceeded with a yield of 78%. The epoxide was then subjected to reduction with LiAlD4, which led to the incorporation of two deuterium atoms and finally resulted in diol 3.
By selective tosylation, the secondary hydroxy group of 3 was converted into a good leaving group, affording 4. Without isolation of 4, a second reduction step with LiAlD4 replaced the tosyl group by deuterium, finally leading to the trideuterated target molecule (4S,4aS,8aR)-4,8a-dimethyl(3,3,4-²H₃)octahydronaphthalen-4a(2H)-ol 5, that is, (²H₃)geosmin. The compound was purified by flash chromatography. The overall yield from 1 was 24%. The enantiomeric distribution of (²H₃)geosmin was determined by GC-MS using a β-cyclodextrin-based chiral column. The elution order was taken from a previous report on the enantioseparation of geosmin in wine. 19 Results indicated an enantiomeric purity of 91%, which confirmed the proposed enantioselectivity of the synthetic approach.
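For context on the 91% value, enantiomeric purity follows directly from the two peak areas of the chiral GC-MS separation; the peak areas in the sketch below are illustrative only, not the measured values.

```python
def enantiomeric_purity(area_major: float, area_minor: float) -> float:
    """Enantiomeric purity (%) = area of major enantiomer / total area * 100."""
    return 100.0 * area_major / (area_major + area_minor)

# Illustrative peak areas giving ~91% purity (i.e. ~82% ee)
purity = enantiomeric_purity(area_major=91.0, area_minor=9.0)
ee = 2 * purity - 100  # enantiomeric excess
print(f"enantiomeric purity: {purity:.0f}%, ee: {ee:.0f}%")
```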
The incorporation of three deuterium atoms was confirmed by GC-MS. The EI mass spectrum of (²H₃)geosmin (Figure 2A) showed a molecular ion of m/z 185, whereas the spectrum of the isotopically unmodified geosmin showed a molecular ion of m/z 182 (Figure 2B). No signals of m/z 182, 183, and 184 were present in the spectrum of the synthesized molecule, showing that no undeuterated, monodeuterated, or dideuterated geosmin isotopologues were present. Thus, the approach resulted in a uniformly trideuterated product. Further evidence was obtained by NMR. ¹H and ¹³C NMR spectra allowed unambiguous assignment of the positions of the three deuterium atoms. The singlet obtained in the ¹H NMR spectrum confirmed the presence of the deuterium atom at C4. Moreover, the multiplicity of the signals obtained in the ¹³C NMR spectrum for carbons C3 and C4 indicated the coupling with two and one deuterium atoms, respectively.
| Chemicals and materials
The chemicals used were obtained from commercial sources: m-chloroperbenzoic acid (77%), p-toluenesulfonyl chloride, pyridine, sodium sulfate, and lithium aluminum deuteride were purchased from Merck (Darmstadt, Germany); sodium bicarbonate from Alfa Aesar (Karlsruhe, Germany); tetrahydrofuran from Santa Cruz Biotechnology (Heidelberg, Germany). Diethyl ether and dichloromethane were purchased in technical grade from Fisher Scientific (Loughborough, UK) and VWR (Darmstadt, Germany), respectively, and they were freshly distilled before use. Hexane, tetrahydrofuran, and chloroform were purchased in technical grade and stored over molecular sieves (4 Å). Chloroform was filtered through alumina before use to eliminate traces of ethanol present as stabilizer. Silica gel 60 (particle size: 0.035-0.070 mm) and LiChroprep® DIOL (particle size: 0.040-0.063 mm) used for purification, as well as precoated silica gel thin-layer chromatography (TLC) plates (layer thickness 750 μm, no fluorescence indicator) used for reaction monitoring, were purchased from Merck. Hexane and diethyl ether mixtures in different proportions were used as mobile phase. Cerium ammonium molybdate or potassium permanganate solutions were employed in TLC as stains for substance detection, followed by heat treatment (200 °C).
| Gas chromatography-mass spectrometry
EI mass spectra were recorded using a GC-MS system consisting of a Trace GC Ultra gas chromatograph coupled to a single quadrupole ISQ mass spectrometer (Thermo Fisher Scientific, Dreieich, Germany). Compounds were dissolved in dichloromethane at a concentration of 20 μg/mL. An aliquot (1 μL) was introduced by an autosampler GC PAL, PAL Firmware 2.5.2 (Chromtech, Bad Camberg, Germany), into a PTV injector (Thermo Fisher Scientific) at 40 °C. The injector temperature was raised at 12 °C/s to 60 °C (held for 0.5 min) and then by 10 °C/s to 240 °C (held for 1 min). The carrier gas was helium at a flow rate of 2 mL/min. The split flow was 24 mL/min. The column was a DB-1701 coated fused silica capillary, 30 m × 0.25 mm i.d., 0.25-μm film thickness (Agilent, Waldbronn, Germany). The initial oven temperature was 40 °C. After 2 min, it was raised at 6 °C/min to 230 °C (held for 5 min). Mass spectra were acquired at an ionization energy of 70 eV and a scan range of m/z 40-300. The mass spectra were evaluated using Xcalibur 2.0 software (Thermo Fisher Scientific). Chemical ionization (CI) mass spectra were recorded using an enantio-GC-MS system consisting of a Trace 1310 gas chromatograph coupled to a Q Exactive mass spectrometer (Thermo Fisher Scientific). Compounds were dissolved in dichloromethane at a concentration of 10 μg/mL. An aliquot (1 μL) was introduced by a TRI Plus RSH autosampler (Thermo Fisher Scientific) into a PTV injector (Thermo Fisher Scientific) used in on-column mode. The carrier gas was helium at a flow rate of 1 mL/min. The column was a BGB-176 coated fused silica capillary, 30 m × 0.25 mm i.d., 0.25-μm film thickness (BGB Analytik, Rheinfelden, Germany). The initial oven temperature was 40 °C. After 2 min, it was raised at 2 °C/min to 200 °C, and the final temperature was held for 5 min. Mass spectra were acquired in targeted SIM mode at an ionization energy of 70 eV and a scan range of m/z 50-500. The reagent gas was isobutane. The mass spectra were evaluated using Xcalibur 2.0 software (Thermo Fisher Scientific).
Compound 1 (400 mg, 2.25 mmol) was dissolved in dichloromethane (30 mL), and sodium hydrogen carbonate (378 mg, 4.50 mmol) was added. The mixture was cooled on ice, and m-chloroperbenzoic acid (581 mg, 3.37 mmol) was added under argon over a period of 10 min. Quickly, a white precipitate was formed. After 2 h, the suspension was brought to room temperature and left under magnetic stirring for additional 24 h. The suspension was washed with a mixture of a saturated aqueous sodium thiosulfate solution and a saturated aqueous sodium carbonate solution (1 + 1, v + v; 2 × 50 mL), followed by brine (50 mL) and finally dried over anhydrous sodium sulfate. After filtration, the solvent was removed under reduced pressure, and the residue was purified by flash chromatography on silica gel to afford 342 mg of 2 (78% yield). TLC: R f 0.48 (hexane/diethyl ether, 4 + 1, v + v).
Under an argon atmosphere, lithium aluminum deuteride (185 mg, 4.41 mmol) was suspended in dry THF (15 mL), and the flask was heated to gentle reflux. Epoxide 2 (342 mg, 1.76 mmol) was dissolved in dry THF (5 mL) and added dropwise. After 2 h, the flask was cooled on ice, and a saturated aqueous solution of sodium sulfate (5 mL) was slowly added. Hydrochloric acid (1%; 1 mL) was added, and the mixture was stirred. The aqueous layer was separated and extracted with diethyl ether (2 × 20 mL). The organic phase and the diethyl ether extracts were combined, washed with brine (2 × 20 mL), and dried over anhydrous sodium sulfate. After filtration, the solvent was removed under reduced pressure, and the crude product was purified by flash chromatography on silica gel to afford 198 mg of 3 (56% yield). TLC: Rf 0.28 (hexane/diethyl ether, 2 + 3, v + v).

4.6 | (4S,4aS,8aR)-4,8a-dimethyl(3,3,4-²H₃)octahydronaphthalen-4a(2H)-ol (5)

Diol 3 (198 mg, 0.99 mmol) was dissolved in chloroform (40 mL). Under an argon atmosphere, pyridine (800 μL, 9.90 mmol) and subsequently p-toluenesulfonyl chloride (1.91 g, 10.0 mmol) were added slowly under stirring. The mixture was kept for 72 h at 10 °C. A suspension of lithium aluminum deuteride (83.4 mg, 1.99 mmol) in dry THF (10 mL) was added dropwise, and the reaction mixture was heated at reflux. After 4 h, the mixture was cooled down to room temperature. Diethyl ether (20 mL) was added, followed by water (5 mL), and subsequently aqueous hydrochloric acid (1%; 20 mL). The organic phase was separated and washed with an aqueous sodium hydrogen carbonate solution (5%; 20 mL) and brine (20 mL). After drying over anhydrous sodium sulfate and filtration, the solvents were removed under reduced pressure, and the crude product was purified by flash chromatography on a diol phase to give 98.5 mg of 5 (55% yield) with an enantiomeric purity of 91%.
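The reported overall yield of 24% from 1 follows directly from the three step yields given above (78%, 56%, and 55%); a one-line check:

```python
from math import prod

# Step yields reported above: epoxidation (1 -> 2), LiAlD4 reduction (2 -> 3),
# and the combined tosylation/LiAlD4 reduction (3 -> 5).
step_yields = [0.78, 0.56, 0.55]
overall = prod(step_yields)
print(f"Overall yield from 1: {overall:.1%}")  # ~24%, matching the reported value
```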
|
v3-fos-license
|
2019-01-02T17:23:16.033Z
|
2018-09-30T00:00:00.000
|
116606869
|
{
"extfieldsofstudy": [
"Engineering"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.jvejournals.com/article/18758/pdf",
"pdf_hash": "b33c96a78e66801b2d32b642ecfe6cf32bfcd92a",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42370",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"sha1": "b33c96a78e66801b2d32b642ecfe6cf32bfcd92a",
"year": 2018
}
|
pes2o/s2orc
|
Experimental and numerical investigation on the influence of the clocking position on hydraulic performance of a centrifugal pump with guide vane
The investigation of the clocking effect has mainly concentrated on turbines and compressors, but seldom on centrifugal pumps. In this paper, the influence of the clocking effect on the hydraulic performance of a centrifugal pump with guide vane is studied using numerical simulation and experiment. Numerical simulations with the SST k-ω turbulence model were applied to obtain the inner flow field of the test pump. The numerical simulations coincide with the test results, which indicates the accuracy of the utilized numerical approach. The results show that the clocking positions have an important effect on the hydraulic performance of the centrifugal pump with guide vane. The pump demonstrates higher efficiency and head when the tongue is located between two guide vanes. The hydraulic performance of the volute is a major factor impacting the performance of the centrifugal pump at different clocking positions. However, the clocking position has almost no effect on the performances of the impeller and diffuser. When the guide vane is close to the volute tongue, the flow field of the volute is more non-uniform, and the energy loss in the volute appears to be larger. The results and the method of this paper can provide a theoretical reference for the design and installation of guide vanes in centrifugal pumps.
Introduction
The guide vane is a significant flow passage component for rotating machinery and can be applied in turbines, compressors, and pumps. The guide vane used in a multistage centrifugal pump can convert kinetic energy to pressure energy and reduce the radial force of the impeller in a single-stage centrifugal pump. However, the clocking position, which alters the relative circumferential position between stator rows or rotor rows, is a common phenomenon and has a significant influence on the performance of the rotating machine.
The clocking effect is first discovered by Huber et al. [1] in a 2.5 stage turbine, and the results showed that the performance of turbine could be improved by appropriately adjusting the circumferential position of the wake flow from the first stage stator.Subsequently, they carried out the experiment to study the influence of the circumferential positions between the two stator rows in turbine and obtained a similar conclusion.Thereafter many researchers had adopted both numerical simulation and experiment methods to further study the clocking effect in the turbines and compressors.For example, Hathaway et al. [2], Doney et al. [3], Arnone et al. [4] conducted numerical simulation and experiment on the clocking effect in compressors.They pointed out that the impacting of the stator clocking position on the efficiency could be up to 0.3 %-0.7 %.Städing et al. [5] experimentally investigated the clocking effect on the aerodynamic performance of a 3 stages compressor, and the results indicated that the aerodynamic efficiency under the design condition and low load condition was maximum when the wake flow in the upstream passed through the middle passage of downstream stator, however, it reached minimum while the wake flow from the upstream was closed to the ledge edge of the downstream stator.Cizmas et al. [6] carried out the numerical simulation to study the effect of the clocking position on unsteady pressure of the blade surfaces and obtained the changing rules on the pressure fluctuation and unsteady force of the blade surfaces under the different clocking positions.Haldeman et al. [7] adopted experimental method to explore the impact of the clocking position on the steady and unsteady aerodynamic force in a 1.5 stage compressor, and they pointed out that the time-averaged aerodynamic force changed least under the different clocking positions.The impact of the clocking effect on the inner flow field in the axil compressor has been widely investigated based on the numerical simulation and experimental method [8][9][10][11][12][13][14][15][16], and the stator clocking effect has a significant impact on the boundary layer, kinetic energy and pressure fluctuation in the downstream region.
Although a large amount of research on the clocking effect has been done in aerodynamic turbo machines, little attention has been devoted to the clocking effect in the centrifugal pump and is mainly focused on the pressure pulsation and the transient radial force [17][18][19].Therefore, more research must be done on centrifugal pumps with guide vane.In this paper, the impact of the clocking positions on the hydraulic performance of the centrifugal pumps with guide vane has been studied by adopting both numerical simulation and experiment.And the investigation focuses on the analysis on influencing laws of clocking effect for the hydraulic performance and internal flow field under the different flow conditions and explores the causes of the above analysis.The results can provide theoretical reference for the design and installation of guide vane in centrifugal pump.
Design parameters
A single-stage centrifugal pump with guide vane was selected to investigate the impact of the clocking positions on the performance. The main geometric characteristics are shown in Table 1. The characteristic parameters include the design flow rate and head. The design flow rate is 40 m³/h and the head is 60 m at a rotational speed of 2900 rpm. The specific speed is 52. The geometric model and the installation locations of the guide vane are shown in Fig. 1. The diffuser is rotated in 12° steps in the anticlockwise direction to change the position between the guide vane and the volute tongue, and the corresponding installation locations of the diffuser are shown in Table 2.
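The specific speed of 52 is consistent with the dimensional (metric) definition n_s = 3.65·n·√Q/H^0.75 with Q in m³/s; this definition is assumed in the quick check below, as the paper does not state it explicitly.

```python
def specific_speed(n_rpm: float, q_m3_per_h: float, h_m: float) -> float:
    """Dimensional specific speed n_s = 3.65 * n * sqrt(Q) / H**0.75, Q in m^3/s."""
    q_m3_per_s = q_m3_per_h / 3600.0
    return 3.65 * n_rpm * q_m3_per_s ** 0.5 / h_m ** 0.75

print(round(specific_speed(2900, 40.0, 60.0)))  # ~52, matching Table 1
```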
Experimental facilities
The flow passage components of the test pump which includes impeller, diffuser and volute are made by organic glass, and the diffuser is fixed on the volute by the eight pin holes (as shown in Fig. 1).The pump with the guide vane installed at different positions (as shown in Fig. 2) is tested to investigate the influence of the clocking effect on the hydraulic performance.In order to prove the accuracy of the numerical simulation, the initial installation location between the diffuser vane and the volute tongue is 41° (as shown in Fig. 2).For Position 1 and Position 2, the volute tongue basically locates between two diffuser vanes, and the tongue located near the guide vane trailing edge at Position 3 and Position 4. The measurement methods and test facilities are accord with measurement requirements which described in Ref. [20].The torque and speed are measured by using the torque transmitter with ±0.2 % full scale measurement error.The full scale measurement error of an electromagnetic flow meter applied to measure the flow rate is ±0.5 %.An uncertainty of the pressure transmitter used to measure the inlet pressure, outlet pressure and instantaneous pressure of pressure taps is ±0.075 %.The electric signals from the all transmitters are converted to digital signals by Data Acquisition Board and LABVIEW software, and the performance curve, pressure fluctuation etc. are consequently obtained.
Flow solver and mesh generation
The numerical model of the test pump comprises several modules, each generated and meshed independently: a) inlet duct, b) impeller, c) diffuser, d) volute and outlet duct, e) front chamber and back chamber (as shown in Fig. 3). The computational grid number can influence the accuracy of the numerical simulation for the centrifugal pump. The larger the number of grid elements, the more computer memory and time are required. Thus, the optimal grid number of each calculation area is selected to ensure the accuracy and reliability of the numerical simulation. In this paper, five different mesh numbers are applied to the numerical calculation (Table 3). Table 4 shows the grid independence analysis under the design conditions with a guide vane installation angle of 41°. When the grid number is more than 5.6×10⁶, the ranges of the pump efficiency and head become smaller. Specifically, the ranges of efficiency and head are 0.1 % and 0.2 m, respectively. Thus, to ensure both calculation speed and calculation accuracy, Grid 3 is used in the numerical simulation, and the total number of grid elements is approximately 5.61×10⁶ (as shown in Table 3). The value of y+ for the entire computational flow domain presented in the paper is between 30 and 50. The inlet and outlet ducts are included in the model to keep the boundary conditions away from the regions of interest. Meshes of the computational domain are generated in the commercial software ICEM CFD 17.1. The entire flow field is meshed with a structured hexahedral grid. The numerical simulation for the single-stage centrifugal pump with guide vane is performed with the commercial code ANSYS-CFX 17.1 and the SST k-ω turbulence model [21,22]. The pressure at the pump inlet and the mass flow at the pump outlet are in accordance with the experimental measurement. The no-slip boundary condition is imposed on all physical surfaces of the pump. The interfaces between the stationary and rotational components are set with the frozen-rotor and transient-rotor-stator methods for steady and unsteady calculations, respectively. In the unsteady numerical simulation, the steady-state solution is taken as the initial condition of the transient simulation, and the time step is 5.74×10⁻⁵ s, which corresponds to a rotating angle of 1° per time step at the rotation speed of 2900 rpm. The time for one cycle is 0.02 s, and 6 revolutions are simulated.
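The time step of 5.74×10⁻⁵ s quoted above corresponds to 1° of impeller rotation at 2900 rpm, and one revolution to roughly 0.02 s; a brief check:

```python
n_rpm = 2900
seconds_per_rev = 60.0 / n_rpm          # ~0.0207 s per impeller revolution
dt = seconds_per_rev / 360.0            # time step for 1 deg of rotation
print(f"dt = {dt:.3e} s per 1 deg")     # ~5.75e-05 s, matching the value used
print(f"one revolution = {seconds_per_rev:.4f} s")  # ~0.02 s, the reported cycle time
```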
Governing equation
The internal flow through the impeller, guide vane, and other components of hydraulic machinery is a three-dimensional, viscous, incompressible, unsteady flow. In the rectangular (Cartesian) coordinate system, the Navier-Stokes equations can be expressed as

∂(ρu_i)/∂t + ∂(ρu_i u_j)/∂x_j = −∂p/∂x_i + ∂/∂x_j[μ(∂u_i/∂x_j)] + S_i,

where ρ is the density of the fluid, u is the velocity, p is the pressure, t is time, x is the space coordinate, μ is the dynamic viscosity, S is the source term, and the subscripts i and j denote the components along the coordinate axes.
In this paper the SST k-ω turbulence model is used, which combines the k-ω and k-ε turbulence models: it retains the accuracy of the k-ω formulation for viscous flow in the near-wall region and the free-stream accuracy of the k-ε formulation in the far field. The main transport equations are

∂(ρk)/∂t + ∂(ρk u_i)/∂x_i = ∂/∂x_j(Γ_k ∂k/∂x_j) + G_k − Y_k + S_k,

∂(ρω)/∂t + ∂(ρω u_i)/∂x_i = ∂/∂x_j(Γ_ω ∂ω/∂x_j) + G_ω − Y_ω + D_ω + S_ω,

where G_k is the generation of turbulence kinetic energy; G_ω is the generation of the specific dissipation rate ω; Γ_k and Γ_ω are the effective diffusion coefficients of k and ω; Y_k and Y_ω are the dissipation terms of k and ω; D_ω is the cross-diffusion term; and S_k and S_ω are user-defined source terms. The effective diffusivities are

Γ_k = μ + μ_t/σ_k, Γ_ω = μ + μ_t/σ_ω,

where σ_k and σ_ω are the turbulence Prandtl numbers for k and ω, and μ_t is the turbulence viscosity, μ_t = α*ρk/ω (with the additional limiter of the SST formulation). Here α* is the low-Reynolds-number correction factor, computed as

α* = α*_∞ (α*_0 + Re_t/R_k)/(1 + Re_t/R_k),

with Re_t = ρk/(μω), R_k = 6, α*_0 = β_i/3, and β_i ≈ 0.072. In high-Reynolds-number flow, α* = α*_∞ = 1. The blending function is computed as

F_1 = tanh(Φ_1⁴), Φ_1 = min[max(√k/(0.09ωy), 500μ/(ρy²ω)), 4ρk/(σ_ω,2 D_ω⁺ y²)],

where y is the distance from the wall and D_ω⁺ is the positive portion of the cross-diffusion term.
Numerical method validations
Fig. 4 shows a comparison of the experimental and numerical results for the head and efficiency of the model pump with a diffuser installation angle of 41°. The maximum efficiency of the model pump does not occur at the design condition (40 m³/h) but deviates from it (37 m³/h), which may be attributed to the smaller throat area of the diffuser. The numerical results are in good agreement with those of the experiment, especially within the pump operating flow range (37 m³/h-48 m³/h). For example, at 40 m³/h, the predicted error between the numerical results and the experimental data for total head and pump efficiency is 5.1 % and 1.8 %, respectively. The pump head and efficiency predicted numerically are higher than those obtained experimentally, which may be attributed to the neglect of leakage loss through the balancing holes and of mechanical loss from the mechanical seal and bearings. The comparison between the experimental and numerical results indicates that the grid discretization and turbulence model are suitable for the simulation of a centrifugal pump with guide vane. The comparison of the pressure fluctuations at monitoring points P1 and P2 (as shown in Fig. 5) between the numerical and experimental results in the volute at 40 m³/h is shown in Fig. 6 and Fig. 7 to further verify the accuracy of the numerical results. In the numerical simulation, the pressure pulsation curves are similar to the experimental results, and they present periodic fluctuation due to the interaction between the volute tongue and the impeller. The predicted pressure pulsation for P1 is in good agreement with the experimental result, but for P2 the agreement is poorer. In both the numerical and experimental results, the dominant frequency for P1 and P2 is the blade-passing frequency. Meanwhile, the numerical pressure amplitudes at the blade-passing frequency are higher than the test results (Table 4 and Table 5). The clocking position has a significant effect on the pressure amplitudes at the blade-passing frequency, while it has little impact on the amplitudes at the shaft frequency and its harmonics. As the guide vane gradually approaches the volute tongue, the pressure amplitudes at the blade-passing frequency decrease, and they reach the minimum at Position 4 under different flow rates (as shown in Table 5 and Table 6). Therefore, it is clear that the clocking positions have a significant impact on the pressure pulsation caused by the interaction between the impeller blades and the volute tongue.
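As a rough illustration of how the amplitude at the blade-passing frequency can be extracted from a monitored pressure signal, the sketch below builds a synthetic signal and reads off the single-sided FFT amplitude; the blade count and signal amplitudes are assumptions for illustration only (the actual blade number is given in Table 1, which is not reproduced here).

```python
import numpy as np

n_rpm = 2900
z_blades = 6                      # assumed impeller blade number, for illustration
f_shaft = n_rpm / 60.0            # shaft frequency, ~48.3 Hz
f_bpf = z_blades * f_shaft        # blade-passing frequency

# Synthetic pressure signal sampled at the simulation time step (1 deg per step)
dt = 60.0 / n_rpm / 360.0
t = np.arange(0, 6 * 60.0 / n_rpm, dt)            # six revolutions, as simulated
p = 2.0e3 * np.sin(2 * np.pi * f_bpf * t) + 500.0 * np.random.randn(t.size)

spec = np.fft.rfft(p) / t.size
freqs = np.fft.rfftfreq(t.size, d=dt)
amp = 2.0 * np.abs(spec)                           # single-sided amplitude spectrum
i_bpf = np.argmin(np.abs(freqs - f_bpf))
print(f"amplitude near f_BPF ({f_bpf:.0f} Hz): {amp[i_bpf]:.0f} Pa")
```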
Numerical results
Fig. 12 illustrates the comparison of the numerical simulation results for the hydraulic performance under different diffuser installation positions. The clocking effect has a significant impact on the head and efficiency of the model pump. As the guide vane gradually approaches the volute tongue, the head and efficiency gradually decrease, and they reach the maximum when the tongue lies between two diffuser vanes. At an installation angle of 29°, the pump achieves the maximum head and efficiency, whereas the minimum occurs at 5°. Meanwhile, the differences between 29° and 5° increase with increasing flow rate. For example, at 24 m³/h and 32 m³/h, the differences in efficiency between 29° and 5° are 1 % and 1.4 %, respectively, and they are 2.2 %, 2.4 %, and 4.5 % at 40 m³/h, 48 m³/h, and 56 m³/h, respectively. Thus, the conclusions obtained numerically are similar to those obtained by experiment (as shown in Fig. 8), and the accuracy of the numerical simulation is further validated. The impeller is an important flow passage in a centrifugal pump, and its transient performance is defined as the difference between the transient average total pressures at the impeller outlet and inlet in the absolute coordinate system, expressed as a head in meters. Fig. 13 shows the unsteady performance of the impeller. It is noted that the performance of the impeller decreases with the increase of the flow rate, which indicates that the head of the pump drops when the flow rate increases. The periodic fluctuations of performance due to the interaction between the impeller and the diffuser can be clearly observed under different flow rates. The clocking position has little impact on the performance of the impeller, which is similar in both fluctuation and magnitude under different diffuser installations at the same flow rate. The differences under different clocking positions do not exceed 0.2 m (as shown in Fig. 14); thus, the effect of the clocking position on the hydraulic performance of the pump is not dominated by the performance of the impeller.
The transient total pressure loss for the guide vane and volute are as shown in Fig. 15 and Fig. 16.The results illustrate that the total pressure loss in both diffuser and volute increases with the flow rate, and the loss pulsation presents the periodic pulsation due to the interaction between rotor and stator.However, the loss fluctuation for diffuser is more violent than that for volute.And this is that the inner flow field in the diffuser is affected by both the upstream and the downstream region, but the upstream region is the major factor for affecting that in the volute.The clocking positions have little influence on the total pressure loss of diffuser, while it has a significant effect on that for volute [17].The differences of the loss for diffuser under different clocking positions cannot exceed 0.2 m (as shown in Fig. 17), thus the impact of clocking position on the hydraulic performance of pump cannot be dominated by the performance of diffuser.The influence of the clocking effect on the total pressure loss in the volute increases with the flow rate.When the diffuser vane locates near the volute tongue ( = 5°), the total pressure loss for volute achieve maximum, but it reaches minimum at = 29°.Meantime, the difference between = 29° and = 5° is 2.5 m, 3 m and 4 m at = 32 m 3 /h, = 40 m 3 /h and 56 m 3 /h, respectively (as shown in Fig. 18).Thus, the inner flow field in volute is the major factor affected by the hydraulic performance of a centrifugal pump with guide vane under different diffuser installation positions.A similar phenomenon that the total pressure gradient in volute tongue region and the region located near the guide vane trailing edge are larger variety, indicating that the total pressure loss in these regions is higher than those in other regions.The trailing edge of the suction of the diffuser vane is the main factor affecting the total pressure gradient changing in the flow section of the volute.The clocking effect has a significant influence on the total pressure in volute.The total pressure in volute decreases as the guide vane is close to the tongue, especially in region near the tongue, the volute outlet region and the large flow section.For = 5°, the total pressure gradient achieves maximum variety, and it reaches minimum for = 29°, which indicates that the flow field in the volute at = 29° is more non-uniform than that at = 5°.Meantime, a lower total pressure loss occurs when guide vane gradually approaches the tongue.
Fig. 20 shows the influence of different clocking positions on the static pressure in the volute at = 40 m 3 /h.It is noted that the static pressure of each cross section of the volute gradually decreases as the guide vane is close to the volute tongue.The static pressure gradient gradually increases at the trailing edge of the guide vane and the volute tongue region when the diffuser vane approaches the volute tongue, indicating that the flow field in the volute can be non-uniform and causes larger hydraulic loss to reduce the hydraulic performance of centrifugal pump.For = 41°, = 29° and = 17°, the high pressure region locate at Section 1-Section 5, and pressure distribution in the volute cross section of passage is more uniform, but for other installation positions, the high pressure region locate at Section 1-Section 3 and the pressure gradient greatly changes.The pressure is minimum and static pressure gradient change maximum, especially in the area of Section 8 when the diffuser vane installation positions are = 5° and = 65°, indicating that the velocity in volute is larger and the internal flow field of volute is uneven which produce larger hydraulic loss and make poorly hydraulic performance of volute.The hydraulic loss in volute is related to the absolute velocity.The direction of the velocity cannot be a variety with the flow rate increasing due to the geometry of the volute.The velocity in the flow section of volute increases as the diffuser vane get close to the tongue and the region of high-velocity extends.For = 41°, 29° and 17°, the velocity reaches maximum and the high-velocity region locates between Section 7 to Section 8, but it reaches minimum for = 5° and 65°, and the high-velocity region extends to Section 5 (as shown in Fig. 21) These results indicate that the hydraulic loss generated by the velocity at = 41°, 29° and 17° is higher than other diffuser installation angles.Due to the tongue, the fluid in volute can be divided into two parts: flowing to the volute outlet and the flow passage of volute along the flow section.When the volute tongue locates between two diffuser vanes, the swirl appears in the region of the suction vane due to the flow collision between the volute inlet and diffuser outlet, and the fluid in the region adjacent to Section 1 flows reversely to the volute outlet.However, the fluid flows directly to the volute outlet when guide vane located near the tongue, causes higher mass flow from Section 1 to Section 8 (as shown in Fig. 22) and larger velocity (as shown in Fig. 21).
Conclusions
In this study, the influence of the clocking effect on the hydraulic performance of a radial centrifugal pump with guide vane is investigated. The performance of the pump is examined by both experiment and numerical simulation. Although leakage through the balancing holes and the mechanical seals is not included in the numerical model, the differences in total head and pump efficiency between the simulation and the experiment are 5.1 % and 1.8 % at 40 m³/h, respectively. Moreover, the predicted pressure pulsation at each monitoring point agrees with the experimental results.
The experimental results show that the head and the efficiency decrease gradually as the guide vane approaches the volute tongue; at 40 m³/h, the differences in head and efficiency between the position with the volute tongue midway between two diffuser vanes and the position with the tongue close to a diffuser vane reach 4.8 m and 3.6 %, respectively. However, the pressure pulsation in the volute reaches its maximum when the tongue lies between two diffuser vanes. Thus, the clocking position has a strong impact on the performance of the pump and cannot be neglected.
The major factor determining the hydraulic performance of the pump under different clocking positions is the performance of the volute rather than that of the impeller or the diffuser; the differences in impeller performance and in diffuser energy loss between clocking positions do not exceed 0.2 m. The energy-loss pulsation in the diffuser is more violent than that in the volute because the inner flow field of the diffuser is affected by both the upstream and the downstream regions. The volute shows better hydraulic performance when the tongue lies between two diffuser vanes and worse performance when a guide vane approaches the tongue; the difference can reach 2.5 m, 3 m and 4 m at 32 m³/h, 40 m³/h and 56 m³/h, respectively.
The total pressure loss in the diffuser and the volute increases with the flow rate. The total pressure gradient changes greatly in the regions near the tongue, the volute outlet and the large flow sections of the volute, indicating that these regions contribute most of the total pressure loss. The energy loss is higher when the guide vane approaches the tongue, whereas the flow field in the volute is more non-uniform when the tongue lies near the middle of two diffuser vanes. Thus, the inner flow field of the volute is the major factor governing the hydraulic performance of a centrifugal pump with guide vane under different diffuser installation positions.
Fig. 1. The geometric model
Fig. 3. Mesh of computational domain
Fig. 4. Comparison of the hydraulic performance between the numerical and test results at the 41° installation position
Fig. 8 compares the experimental head and efficiency at different clocking positions. The clocking position has a great influence on the hydraulic performance of a centrifugal pump with guide vane: the head and the efficiency gradually decrease as the diffuser vane approaches the volute tongue. The head and the efficiency of the model pump reach their maximum at Position 1 and Position 2 and their minimum at Position 4. The head and efficiency differences between Position 1 and Position 4 increase with the flow rate and reach 4.8 m and 3.6 % at 40 m³/h, respectively. Thus, the impact of the clocking position on the hydraulic performance of a centrifugal pump with guide vane cannot be neglected.
Fig. 9, Fig. 10 and Fig. 11 show the frequency spectra of the static pressure pulsations in the volute at different flow rates. For all five diffuser installation positions, the dominant frequency at the two monitoring points (P1, P2) is the blade-passing frequency, and the pressure amplitudes at this frequency increase gradually along the volute flow channel. For example, at 20.27 m³/h the amplitudes at P1 and P2 for Position 1 are 1154.67 Pa and 3010.88 Pa, respectively (as shown in Table 4 and Table 5). The clocking position has a significant effect on the amplitudes at the blade-passing frequency, while it has little impact on the amplitudes at the shaft frequency and its harmonics. As the guide vane approaches the volute tongue, the amplitudes at the blade-passing frequency decrease and reach their minimum at Position 4 at all flow rates (as shown in Table 5 and Table 6). Therefore, the clocking position has a significant impact on the pressure pulsation caused by the interaction between the impeller blades and the volute tongue.
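A minimal sketch of the frequency-domain processing behind these spectra: the blade-passing frequency is the blade count times the shaft speed, and the amplitude at that frequency can be read from the FFT of a monitored pressure signal. The blade count, rotational speed, sampling rate and the synthetic signal below are illustrative assumptions, not values reported for this pump.

import numpy as np

Z = 6            # assumed number of impeller blades
N_RPM = 2900     # assumed rotational speed, rev/min
f_bpf = Z * N_RPM / 60.0           # blade-passing frequency, Hz

fs = 20000.0                       # sampling frequency, Hz (assumed)
t = np.arange(0, 0.5, 1.0 / fs)    # 0.5 s pressure record
# synthetic monitor-point signal: mean pressure + pulsation at f_bpf + noise
p = 2.0e5 + 1500.0 * np.sin(2 * np.pi * f_bpf * t) + 50.0 * np.random.randn(t.size)

amp = 2.0 * np.abs(np.fft.rfft(p - p.mean())) / t.size   # single-sided amplitude, Pa
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
k = np.argmin(np.abs(freqs - f_bpf))
print("f_bpf = %.1f Hz, amplitude at f_bpf = %.0f Pa" % (f_bpf, amp[k]))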
Fig. 12. Comparison of the numerical simulation results on hydraulic performance
Fig. 14. Comparison of the instantaneous average for the impeller under different clocking positions
Fig. 15. Total pressure loss of the diffuser under different diffuser installation positions at 32 m³/h, 40 m³/h and 56 m³/h
Fig. 16. Total pressure loss of the volute under different diffuser installation positions at 32 m³/h, 40 m³/h and 56 m³/h
Fig. 17. Comparison of the instantaneous average for the diffuser under different clocking positions
Fig. 18. Comparison of the instantaneous average for the volute under different clocking positions
Table 2. Different schemes of numerical simulation
Table 3. Grid numbers
Table 6. Pressure amplitudes
|
v3-fos-license
|
2016-05-04T20:20:58.661Z
|
2011-05-10T00:00:00.000
|
33457816
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://zookeys.pensoft.net/lib/ajax_srv/article_elements_srv.php?action=download_pdf&item_id=2801",
"pdf_hash": "b24c1dafdc4dd65f48c2714674981740a0286f18",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42371",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "58cc4c9d3a788862b73305103341627761f6ab5b",
"year": 2011
}
|
pes2o/s2orc
|
Three new species and reassessment of the rare Neotropical ant genus Leptanilloides (Hymenoptera, Formicidae, Leptanilloidinae)
Abstract We describe three new species of the Neotropical ant genus Leptanilloides: Leptanilloides gracilis sp. n. based on workers from Mexico and Guatemala, Leptanilloides erinys sp. n. based on workers and a gyne from Ecuador, and Leptanilloides femoralis sp. n. based on workers from Venezuela. The description of Leptanilloides gracilis is a northern extension of the known range of the genus, now numbering eleven described species. We also describe and discuss three unassociated male morphotypes from Central America. We report the occurrence of a metatibial gland in Leptanilloides and a fused promesonotal connection (suture) in some species. We provide a modified, detailed diagnosis of the genus and a revised key to the worker caste of the known species.
Introduction
Leptanilloides Mann, 1923 is a genus of rarely collected Neotropical ants with army ant-like habits, convergently similar to the Old World genus Leptanilla Emery, 1870.
Little is known about their biology (Brandão et al. 1999b), but a number of papers on taxonomy and phylogenetic affinities have been published. In 1923 Mann described Leptanilloides biconstricta from Bolivia and placed it in the subfamily Dorylinae (Mann 1923). Subsequently the genus was considered a member of the Cerapachyinae (Brown 1975, Bolton 1990a, 1990b) and then placed in its own subfamily Leptanilloidinae (Baroni Urbani et al. 1992, Bolton 1994). Brandão et al. (1999a) revised the subfamily, adding the new genus Asphinctanilloides Brandão et al., 1999, describing three new species of Leptanilloides, and proposing a morphology-based phylogeny. Bolton (2003) provided a detailed taxonomic history of the genus and subfamily. Longino (2003) and Donoso et al. (2006) described further species, the latter also providing information on the hitherto unknown gyne and male castes. Ward (2007) used molecular data to associate a male from Costa Rica with workers described by Longino (2003). Also with the aid of molecular methods, Ward and Brady (2009) established that the male-based genus Amyrmex Kusnezov, 1953, previously placed in Dolichoderinae, is in fact a member of Leptanilloidinae and a potential senior synonym of Asphinctanilloides. There is no doubt that Leptanilloidinae represents a group within a larger clade of the so-called dorylomorph ants, as evidenced by a multitude of morphological (Bolton 1990b, Brady and Ward 2005) and molecular (Brady 2003, Moreau et al. 2006, Brady et al. 2006, Ward and Brady 2009) data. The genus-level taxonomy, however, is still unsettled, and names are expected to change in the future due to the unresolved affinity of Amyrmex (Ward & Brady 2009) and the uncertain distinction between Leptanilloides and Asphinctanilloides (Longino 2003, Donoso et al. 2006, Ward and Brady 2009). New species will also undoubtedly continue to be discovered, as the ratio of distinct species to collecting events remains high.
Below we describe three new species, mostly collected by leaf litter sifting and extraction with the Winkler apparatus, with the exception of the type series of L. erinys, where workers were initially collected from the forest floor by sifting and later extensive search revealed an entire colony. The newly described species are incorporated into a key to all the known species of Leptanilloides. In addition we describe three male morphotypes from Central America. These are males from Malaise trap samples and thus unassociated with workers. We expect future molecular work to associate males and workers, although we hypothesize the association of one of the male morphotypes with L. gracilis based on geographic overlap. We also provide evidence for the occurrence of metatibial glands in Leptanilloides, discuss the structure of the promesonotal suture, and give a detailed diagnosis of the genus based on the worker caste.
Methods
Measurements were made using a Wild M5A stereomicroscope at 50× magnifications with a dual-axis Nikon micrometer wired to a digital readout. Color photographs were prepared using a Leica MZ 16 stereomicroscope with a JVC digital video camera. The scanning electron micrographs were prepared using a Zeiss/LEO 1450VP SEM at the California Academy of Sciences. All images were processed using Syncroscopy Automontage and Zerene Systems Zerene Stacker software and cleaned and adjusted using Adobe Photoshop.
The description of wing venation is based on an unpublished scheme by Bolton (pers. comm.), including description of veins as tubular, nebulous, and spectral. Recommendations for illustration of veins follow Mason (1986). For male genitalia, we adopt the terminology of Yoshimura and Fisher (2011). All specimen data along with images have been deposited on the AntWeb public database (http://www.antweb.org/).
In lists of material examined and other reporting of specimen data an error term may occur after latitude and longitude values. This error term is the sum of GPS error and spatial extent of the sampling area around the point where latitude and longitude were recorded.
The following measurements and indices are used:
HW head width: maximum width in full face view. HW for males includes eyes (workers are eyeless).
HL head length: maximum length along the midline in full face view, measured medially from the anteriormost part of the head (anterior edge of frontal lobes) to the center of the posterior margin. Excavation of the posterior head margin reduces HL.
SL scape length: maximum length measured without condyle and neck.
LAII, LAIII, LAIV, LAXIII (male only): length of the second, third, fourth and terminal (13th) antennal segments, respectively.
EL eye length (male only): measured in full face view, maximum length of eye parallel to midline.
MH mesosoma height: in lateral view, maximum height measured from the lowermost point of the mesopleuron (in front of the middle coxa) to the dorsal edge of the mesosoma, measured perpendicular to the long axis of the mesosoma.
ML mesosoma length: in lateral view, maximum longitudinal distance from the farthest point on the anterior face of the pronotum, excluding the neck, to the posteroventral corner of the mesosoma.
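The measurements above feed the ratio indices (e.g. CI, PI) cited later in the diagnoses and key. The exact index definitions are not preserved in this text, so the formulas in the sketch below (cephalic index CI = 100 × HW/HL; petiolar index PI = 100 × PW/PL) follow common myrmecological usage and should be read as assumptions rather than the authors' stated definitions.

def cephalic_index(hw, hl):
    """CI: head width as a percentage of head length (assumed definition)."""
    return 100.0 * hw / hl

def petiolar_index(pw, pl):
    """PI: petiole width as a percentage of petiole length (assumed definition)."""
    return 100.0 * pw / pl

# Hypothetical worker measurements in mm (not from any type specimen):
print("CI = %.0f" % cephalic_index(0.24, 0.33))   # ~73
print("PI = %.0f" % petiolar_index(0.09, 0.12))   # ~75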
Results
During our study we have had a chance to examine type material of most species of Leptanilloidinae and carry out a detailed SEM study of L. erinys, L. gracilis, L. femoralis and L. nubecula. We have found that, contrary to previous studies (Brandão et al. 1999a, Longino 2003, Donoso et al. 2006), at least some species of Leptanilloides do possess a metatibial gland and have the promesonotal connection fused and immobile. The metatibial gland was first recognized and considered a synapomorphy of dorylomorph ants (=doryline section) by Bolton (1990b) and subsequently described in detail by Hölldobler et al. (1996). It has been claimed to be absent from hitherto described Leptanilloides (Brandão et al. 1999a, Longino 2003, Donoso et al. 2006). With the aid of SEM we have been able to observe small differentiated patches of porous cuticle and granulate secretion on the hind tibia of L. erinys and L. nubecula (Figures 1A-D). We believe these represent vestigial pore plates of the metatibial gland. It is possible that other species of Leptanilloides possess it, although due to the positioning of the legs in our specimens we were unable to find it in L. gracilis and L. femoralis. Since the pore plate is extremely small and Leptanilloides ants themselves are tiny, the gland is impossible to discern with a stereomicroscope under magnifications of about 100×. Also, due to its position on the flexor (inner, ventral) surface of the tibia, it is easily overlooked even under SEM.
The promesonotal connection has also been described as universally unfused and flexible in workers of the genus (Brandão et al. 1999a, Longino 2003, Donoso et al. 2006). We have found that this character is in fact very variable in Leptanilloides, ranging from completely unfused and apparently flexible in L. biconstricta, L. caracola, L. erinys, L. femoralis, L. gracilis, L. improvisa and L. sculpturata (Figure 1F), and gradually increasing in fusion from L. legionaria through L. mckennae to L. nubecula and L. nomada (Figure 1E), where the connection seems to be completely fused dorsally and is barely visible as a faint groove. The fusion of the promesonotal connection correlates with other morphological features: the lateroclypeal teeth are reduced, abdominal segment III is small in relation to segment IV, and the spiracles of abdominal segment III are shifted posteriorly. The latter three characters had already been noticed by Donoso et al. (2006) and interpreted as blurring the distinction between Asphinctanilloides and Leptanilloides. The segregation of Leptanilloides into two natural species groups seems to be supported by molecular data, although taxon sampling is still unsatisfactory (Phil Ward, pers. comm.). Adding somewhat intermediate species to the dataset, like L. legionaria, which has a small abdominal segment III but only a weakly fused promesonotal suture, or L. biconstricta, with an apparently complete promesonotal connection but an intermediate abdominal segment III, may blur this distinction.
Given the new morphological findings, coupled with comparative character investigation (Marek Borowiec, unpublished) for other dorylomorph lineages, we feel it useful to provide a detailed and revised definition of Leptanilloides based on the worker caste.
The known leptanilloidine males show substantial variation (see Donoso et al. 2006, Ward 2007, Ward and Brady 2009), and the lack of definite worker-male associations prevents us from characterizing the male caste of the genus in a structured way. Ward and Brady (2009) enumerated differences between the known Amyrmex morphotypes and the then known males of Leptanilloides, those of L. mckennae and L. nubecula, pointing out that the distinction is weak and that undescribed Leptanilloidinae material weakens it even further. If the three Central American males described here turn out to belong to Leptanilloides, then for the following characters the distinction is weakened further still: the small body size of Leptanilloidinae male 1 (HW on average < 0.30), the short scapes of Leptanilloidinae male 3 (SL/LAII 1.4-1.9), the relatively short legs of the Central American males (HTiL/HL 1.2-1.4), the parameres shorter than the petiole in male 3, veins M and Cu diverging at cu-a in all the morphotypes described below, and the absence of free abscissae of M joining R in Leptanilloidinae male 3. The only character from their list differentiating Amyrmex from Leptanilloides that seems to hold is the more narrow and elongate submarginal cell of the fore wing in the former.
Diagnosis of Leptanilloides based on worker caste
Antennae with 12 segments. Apical antennal segment slender, not swollen; round in cross-section. Clypeus with well developed, translucent lamella (apron). Lateroclypeal teeth (same as "genal" teeth in Donoso et al. 2006) present or absent. Parafrontal ridges absent or weakly developed. Preocular grooves absent. Frontal carinae vertical, very reduced and fused, completely exposing antennal sockets. Antennal scrobes absent. Maxillary palps two-segmented, except in gracilis, where apparently weakly fused and forming one segment; labial palps two-segmented (palp formula 1,2 or 2,2; in situ count in gracilis, femoralis and legionaria, also reported by Brandão et al. 1999a).

Although originally the genus Asphinctanilloides had been differentiated from Leptanilloides by several characters (Brandão et al. 1999a), subsequent descriptions of new taxa somewhat blurred this distinction. At present at least the presence of a deep metanotal groove and the absence of a constriction between abdominal pre- and postsegments V and VI can be regarded as synapomorphic for Asphinctanilloides. Ward and Brady (2009) discussed the subject in detail and noted that differentiation of Asphinctanilloides may render Leptanilloides paraphyletic.

Key to workers of Leptanilloides

- In lateral view, sternite of abdominal segment III (postpetiole) distinctly bulging anteriorly, making the sternal portion of the segment deeper than the tergite (Figure 2D). Abdominal segment IV narrowly attached to the preceding segment III and broadly to the succeeding segment V, so that there is a contrast between the widths of the anterior and posterior articulations of segment IV in lateral view (Colombia, Bolivia) ............
- In lateral view, petiolar sternite distinctly bulging medially (Figure 2C) ............ 8
- In lateral view, petiolar sternite bulging anteriorly (Figures 2B, 2D) ............ 9
8 Hind tibia with two very small, simple spurs, without a pectinate spur clearly visible under 50× magnification (Figure 2J). Petiolar spiracle opening in an excavation distinctly larger than the propodeal spiracle (Figures 5G, 5H). Flange over the metapleural gland opening sharply pointed posteriorly (Figure ............

Leptanilloides erinys sp. n.

Diagnosis. Worker can be distinguished by the combination of relatively small size, promesonotal connection complete and articulated, abdominal segment III large relative to the petiole, presence of lateroclypeal teeth, relatively heavy sculpturing, absence of parafrontal ridges, and the flange overhanging the metapleural gland opening pointed posteriorly. It is most similar to Leptanilloides sculpturata from Colombia, but can be distinguished by its significantly larger size (HW ≥ 0.31 in erinys versus 0.20-0.26 in sculpturata), relatively broader head (CI > 70 vs. 58-67) and shorter petiole (PI > 74 vs. PI = 70 measured in the holotype). Leptanilloides erinys also differs in the weaker sculpture of the head dorsum, with small foveolae separated by about their diameter (Figure 1G), while in L. sculpturata the foveolae are separated by much less than their diameter and are often contiguous (Figure 7 in Brandão et al. 1999a).
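Purely as an illustration of how the quantitative thresholds in the diagnosis above separate workers of the two species, the sketch below encodes them as a simple check; the function and its argument names are ours, not part of the original treatment.

def erinys_or_sculpturata(hw, ci, pi):
    """Separate L. erinys from L. sculpturata using the HW, CI and PI thresholds given above."""
    if hw >= 0.31 and ci > 70 and pi > 74:
        return "Leptanilloides erinys"
    if hw <= 0.26 and ci <= 67 and pi <= 70:
        return "Leptanilloides sculpturata"
    return "ambiguous - compare sculpture of the head dorsum"

print(erinys_or_sculpturata(0.32, 73, 78))   # -> Leptanilloides erinys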
Worker description. With characters of Leptanilloides (see Diagnosis of Leptanilloides based on worker caste, above). Head elongate and subquadrate with lateral margins nearly straight and parallel. Posterior corners rounded and posterior border weakly concave. Parafrontal ridge absent. Clypeus laterally with blunt tooth pointing outwards. Mandible short, masticatory margin with three distinct blunt teeth basally and basal margin crenulate. Basal and masticatory margin distinct, but separated by a rounded angle. Palp formula unknown. Scape short and clavate. Antennal joints submoniliform, gradually increasing in size toward apex but not forming an antennal club. Mesosoma long, slender and flattened. Pronotum with a flexible promesonotal suture. Metanotal groove absent. Propodeum unarmed. Propodeal declivity very short and rounding into the dorsal face. Propodeal spiracle round, situated posteriorly on the sclerite. Metapleural gland flange conspicuous, translucent and posteriorly pointed. Femur not conspicuously enlarged, relatively slender. Mid tibia with one simple and hind tibia with one pectinate spur. Metatibial gland absent. Petiole smaller than abdominal segment III (postpetiole) in dorsal view. Petiole rectangular, uniformly wide across its length in dorsal view and with straight sides and abdominal segment III dilating posteriorly. In lateral view, petiolar tergite posteriorly sloping, without well differentiated posterior face and without long tubulated portion posteriorly. Petiolar sternite bulging anteriorly. Abdominal sternite III evenly rounded. Metasoma relatively robust. Abdominal segments IV-VI subequal in length in dorsal view and separated by strong constrictions. Segment VII (pygidium) small and mostly concealed by the preceding segment, U-shaped.
Head with abundant deep punctures and smooth interspaces on average about equal to puncture diameter, except on sides where punctures sparser, separated by more than their diameter. Mesosoma and abdomen more finely and sparsely punctate. Laterally on lower pronotum, entire mesopleuron, propodeum and petiole fine microreticulate sculpture present. Head, body and appendages with abundant, rather coarse, short and erect hairs. Body color yellowish to brownish.
Gyne measurements and indices (1 measured). Gyne description. Subdichthadiigyne. Head rectangular, lateral borders weakly convex and posterior border distinctly concave. Compound eyes present, comprised of about ten weakly defined ommatidia and situated behind head midlength. Mandible subtriangular, masticatory margin crenulate, basal margin edentate. Clypeal apron present, small. Wingless, without any wing sclerites or wing buds. Petiole enlarged, taller than in worker and wider than long in dorsal view. Abdominal segment III broadly attached to following segments, tergosternal fusion not assessed. Petiolar and abdominal segment III spiracles located as in workers. Girdling constriction of abdominal segments IV-VI weakly developed and conspicuous only on segment IV. Tergite of abdominal segment VII (pygidium) large, not U-shaped and mostly concealed by the preceding segment, as in workers. Promesonotal connection present, articulated. Entire body covered with dense pubescence, more erect than in worker.
Male. Unknown.

Biology. This species was collected in montane cloud forest habitat. Workers were first located in sifted leaf litter. After scraping away leaf litter and removing the root mat over an area of about 3 m², a colony was discovered ca. 5 cm below ground in a single soil cavity adjacent to a root. In the mass of workers a single gyne could be seen, as well as many slender larvae. The gyne did not have an extended gaster, there were no eggs visible in the nest, and all the larvae were of approximately the same size, suggesting synchronized brood production.

Leptanilloides femoralis

Measurements in mm and indices (7 measured): HW 0.23-0.25, HL 0.32-0.34, SL 0.14-0.16, MH 0.12-0.14, ML 0.41-0.44, PrW 0.15-0.17, PW 0.08-0.10, PL 0.12, AIIIW 0.12-0.14, AIIIL 0.11-0.14, AIVW 0.22-0.23, AIVL 0.17-0.19, FFeW 0.08-0.09, FFeL 0.

Diagnosis. Worker relatively slender and small compared to most species in the genus; promesonotal connection complete and articulated; abdominal segment III (postpetiole) large relative to petiole; lateroclypeal teeth present; sculpturing moderate; parafrontal ridges present; flange overhanging the metapleural gland opening rounded posteriorly. In general habitus and size it is most similar to Leptanilloides gracilis but can be distinguished by the small opening of the petiolar spiracle (situated in a large depression in gracilis), the posteriorly blunt flange over the metapleural gland (pointed in gracilis), the single pectinate spur on the hind tibia (two simple spurs in gracilis), and the relatively broader femur (FFeW 0.08-0.09 in femoralis, 0.06-0.07 in gracilis). Both femoralis and gracilis are similar to biconstricta from Bolivia and improvisa from Ecuador, but can be distinguished by the distinctly bulging sternite of the petiole, with the bulge most prominent medially (versus indistinctly broadened anteriorly in biconstricta and improvisa).
Worker description. With characters of Leptanilloides (see Diagnosis of Leptanilloides based on worker caste, above). Head elongate and rectangular with lateral margins nearly straight and parallel. Posterior corners rounded and posterior border concave. Parafrontal ridge distinct. Clypeus laterally with blunt tooth distinctly pointing outwards. Mandible short, masticatory margin with small teeth and basal margin crenulate. Basal and masticatory margins distinct, but separated by a rounded angle. Maxillary palp two-segmented. Labial palp two-segmented (in situ count). Scape short and clavate. Antennal joints submoniliform, gradually increasing in size toward apex but not forming an antennal club. Mesosoma long, slender and flattened. Pronotum with a flexible promesonotal suture. Metanotal groove absent. Propodeum unarmed. Propodeal declivity very short and rounding into the dorsal face. Propodeal spiracle round, situated posteriorly on the sclerite. Metapleural gland flange conspicuous, translucent and posteriorly blunt. Femur enlarged, broad. Mid tibia with one simple and hind tibia with one pectinate spur. Petiole smaller than abdominal segment III (postpetiole) in dorsal view. Petiole rectangular, uniformly wide across its length in dorsal view and with straight sides and abdominal segment III dilating posteriorly. In lateral view, petiolar tergite with differentiated anterior and posterior faces, posterior tubulated portion short. Petiolar sternite distinctly bulging medially. Abdominal sternite III evenly rounded. Metasoma long and slender. Abdominal segments IV-VI subequal in length in dorsal view and separated by strong constrictions. Pygidium small and mostly concealed by the preceding segment, U-shaped.
Head with abundant punctures with smooth interspaces on average equaling puncture diameter, except on sides where punctures sparser. Mesosoma and abdomen more finely and sparsely punctate. Laterally on mesopleuron, propodeum and petiole fine microreticulate sculpture present. Head, body and appendages with abundant, rather coarse, short and erect hairs. Body color yellowish.
Gyne and male. Unknown. Biology. Leptanilloides femoralis is known to occur in montane cloud forest habitat. The single collection was from a Winkler sample of sifted litter and rotten wood from the forest floor.
Discussion. This species is superficially very similar to L. gracilis and at first sight might be considered an allopatric population of that species. However, molecular data obtained for ten nuclear gene regions from both morphotypes shows a very large amount of sequence divergence, making it extremely unlikely that the ants belong to the same species (Phil Ward, unpublished data).
Leptanilloides gracilis sp. n.
urn:lsid:zoobank.org:act:9D7C6CE3-5E0D-4D2B-B7AC-58CECD516225
http://species-id.net/wiki/Leptanilloides_gracilis
Figures 1F, 2J

Diagnosis. Worker relatively slender and small compared to most species in the genus; promesonotal connection complete and articulated; abdominal segment III large relative to petiole; lateroclypeal tooth present; sculpturing moderate; parafrontal ridge present; flange overhanging the metapleural gland opening pointed posteriorly. L. gracilis is unique in its modified petiolar spiracle, which opens into a conspicuous pit larger in diameter than the propodeal spiracle opening (Figure 3G), in the maxillary palpus with only one segment, and in the mid and hind tibiae each bearing two simple spurs. In general habitus and size it is most similar to Leptanilloides femoralis but can be distinguished (in addition to the traits mentioned above) by the pointed flange over the metapleural gland (blunt in L. femoralis) and the relatively slender femur. Both L. gracilis and L. femoralis are similar to L. biconstricta from Bolivia and L. improvisa from Ecuador, but can be distinguished by the distinctly bulging sternite of the petiole, with the bulge most prominent medially (versus indistinctly broadened anteriorly in L. biconstricta and L. improvisa).
Worker description. With characters of Leptanilloides (see Diagnosis of Leptanilloides based on worker caste, above). Head elongate and rectangular with lateral margins nearly straight and parallel. Posterior corners rounded and posterior border concave. Parafrontal ridge distinct. Clypeus laterally with blunt tooth distinctly pointing outwards. Mandible short, masticatory margin with small teeth and basal margin crenulate. Basal and masticatory margins distinct, but separated by a rounded angle. Maxillary palp apparently fused to form one segment, although weakly constricted and similar in length to two-segmented labial palp (in situ count). Scape short and clavate. Antennal joints submoniliform, gradually increasing in size toward apex but not forming an antennal club. Mesosoma long, slender and flattened, with a flexible promesonotal suture. Metanotal groove absent. Propodeum unarmed. Propodeal declivity very short and rounding into the dorsal face. Propodeal spiracle round, situated posteriorly on the sclerite. Metapleural gland flange conspicuous, translucent and posteriorly pointed. Femur not conspicuously enlarged, relatively slender. Mid and hind tibia each with two small and simple spurs. Metatibial gland absent. Petiolar spiracle opening to conspicuous depression, in diameter exceeding propodeal spiracle. Petiole smaller than abdominal segment III (postpetiole) in dorsal view. Petiole rectangular, uniformly wide across its length in dorsal view and with straight sides and abdominal segment III dilating posteriorly. In lateral view, petiolar tergite with differentiated anterior and posterior faces, posterior tubulated portion short. Petiolar sternite distinctly bulging medially. Abdominal sternite III evenly rounded. Metasoma long and slender. Abdominal segments IV-VI subequal in length in dorsal view and separated by strong constrictions. Pygidium small and mostly concealed by the preceding segment, U-shaped.
Head with abundant punctures with smooth interspaces on average equaling puncture diameter, except on sides where punctures sparser. Mesosoma and abdomen more finely and sparsely punctate. Laterally on mesopleuron, propodeum and petiole fine microreticulate sculpture present. Head, body and appendages with abundant, rather coarse, short and erect hairs. Body color yellowish.
Male. See discussion under Leptanilloidinae male 1. Biology. The type series was collected in second-growth mesophyll cloud forest. A few dozen workers were in a single mini-Winkler sample, which is litter sifted from a 1 m² plot on the forest floor. Two additional workers were collected in a similar mini-Winkler sample approximately 1 km distant. The species occurred in two of 100 mini-Winkler samples taken at the site. The Guatemala collection was made under similar circumstances, in which a small series of workers occurred in one of 100 mini-Winkler samples from a mature cloud forest habitat.
Discussion. L. gracilis is similar in general habitus to some other small species of the genus, especially L. femoralis. It is unique, however, in some traits that may be considered autapomorphies of this species. It has the segments of the maxillary palpus fused to form one, instead of the two-segmented palpus seen in other species where the palp formula is known. The petiolar spiracle opening is situated in a conspicuous pit of diameter larger than the propodeal spiracle opening; in all other species of Leptanilloides the petiolar spiracle opening is simple and subequal or smaller than that of propodeal spiracle. There are two minute, simple spurs on the mid and hind tibia, while other species of the genus are known to have one simple spur on the mid tibia and a single conspicuously pectinate spur on hind tibia. Figure 6A Description. Head broader than long, with large convex eyes that occupy almost half of the sides of head. Mandible slender and falcate with blunt apex, without differentiated masticatory margin, edentate. External margin of mandible more or less evenly curved along its length. Mandible tips crossing at closure, mandible longer than eye length. Lateroclypeal teeth and hypostomal teeth lacking, clypeus short and transverse, without visible clypeal lamella (apron). Antennal sockets horizontal and exposed, located at the anterior clypeal margin that is projecting anteriorly beyond ventral articulation with labrum. Antenna 13-segmented, each segment longer than wide, with second, third and fourth segments subequal in length. Scape of moderate length, subequal to the length of ultimate antennal segment. Scape about twice the length of the second antennal segment, and about the combined length of the second and third antennal segments. Lateral ocellus separated from median ocellus by little more than its diameter. Distance between lateral ocelli little greater than between median and lateral ocellus and ocelli forming almost equilateral triangle. Mesosoma with distinctive pronotum: U-shaped in dorsal view and reduced anteromedially to a thin horizontal strip, set below the level of the dorsally protruding mesonotum and triangular in lateral view, with pointed posterior apex directed towards the wing base. Mesoscutum lacking notauli and parapsidal lines not discernable. Axillae depressed, not meeting medially, connected by a narrow furrow. Tegula very small and inconspicuous. Mesopleuron lacking oblique transverse sulcus and hence not divided into anepisternum and katepisternum. Mesoscutellum prominently bulging, as seen in lateral view. Metapleural gland not discernable. Propodeum with dorsal and declivous surfaces not differentiated, evenly rounded. Propodeal spiracle small, circular, positioned slightly below midheight of propodeum and slightly posterior to the midlength. Legs slender, mesotibia and metatibia each with two simple spurs, pretarsal claw lacking preapical tooth. Wing with extremely reduced venation. Fore wing with C present, tubular and weakly pigmented. Sc+R very closely approximated to the wing margin, very narrow, compressed vertically, the most apparent vein on forewing. Sc+R1 region not differentiated in absence of Rs·f1 but differing from rest of vein by not being conspicuously vertically compressed; in line with Sc+R, nebulous. Pterostigma not marked. R1·f3 absent. M+Cu nebulous and inconspicuous, slightly curved towards posterior wing margin before division. Rs·f1 absent. 
M·f1, Rs+M, Rs·f2 and Rs·f3 all joined, not differentiated, tubular or partially nebulous. 1r-rs absent. 2r-rs present, spectral. Rs·f4 and Rs·f5 joined and not differentiated in the absence of 2rs-m. Rs·f4&f5 nebulous and poorly visible, terminating before wing margin. Free abscissae of M absent. Abscissae of Cu joined, initially nebulous, continuing throughout most of the length as spectral. Vein A tubular, joining cu-a at a very obtuse angle and confluent with Rs+M, apparently absent beyond cu-a, although weak flexion at the posterior wing margin can be interpreted as spectral A·f2&f3. Posterior margin of fore wing with narrow, conspicuous fold where hamuli attach. Hind wing with C present, tubular, reaching about fourth of wing length. Anterior margin of hind wing past midlength with a conspicuous dark stigma. Two hamuli originate in the region of the stigma. Jugal lobe absent. Metasoma slender in lateral view, obovate in dorsal view, widest at abdominal segment IV. Petiole (abdominal segment II) subquadrate in lateral view, about as long as high or wide, and only weakly constricted posteriorly, the helcium thus apparently quite broad. Petiolar spiracle located on anterior third of the segment, near anterodorsal extremity; abdominal segment III larger than petiole, and not developed as postpetiole nor separated from abdominal segment IV by a marked constriction. Abdominal spiracle III located on anterior third of tergite. Petiole and abdominal segment III with tergosternal fusion. Abdominal segment IV and succeeding segments lacking tergosternal fusion. Segment IV with weakly differentiated presclerites. Spiracle present on anterior third of tergite IV. Abdominal segments V and VI lacking well differentiated presclerites, and not separated from succeeding segments by constrictions. Abdominal spiracles V and VI not discernable in specimens examined but possibly present at anterior margins of respective tergites. Abdominal tergite VIII (pygidium) small and simple but visible dorsally, not wholly covered by abdominal tergite VII. Pygostyli absent. Abdominal sternite IX (subgenital plate) with posterior margin broadly and deeply concave but not bifurcate. Basal ring present, not hypertrophied. Paramere small and slender with pointed, slightly outcurved apex of harpago. Paramere little longer than petiole length. Volsella a simple, narrow and elongate lobe, lacking differentiated cuspis, distally pointed and slightly outcurved. Aedeagus little longer than paramere and volsella, simple, narrow, distally spatulate. Body size very small. Integument mostly smooth and shiny, with scattered piligerous punctures. Pilosity common on most of body, suberect to decumbent. Color light yellowish-brown, head and posterior margins of abdominal segments IV-VII darker, appendages (antennae, mandibles, legs) lighter.
Leptanilloidinae male 1
Discussion. Project LLAMA (Leaf Litter Arthropods of MesoAmerica) is an arthropod biodiversity inventory project carrying out a structured sampling program at sites from southern Mexico (Chiapas) to Nicaragua. The focus is on mature mesophyll forest at multiple elevations. Four days are spent sampling at each study site, and one of the methods is to erect four Malaise traps for the four days. Sampling has been carried out in May and June, 2008 to 2010. The diminutive Leptanilloides male described here is surprisingly common in the Malaise samples, occurring at many of the study sites and across a great range of elevations (these specimens temporarily reside in the personal collections of Borowiec and Longino, ultimately to be deposited in major institutional collections). They have been found in the Sierra de Chiapas (near the type locality of L. gracilis), in the lower elevation Lacandón rainforests of northern Chiapas, in the Petén region of Guatemala, in both lowland and montane regions of central Guatemala, and in montane regions of Honduras. When they occur at a site, they are typically found in more than one of the Malaise traps, but usually no more than about five males per trap in a 4-day sample. They are very easily overlooked because of their similarity, in both size and degree of sclerotization, to small nematoceran Diptera that are often abundant in Malaise samples.
From the hitherto described males of Leptanilloides (Donoso et al. 2006, Ward 2007, Ward & Brady 2009) and Amyrmex (Ward & Brady 2009), this morphospecies can be differentiated by the combination of falcate mandibles, small size and extremely reduced wing venation, with Rs·f1 and the pterostigma absent in the fore wing and the hind wing venation restricted to a short C stub, as well as by the external structure of the genitalia. The falcate mandibles are similar in shape to the mandibles of males of L. nubecula, but the specimens of male 1 are smaller than the males of L. nubecula and apparently have less well developed wing venation. Although Donoso et al. (2006) did not describe the wing venation of L. nubecula in detail, and we have not examined the male specimens of that species, from the figure given in their treatment (Fig. 26, p. 55) it is clear that the wing venation is much better developed in L. nubecula: Rs·f1 can be seen in the fore wing and the hind wing has a conspicuous Sc+R running almost three fourths of the wing length, while in male 1 both these veins are apparently absent.
The largely sympatric distribution, two simple spurs on mid and hind tibia, overall small size, and relative abundance make these specimens good candidates for the male caste of L. gracilis. Figure Measurements in mm and indices (2 measured): HW 0.37-0.40, HL 0.30-0.31, EL 0.14-0.15, SL 0.13-0.14, LAII 0.07-0.08, LAIII 0.07, LAIV 0.06-0.07, LAX-III 0.14-0.15, MH 0. PL 0.16,FFeW 0.08, Description. Head broader than long, with large convex eyes that occupy almost half of the sides of head. Mandible slender, tapering to pointed apex, without differentiated masticatory margin, edentate. External margin of mandible more or less straight along its length. Mandible tips crossing at closure, mandible slightly longer than eye length. Lateroclypeal teeth and hypostomal teeth lacking, clypeus short and transverse, without visible clypeal lamella (apron). Antennal sockets horizontal and exposed, located at the anterior clypeal margin that is not projecting anteriorly beyond ventral articulation with labrum. Antenna 13-segmented, each segment longer than wide, with second, third and fourth segments subequal in length. Scape of moderate length, subequal to the length of ultimate antennal segment. Scape length about twice the length of the second antennal segment, and about the combined length of the second and third antennal segments. Lateral ocellus separated from median ocellus by lit-tle more than its diameter. Distance greater between lateral ocelli than between median and lateral ocellus and ocelli forming isosceles triangle. Mesosoma with distinctive pronotum: U-shaped in dorsal view and reduced anteromedially to a thin horizontal strip, set below the level of the dorsally protruding mesonotum and triangular in lateral view, with pointed posterior apex directed towards the wing base. Mesoscutum lacking notauli and parapsidal lines present, weakly marked but long, running about two thirds of mesoscutum length. Axillae depressed, not meeting medially, connected by a narrow furrow; tegula very small and inconspicuous. Mesopleuron lacking oblique transverse sulcus and hence not divided into anepisternum and katepisternum. Mesoscutellum raised above level of mesosctum but not prominently bulging, as seen in lateral view. Metapleural gland not discernable. Propodeum with dorsal surface clearly shorter than declivous. Propodeal spiracle small, circular, positioned at midheight of propodeum and slightly posterior to the metanotum. Legs slender, mid tibia with one simple and hind tibia with one pectinate spur, pretarsal claw lacking preapical tooth. Wing with relatively well developed venation (for Leptanilloides). Fore wing with C present, tubular and weakly pigmented. Sc+R very closely approximated to the wing margin, very narrow, compressed vertically. Sc+R1 region joining Sc+R at obtuse angle, tubular. Pterostigma well marked. R1·f3 absent. M+Cu nebulous but conspicuous, slightly curved towards posterior wing margin before division. Rs·f1 stub present, tubular but not reaching Sc+R. M·f1 pigmented, tubular. Rs+M tubular and pigmented, straight. Rs·f2 and Rs·f3 joined, not differentiated, tubular and pigmented. 1r-rs absent. 2r-rs present, tubular and pigmented. Rs·f4 and Rs·f5 joined and not differentiated in the absence of 2rs-m. Rs·f4&f5 partly tubular and partly nebulous, terminating before wing margin. Free abscissae of M present, nebulous and very weakly visible. Abscissae of Cu joined, nebulous throughout most of the length and continuing as spectral. 
Vein A tubular, joining cu-a at obtuse angle and confluent with Rs+M, apparently absent beyond cu-a. Posterior margin of fore wing with narrow, conspicuous fold where hamuli attach. Hind wing with C absent. Rc+R present, tubular but compressed, reaching about third of wing length. Anterior margin of hind wing with little differentiated pigmentation. Three hamuli originate in the pigmented region. Jugal lobe absent. Metasoma slender in lateral view, obovate in dorsal view, widest at abdominal segment IV. Petiole (abdominal segment II) ovate in lateral view, longer than high or wide, and weakly constricted posteriorly, the helcium thus apparently quite broad. Petiolar spiracle located on anterior third of the segment, near anterodorsal extremity. Abdominal segment III larger than petiole, and not developed as postpetiole nor separated from abdominal segment IV by a marked constriction. Abdominal spiracle III located on anterior third of tergite. Abdominal segments II and III with tergosternal fusion. Abdominal segment IV and succeeding segments lacking tergosternal fusion. Segment IV with weakly differentiated presclerites. Spiracle present on anterior third of tergite IV. Abdominal segments V and VI lacking well differentiated presclerites, and not separated from succeeding segments by constrictions. Abdominal spiracles V and VI not discernable in specimens examined but possibly present at anterior margins of respective tergites. Pygostyli absent. Abdominal sternite IX (subgenital plate) was hidden and not observed. Basal ring present, not hypertrophied. Paramere relatively broad, not tapering, apically harpago truncated. Paramere little longer than petiole length. Volsella simple, lacking differentiated cuspis, tapering suddenly at midlength and distally pointed, forming ventrally directed hooks. Aedeagus apparently very short, could not be observed directly without dissection. Body size moderate. Integument mostly smooth and shiny, with scattered piligerous punctures. Pilosity common on most of body, suberect to decumbent. Color light brown, head, and mesoscutellum darker. Antennal segments I-III light, the rest light brown. Other appendages (mandibles, legs) lighter.
Leptanilloidinae male 2
Discussion. These two males are from 1200 m elevation wet forest, at the Wilson Botanical Garden in southern Costa Rica. They were collected by Marc Pollet in yellow pan traps on the forest floor in late August 2010.
These large male specimens can be recognized by sublinear, evenly tapering mandible without differentiated basal and masticatory margins, moderate size and relatively well developed wing venation. From Leptanilloidinae male 3 they differ in subequal dorsal and declivous faces of propodeum (dorsal surface shorter in male 3), shorter petiole and free abscissae of M joining Rs+M. From L. mckennae they can be distinguished by arched propodeum (flattened in mckennae) and sublinear mandibles (subtriangular in mckennae). Figure Description. Head broader than long, with large convex eyes that occupy almost half of the sides of head. Mandible slender, widest at midlength but without differentiated masticatory margin, tapering to pointed apex, edentate. External margin of mandible more or less straight along its length. Mandible tips crossing at closure, mandible length subequal to eye length. Lateroclypeal teeth and hypostomal teeth lacking, clypeus short and transverse, with narrow clypeal lamella (apron). Antennal sockets horizontal and exposed, located at the anterior clypeal margin that is not projecting anteriorly beyond ventral articulation with labrum. Antenna 13-segmented, each seg-ment longer than wide, with third segment the shortest. Scape of moderate length, subequal to the length of ultimate antennal segment. Scape length less than twice the length of the second antennal segment, and less than the combined length of the second and third antennal segments. Lateral ocellus separated from median ocellus by more than its diameter. Distance greater between lateral ocelli than between median and lateral ocellus and ocelli forming isosceles triangle. Mesosoma with distinctive pronotum: U-shaped in dorsal view and reduced anteromedially to a thin horizontal strip, set below the level of the dorsally protruding mesonotum and triangular in lateral view, with pointed posterior apex directed towards the wing base. Mesoscutum lacking notauli. Parapsidal lines present, long, running about the third of mesoscutum length. Axillae depressed, not meeting medially, connected by a narrow furrow; tegula very small and inconspicuous. Mesopleuron lacking oblique transverse sulcus and hence not divided into anepisternum and katepisternum. Mesoscutellum raised above level of mesoscutum and prominently bulging, as seen in lateral view. Metapleural gland not discernable. Propodeum with dorsal surface somewhat shorter than declivous. Propodeal spiracle small, circular, positioned slightly above midheight of propodeum and slightly posterior to the metanotum. Legs slender, mid tibia with one simple and hind tibia with one pectinate spur, pretarsal claw lacking preapical tooth. Wing with relatively well developed venation. Fore wing with C present, tubular and pigmented. Sc+R approximated to the wing margin, very narrow, compressed vertically. Sc+R1in line with Sc+R, tubular. Pterostigma well marked. R1·f3 absent. M+Cu tubular, slightly curved towards posterior wing margin before division. Rs·f1 present, nebulous. M·f1 pigmented, tubular. Rs+M&Rs·f2&Rs·f3 tubular and pigmented. 1rrs absent. 2r-rs present, tubular and pigmented. Rs·f4&Rs·f5 tubular, terminating before wing margin. Free abscissae of M nebulous, very weakly visible and not joining to Rs+M&Rs·f2&Rs·f3. Abscissae of Cu joined, nebulous throughout most of the length and continuing as spectral. Vein A tubular, joining cu-a at obtuse angle and confluent with Rs+M, apparently absent beyond cu-a. 
Posterior margin of fore wing with fold where hamuli attach narrow, conspicuous. Hind wing with C apparently present, narrow and faint except basal fourth of wing length. Sc+R present, tubular along fourth of wing length, continuing as nebulous. Sc+R1 a short nebulous stub. Rs·f1&Rs·f2 nebulous, terminating at about three fourth of wing length. Anterior margin of hind wing with little differentiated pigmentation. Three hamuli originate in the pigmented region. Jugal lobe absent. Metasoma slender in lateral view, obovate in dorsal view, widest at abdominal segment IV. Petiole (abdominal segment II) elongate-ovate in lateral view, more than two times longer than high or wide, and weakly constricted posteriorly, the helcium thus apparently quite broad. Petiolar spiracle located on anterior fourth of the segment, near anterodorsal extremity. Abdominal segment III larger than petiole, and not developed as postpetiole nor separated from abdominal segment IV by marked constriction. Abdominal spiracle III located on anterior third of tergite. Abdominal segments II and III with tergosternal fusion. Abdominal segment IV and succeeding segments lacking tergosternal fusion. Segment IV with weakly differentiated presclerites. Spiracle present on anterior third of tergite IV. Abdominal segments V and VI lacking well differentiated presclerites, and not separated from succeeding segments by constrictions. Abdominal spiracles V and VI not discernable in specimens examined but possibly present at anterior margins of respective tergites. Abdominal tergite VIII (pygidium) small and simple but visible dorsally, not wholly covered by abdominal tergite VII. Pygostyli absent. Abdominal sternite IX (subgenital plate) with posterior margin broadly and deeply concave but not bifurcate. Basal ring present, not hypertrophied. Paramere relatively broad, harpago evenly rounded at apex; paramere shorter than petiole length. Volsella a simple, broad and elongate lobe, lacking differentiated cuspis, distally pointed. Aedeagus about equal in length to paramere and volsella, simple, narrow, distally spatulate. Body size moderate. Integument mostly smooth and shiny, with scattered piligerous punctures. Pilosity common on most of body, suberect to decumbent. Color light brown, head and metasoma past abdominal segment III darker. Antennal segment II light, the rest light brownish. Other appendages (mandibles, legs) lighter than body.
Leptanilloidinae male 3
Discussion. This form has been collected at two sites in the Petén region of Guatemala and one locality in Chiapas, Mexico.
This relatively large male differs from Leptanilloidinae male 2 and L. mckennae in the dorsal face of the propodeum being shorter than the declivity (subequal in male 2 and flattened in mckennae), longer petiole, and free abscissae of M not connected to Rs+M. Additionally, from mckennae it differs by the slender mandibles without well differentiated masticatory and basal margins (subtriangular in mckennae). We have examined an additional specimen from Barro Colorado Island, Panama ("Leptanilloidine genus 1 PM01"; CASENT0106194), already mentioned by Ward & Brady (2009) that may belong here. It is larger (ML 0.74) with wider head (HW 0.43) and larger eyes (EL 0.20) but with relatively shorter petiole (PW 0.10, PL 0.15). The wing venation is similar, except veins of radial sector being more approximated to the anterior wing margin and thus making the closed veins of the wing appear more flattened. There is also a stub of free abscissae of M, completely absent in the three males from Mexico and Guatemala. Genitalia in this specimen are retracted and partly obscured, but seem similar to the genitalia present in Leptanilloidinae male 3. In the absence of collections of males of similar morphotypes between Guatemala and Panama, we are unable to tell whether this form represents a geographical variant or a distinct species.
Conclusions
The leptanilloidine ants, apparently due to their presumably subterranean habits, represent a serious challenge to sampling. The ratio of distinct worker-based morphospecies to collecting events continues to be high, and the number of male morphotypes present in the collections from recent efforts in Central America (LLAMA project) exceeds the number of known worker-based species from the same region. This makes it certain that new species will continue to be discovered. When molecular data become available for more workers and unassociated males of Leptanilloides and workers of Asphinctanilloides, it seems most probable that one of the genera will prove identical to Amyrmex. Given the unsatisfactory state of knowledge of the subfamily, future efforts documenting the diversity, biology, morphology, internal phylogeny, and phylogenetic position of Leptanilloidinae within the dorylomorphs are much needed.
Synthesis and Characterization of Two Novel Organic-Inorganic Compounds Based on Tetrahexyl and Tetraheptyl Ammonium Ions and the Preyssler Anion and Their Catalytic Activities in the Synthesis of 4-Aminopyrazolo[3,4-d]pyrimidines
Two novel organic–inorganic compounds based on tetrahexylammonium (THA) and tetraheptylammonium (THPA) ions and the Preyssler anion, [NaP5W30O110]14-, were synthesized and formulated as (THA)7.7H6.3[NaP5W30O110] (A) and (THPA)7.5H6.5[NaP5W30O110] (B). The synthesized compounds were characterized by IR, UV, and TGA and used for the catalytic synthesis of 4-aminopyrazolo[3,4-d]pyrimidine derivatives 2a-2d. Our findings showed efficient catalytic activities for A and B.
Introduction
The synthesis of large polytungstates has attracted much attention. In this area, some examples of species containing more than 18 tungsten atoms have been reported [1]. One of the largest polytungstates is [NaP 5 W 30 O 110 ] 14-, the so-called Preyssler anion. This heteropolyanion consists of a cyclic assembly of five [PW 6 O 22 ] units, each derived from the Keggin anion, [PW 12 O 40 ] 3-. This anion can be obtained by the removal of two sets of three corner-shared WO 6 octahedra. We have shown that the pure acid form of this polyanion, H 14 [NaP 5 W 30 O 110 ], has excellent potential applications in catalytic reactions [2][3][4][5][6]. Usually, acid-catalyzed reactions are carried out by use of diverse conventional mineral acids such as H 2 SO 4 , HF, HCl, H 3 PO 4 , etc. The replacement of these conventional hazardous and polluting corrosive liquid acid catalysts by solid acid catalysts is one of the key current demands in the field of catalysis. Cleaner technologies could be made possible by making use of environmentally friendly catalysts involving the use of solid acids. It was shown that heteropolyacids in the solid state are pure Bronsted acids and stronger acids than conventional solid acids such as SiO 2 -Al 2 O 3 , H 3 PO 4 , HNO 3 , H 2 SO 4 , and HX and HY zeolites [7,8]. These compounds have several advantages as catalysts which make them economically and environmentally attractive [9][10][11][12][13]. If one applies the proposed principles of Green Chemistry, we see that the Preyssler catalyst can be considered a promising green catalyst candidate. This solid acid catalyst is "green" with respect to corrosiveness, safety, quantity of waste, and separability. Moreover, while some acidic catalysts such as HCl, HNO 3 , H 2 SO 4 , etc., can produce chlorinated, nitrated, sulfated, and similar by-products, Preyssler acid does not produce any of these by-products. Thus, it can reduce the quantity of waste formed. In our studies it has been found that Preyssler's anion catalyzes oxidations of organic substances without any structural degradation. This leads to the recovery and recyclability of this catalyst, which is very important in catalytic processes, especially in industry.
We are interested in the development of applications of the Preyssler anion in other forms, such as organic-inorganic forms. Recently, we have developed a series of reactions catalyzed by Preyssler and nano-Preyssler acidic or inorganic salt forms [14][15][16][17].
Research on new types of organic-inorganic multifunctional molecular materials continues to attract considerable interest in organic synthesis [18]. Along this line, heteropolyacids are excellent molecular acceptors and can form novel organic-inorganic complexes with a number of organic substrates containing N, S and O atoms. Such organic substrates often include several types of cationic organic species [19].
Tetrahexylammonium (THA) and tetraheptylammonium (THPA) cations are examples of excellent electron donors that possess high charge and suitable size, which makes them very good organic building blocks for the construction of organic-inorganic hybrid compounds with the Preyssler anion. In some cases heteropolyacids can be partly reduced, which offers the opportunity to prepare organic donor-inorganic acceptor materials with a mixed valence state in the organic and inorganic counterparts. The presence of strong electron delocalization in the organic chains suggests electron transfer between organic donors and inorganic acceptors [20].
To the best of our knowledge, there are no reports of the reaction products of the Preyssler heteropolyacid with THA or THPA as organic cations. Herein, we wish to report two novel organic-inorganic compounds based on THA and THPA and the Preyssler anion that display good catalytic activity in organic media.
Results and Discussion
The synthesis of the two novel compounds was carried out based on the reactions of the Preyssler anion with THA and THPA, by neutralization of the Preyssler acid. In this case, diluted solutions were generally required to avoid the precipitation of mixed salts. The products were studied and characterized by potentiometric and conductometric titrations, IR and UV spectroscopy, and thermogravimetric analysis (TGA).
IR Spectroscopy
The Preyssler anion is made up of five PW 6 units arranged in a crown, so that the whole anion has an internal fivefold symmetry axis. Perpendicular to this axis is a mirror plane that contains the five phosphorus atoms. The tungsten atoms are distributed in four parallel planes perpendicular to the axis. A PW 6 unit consists of two groups of three corner-shared WO 6 octahedra. Two pairs of octahedra of each group are joined together by sharing an edge located in the mirror plane. Each WO 6 octahedron shares a vertex with the central PO 4 tetrahedron [21]. All tungsten atoms are octahedrally surrounded by oxygen atoms.
The anion contains W=O double bonds which are directed toward the exterior of the polyanion, W-O b -W bonds (inter bridges between corner-sharing octahedra), W-O c -W bonds (intra bridges between edge-sharing octahedra), and one XO 4 tetrahedron. The XO 4 tetrahedron is surrounded by MO 6 octahedron sets linked together through oxygen atoms.
The IR spectra for A and B exhibited prominent bands for the polyoxoanion and for THA and THPA at 600-1,200 cm -1 and 1,300-3,000 cm -1 , respectively (Figures 1 and 2). The symmetric and asymmetric stretchings of the M-O bonds are observed in the following spectral regions for the Preyssler anion: W=O d bonds (960 cm -1 ), W-O b -W bridges (920 cm -1 ), and W-O c -W bridges (795 cm -1 ). The P-O stretchings are observed at 1,000-1,165 cm -1 . The sharp peak at 1,165 cm -1 is a characteristic signal of the Preyssler anion that cannot be observed in other heteropolyanions. With respect to the fingerprint region of the Preyssler anion (600-1,200 cm -1 ), the IR spectra showed that the polyanion retains the Preyssler structure in both A and B, and has electronic interactions with the organic moieties in the solid state.
UV spectrum
The UV spectra of A and B exhibit two peaks at 217 and 280 nm, ascribed to O t →W and O b,c →W charge transfer bands, respectively (Figure 3). The maximum wavelength at 280 nm confirms that the polyanion retains the Preyssler structure in both A and B.
Thermogravimetric analysis
Thermogravimetric analysis (TGA), coupled with spectroscopic measurements, could unambiguously elucidate the structure of the compounds. The TGA of the two compounds was performed in the range of 50-600 °C. The TG curves generally show two steps. The first one occurs at temperatures lower than 120 °C, and corresponds to the loss of crystallization water. This leads to the corresponding anhydrous heteropolycompound. The second step occurs at temperatures higher than 200 °C, depending on the metal, the heteroatom and the counterion. It corresponds to the decomposition of the heteropolycompound.
For the acids, the protons come off with oxygen, as n/2H 2 O. In the case of a mixed salt with acidic hydrogen and tetraalkylammonium cations, the weight loss is expressed as x (tetraalkylammonium) 2 O+n/2 H 2 O to take into account the departure of the organic material and the protons.
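Written out as a fraction of the salt's formula mass, the weight-loss percentage implied by this expression takes the following form (a hedged reconstruction given for orientation only; the molar-mass symbols M(...) are introduced here and do not appear in the original text):

$$\text{weight loss}\,(\%) \;=\; \frac{\tfrac{x}{2}\,M\!\left[(\mathrm{R_4N})_2\mathrm{O}\right] \;+\; \tfrac{n}{2}\,M(\mathrm{H_2O})}{M\!\left[(\mathrm{R_4N})_x\mathrm{H}_n\,\mathrm{NaP_5W_{30}O_{110}}\right]} \times 100,$$

where R4N stands for the tetraalkylammonium cation, x for the number of cations (7.7 for A, 7.5 for B), and n for the number of acidic protons (6.3 for A, 6.5 for B). This is the calculation behind the "calculated values" quoted below.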
From room temperature to 130 °C, A remains stable and there is no weight loss (Figure 4). This means that this compound is anhydrous and contains no crystallization water. In the range of 180-400 °C, the total weight loss of A is 28.15% (calculated value: 28.21%), corresponding to 7.7/2 (THA) 2 O + 6.3/2 H 2 O. Therefore, compound A is formulated as (THA) 7.7 H 6.3 [NaP 5 W 30 O 110 ]. From room temperature to 130 °C, there was no weight loss for compound B (Figure 5). Thus, this compound is anhydrous. In the range of 200-440 °C, the weight loss was 30.30% (calculated value: 30.35%), corresponding to 7.5/2 (THPA) 2 O + 6.5/2 H 2 O. Therefore, compound B is formulated as (THPA) 7.5 H 6.5 [NaP 5 W 30 O 110 ].
Potentiometric titrations
The pH scale in an aqueous solution is governed by k w (10 -14 ), and the equilibrium in water predicts a potential change of 0.059 V for each unit change in pH in the linear region of the equation E = k + 0.059 pH [22]. In a similar way, the pH scale in non-aqueous solvents is governed by the autoprotolysis constants of the solvent. Solvents with small autoprotolysis constants were thought to be advantageous for titrations. They provide a better opportunity for precise titrations, because of their longer millivolt scales. In our study, we used solvents such as acetone, N,N-dimethylformamide and acetonitrile, with longer millivolt ranges, for the potentiometric titrations [22].
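For reference, the 0.059 V per pH unit slope quoted above is the Nernstian slope at 25 °C; a minimal statement of the relation (standard electrochemistry, not a formula taken from the paper) is

$$E \;=\; k + \frac{2.303\,R\,T}{F}\,\mathrm{pH} \;\approx\; k + 0.059\ \mathrm{V}\times\mathrm{pH} \qquad (T = 298\ \mathrm{K}),$$

and in a non-aqueous solvent the usable millivolt range is set by the autoprotolysis constant of that solvent rather than by k w of water.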
However, in every case, the potentiometric titration curves of A and B showed that two or three protons were dissociated. This is attributed to the behavior of the glass electrode. A glass electrode is widely used as the indicator electrode in non-aqueous as well as in aqueous titrations. Its response to pH has been shown to be Nernstian in various non-aqueous solvents [23], and many acid dissociation constants in aprotic solvents have been obtained by using a glass electrode as pH sensor [24].
In non-aqueous media, these electrodes show certain undesirable features. For example, the potential response of a glass electrode in non-aqueous solutions is often very slow, so in some cases it takes many hours before the equilibrium potential is reached. Some efforts have been made to improve the response speed, but without sufficient success [25]. In addition, the solvents dehydrate the glass membrane, thereby reducing its affinity for, or response to, hydrogen ions.
Thus, it is difficult to obtain reliable information concerning the number of acidic protons with a glass electrode. It is suggested that the solvent as well as the glass electrode can affect the number of titrated protons.
Conductometric titration
For A and B, determination of the acidic hydrogens was carried out via conductometric titration in acetone, N,N-dimethylformamide and acetonitrile. In all cases the conductometric curves showed that two or three protons were dissociated. It is well known that the proton conductivity of heteropolyacids is strictly related to the number of water molecules coordinated to the heteropolyanions. According to Kreuer, the heteropolyacid acts as a Bronsted acid toward the hydration water, which is loosely bound in the structure, resulting in proton conductivity [26].
Consequently, the conductivity of a heteropolyacid is strictly related to the number of water molecules coordinated to the anion, and a large uptake of water is essential for fast proton conduction. As indicated by the thermogravimetric results, there are no water molecules in A and B. This leads to the absence of proton conduction.
Another explanation is based on hydrogen bonding. We have recently carried out a systematic vibrational study of Keggin heteropolyacids with amino acids [27]. Our study showed that intermolecular interactions such as hydrogen bonds between the external oxygens and water molecules and amino acids led to a change and splitting in the frequencies of the metal-oxygen stretching of the polyanion. In Figures 1 and 2 we can see that the characteristic bands of the Preyssler anion are unchanged. This is attributed to the absence of water molecules, which results in the absence of hydrogen bonding and can thus lead to the absence of proton conductivity.
Catalytic activity
In connection with our earlier work using Keggin heteropolyacids in the synthesis of 4-aminopyrazolo[3,4-d]pyrimidines [28], we wish to report the results of our study on the use of A and B in this reaction. When a mixture of 5-amino-4-cyano-1-substituted-1H-pyrazoles and formamide in acetic acid was refluxed in the presence of A or B as catalyst, 4-aminopyrazolo[3,4-d]pyrimidines were obtained (Scheme 1). The results are summarized in Table 1. The results show that the catalysts can catalyze this reaction under mild conditions. All compounds were characterized by mass, IR, and 1 H-NMR spectra (Table 2).
Chemicals and apparatus
Preyssler anion was synthesized according to our earlier work [6]. Pyrazole derivatives were prepared according to literature procedures [29,30]. All of the reagents used were obtained from commercial sources. All yields were calculated from crystallized products. IR spectra were obtained with a Bruker 500 scientific spectrometer. 1 H-NMR spectra were recorded on a Bruker 100 MHz Aspect 3000 FT NMR spectrometer. Mass spectra were obtained with a Varian CH-7 instrument. The UV spectra were obtained with a double beam UV-Vis spectrophotometer (Philips 8800). The TG curves were recorded using a TGA-1500 instrument. Melting points were obtained on an Electrothermal type 9100 apparatus. All of the titrations were performed with tetrabutylammonium hydroxide.
General procedure for synthesis of compounds A and B
To a stirred aqueous solution of H 14 [NaP 5 W 30 O 110 ], THA or THPA was added (molar ratio of 1:14). The mixture was stirred for about 6 hours, which induced the formation of a viscous solid and an oil for THA and THPA, respectively. The separated compounds were purified in a mixture of water and acetonitrile. Recrystallization was performed in water and acetonitrile.
Catalytic test
An appropriate pyrazole (0.1 mmol) was mixed with formamide (0.2 mmol) and catalyst (0.05 mmol) in acetic acid (10 mL) and refluxed for 6 hours. After the reaction was completed, the catalyst was separated. The residue was evaporated and water was added. The product was filtered and purified from a mixture of water and ethanol (Table 1). All compounds were characterized by mass, IR, and 1 H-NMR spectra (Table 2).
Conclusions
Two novel organic-inorganic heteropolyanion compounds were prepared for the first time and showed catalytic activity in the synthesis of pyrazolopyrimidines. Thermogravimetric analysis showed that A and B consist of 7.7 and 7.5 tetraalkylammonium cations, respectively. This work demonstrates the applicability of the Preyssler anion for reactions that require bifunctional catalysts combining acidic hydrogens and an organic moiety, along with catalytic activity and functionality over a wide range of pH.
Evaluation of a Thiolated Chitosan Scaffold for Local Delivery of BMP-2 for Osteogenic Differentiation and Ectopic Bone Formation
Thiolated chitosan (Thio-CS) is a well-established pharmaceutical excipient for drug delivery. However, its use as a scaffold for bone formation has not been investigated. The aim of this study was to evaluate the potential of Thio-CS in bone morphogenetic protein-2 (BMP-2) delivery and bone formation. In vitro study showed that BMP-2 interacted with the Thio-CS and did not affect its swelling behavior. The release kinetics of BMP-2 from the Thio-CS was slightly delayed (70%) within 7 days compared with that from collagen gel (Col-gel, 85%), which is widely used in BMP-2 delivery. The BMP-2 released from Thio-CS increased osteoblastic cell differentiation but did not show any cytotoxicity until 21 days. Analysis of the in vivo ectopic bone formation at 4 weeks of posttransplantation showed that use of Thio-CS for BMP-2 delivery induced bone formation to a greater extent (1.8-fold) than that of Col-gel. However, bone mineral density in both bones was equivalent, regardless of the Thio-CS or Col-gel carrier. Taken together, the Thio-CS system might be useful for delivering the osteogenic protein BMP-2 and presents a promising bone regeneration strategy.
Introduction
Current approaches for bone regeneration such as autografts and allografts face significant limitations [1]. Various factors including limited supply, risk of immune rejection, and chronic immune responses have prompted interest in bone graft substitutes. Many growth factors for bone formation have been reported. Bone morphogenetic protein-2 (BMP-2) is generally acknowledged due to its superior activity. It has been used in dental and orthopedic biomaterials to promote bone formation because of its strong osteogenic activity. BMP-2 induces bone formation in vivo [2][3][4][5][6][7], presumably by stimulating mesenchymal stem cell differentiation into an osteoblast lineage and by increasing the number of differentiated osteoblasts capable of forming bone [8]. This stimulatory effect of BMP-2 on osteoblastic differentiation is of major importance during bone healing. Despite its strong osteoinductive activity, the systemic delivery of BMP-2 can be impractical and undesirable because it may have uncontrolled adverse effects, such as unwanted ectopic bone formation. In addition, clinical use of BMP-2 has been limited by the lack of suitable delivery systems. Systems evaluated as carriers to localize BMP-2 include porous hydroxyapatite (HA) [9], absorbable collagen [10], polylactic acid [11], polylactic-co-glycolic acid [12], demineralized bone powder, and bovine collagen type sponges [13]. Although HA is a biocompatible material, it is not biodegradable. Therefore, it remains at the defect site. Collagen gel (Col-gel) can be immunogenic, and demineralized bone powder suffers from insufficient supply and poor characterization as a delivery system. Thus, an efficacious delivery system (i.e., scaffold) is still required to localize BMP-2 at the desired site.
Natural biomaterials are widely used for scaffold fabrication in tissue engineering because they facilitate cell attachment and maintenance of the differentiation function. Chitosan (CS), obtained by alkaline deacetylation of chitin, is one of the most abundant polysaccharides in nature. It has received considerable attention in a variety of areas such as pharmaceutics [14], tissue engineering [15], antimicrobial agents [16], and chromatography [17] because of its properties, which include enzymatic biodegradability, nontoxicity, and biocompatibility, even when used in human and animal models [18][19][20]. However, CS suffers from limited solubility at physiological pH and causes presystemic metabolism of drugs in the presence of proteolytic enzymes [21]. These inherent drawbacks of CS have been overcome by forming derivatives such as carboxylated CS [22], CS with various conjugates [23], thiolated CS [24] or acylated CS [25]. Among these various CS derivatives, thiomer technology has a range of advantages for drug delivery such as sustained drug release [26] and high stability [24]. The usefulness of thiolated chitosan (Thio-CS) as a scaffold for controlled drug release has been demonstrated by means of model drugs such as clotrimazole [27], salmon calcitonin [28], insulin [29], and tobramycin [30]. However, most of the research has focused on systemic drug delivery such as neural tissue [31], peroral peptide delivery [32], and nasal administration [33]. Despite the advantages of Thio-CS for tissue engineering, the potential application of this material for bone tissue has not been investigated. The aim of this study was to evaluate the physicochemical properties of Thio-CS for BMP-2 delivery and bone formation in vitro and in vivo.
Fabrication of Thio-CS.
To obtain a 1% (w/v) solution, 500 mg of CS (average molecular mass: 400 kDa, Fluka GmbH, Buchs, Switzerland) was dissolved in 50 mL of 1% acetic acid by stirring the mixture for 1 h. Traut's reagent (2-iminothiolane-HCl, 2-IT) was used for the immobilization of thiol groups to primary amino groups of proteins and the modification of CS. We have previously reported the optimal conditions for fabricating Thio-CS [34]. In brief, the pH of a mixture containing a 1% solution of CS and 0.1 mg/mL of 2-IT was adjusted, ranging from 4 to 12. The mixtures were then incubated for 30 min at room temperature. To investigate the time effect of the air oxidation on disulfide bond formation, the samples were incubated for 3-day intervals at pH 7 under stirring. To remove unreacted agent, the resulting mixture was dialyzed with several exchanges of the dialyzing solution.
To prevent further oxidation of the samples, 5 mM or 0.4 mM HCl solution was used as the dialyzing solution, depending on the dialysis step. The samples were freeze-dried at −80 °C and 0.01 mbar (Christ Beta 1-8 K; Germany) and stored at 4 °C until further use.
Determination of the Thiol Group and Disulfide Bond.
Ellman's reagent (3 mg of 5,5′-dithiobis(2-nitrobenzoic acid)) (Sigma, St. Louis, MO, USA) was used to quantify the amount of thiol groups on the modified CS, as described previously [34]. Briefly, 5 mg of the freeze-dried samples was dissolved in 2.5 mL of demineralized water. Then, 250 µL of the samples, 250 µL of 5 M phosphate-buffered saline (PBS, pH 8.0), and 500 µL of Ellman's reagent dissolved in 10 mL of 0.5 M PBS (pH 8.0) were reacted in the same tube. The reaction was allowed to proceed for 2 h at room temperature. After removal of the precipitated polymer by centrifugation (24,000 ×g; 5 min), 300 µL of the supernatant was transferred to a microtitration plate, and the absorbance was immediately measured at 450 nm (Bio-Tek Instruments, Winooski, VT, USA). The amount of thiol moieties was calculated from a standard curve of absorbance obtained from solutions with increasing concentrations of L-cysteine hydrochloride hydrate (Sigma-Aldrich, Steinheim, Germany). Disulfide bond content within the precipitate or the reacting solution was determined after reduction with NaBH 4 and addition of Ellman's reagent as described by Habeeb [35]. The degree of cross-linking of such materials is usually determined by a chemical analysis method using 2,4,6-trinitrobenzenesulfonic acid (TNBS), labeling of residual amine groups [36]. To 0.3 mL of sample, 0.3 mL of NaHCO 3 (4%) and 0.3 mL of TNBS (0.1%) were added. The solution was allowed to react for 2 h at 40 °C, and then 0.3 mL of sodium dodecyl sulfate (10%) and finally 0.17 mL HCl (1 M) were added. The absorbance of the resulting solution was read photometrically at 335 nm, against a blank with 0.3 mL of H 2 O instead of the sample.
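As a worked illustration of this standard-curve step, the sketch below fits a linear L-cysteine calibration and converts a sample absorbance into a thiol concentration. All numerical values are hypothetical placeholders, not data from this study:

```python
import numpy as np

# Hypothetical L-cysteine standard curve: concentration (umol/mL) vs. absorbance at 450 nm
std_conc = np.array([0.0, 0.1, 0.2, 0.4, 0.8])
std_abs = np.array([0.02, 0.10, 0.19, 0.37, 0.74])

# Linear fit: absorbance = slope * concentration + intercept
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def thiol_content(sample_abs, dilution_factor=1.0):
    """Convert a sample absorbance into a thiol concentration via the standard curve."""
    return (sample_abs - intercept) / slope * dilution_factor

print(round(thiol_content(0.28), 2))  # roughly 0.3 umol/mL for these placeholder numbers
```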
Evaluation of the Swelling Behavior.
To understand the effect of the molecular transport of liquids into Thio-CS, the water-absorbing (i.e., swelling) capacity was determined by gravimetric methods. To measure the weight of Thio-CS after swelling, 0.1 g of Thio-CS was placed in a trans-well (Corning Inc., Corning, NY, USA). The trans-well was then placed into a 24-well culture dish containing a physiological solution of 1 mL of PBS (pH 7.0) and incubated at room temperature. The swelling ratio was measured by comparing the change in the weight of Thio-CS before and after incubating. The percentage swelling ratio was calculated by the following formula: swelling ratio (%) = (Ws − Wi)/Wi × 100, where Ws is the weight of the swollen Thio-CS and Wi is the initial weight of the Thio-CS.
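A minimal sketch of this gravimetric calculation, using the same difference-over-initial-weight convention reconstructed above (the function name and the example weights are illustrative only):

```python
def swelling_ratio_percent(w_swollen: float, w_initial: float) -> float:
    """Percentage swelling ratio from the swollen and initial weights (same units)."""
    return (w_swollen - w_initial) / w_initial * 100.0

# Example: 0.1 g of scaffold swelling to 0.35 g corresponds to a 250% swelling ratio
print(round(swelling_ratio_percent(0.35, 0.10), 1))  # 250.0
```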
Scanning Electron Microscopy.
The morphologies of the samples were examined using scanning electron microscope (SEM) (Hitachi, Tokyo, Japan). As moisturized materials cannot be detected by SEM, the samples were lyophilized. Prior to imaging, the samples were fixed and dehydrated. The Thio-CS was soaked in a primary fixative of 2.5% glutaraldehyde (Sigma) for 2 h. The samples were dehydrated by replacing the buffer with increasing concentrations of ethanol (from 40 to 100%) for 10 min each. They were then dried at room temperature for 24 h and subjected to SEM at voltages ranging from 5 to 15 kV after the samples were sputter coated in white gold.
Delivery of BMP-2 Using Thio-CS.
The freeze-dried and sponge-shaped Thio-CS was placed in 0.1 mg/mL BMP-2 (R&D Systems, Minneapolis, MN, USA) solution. The mixture of Thio-CS and BMP-2 gelated within a few minutes at room temperature, and it was designated Thio-CS-B2. Type I collagen gel (Col-gel), which is widely used as a drug delivery system, was employed as a control for BMP-2 delivery. Col-gel was prepared from an acid-solubilized type I collagen stock solution, which was extracted from rat tail tendon (BD Biosciences, San Jose, CA, USA). According to the recommendation of the manufacturer, the stock solution was adjusted to a final 3 mg/mL collagen solution containing 0.1 mg/mL of BMP-2, and it was gelated at 37 °C (Col-gel-B2). For the evaluation of BMP-2 kinetics, Thio-CS-B2 or Col-gel-B2 was placed in a trans-well, and subsequently the trans-well was assembled with a 24-well plate filled with alpha-minimum essential medium (α-MEM, Gibco, Gaithersburg, MD, USA). Both gels were incubated for the designated time at 4 °C while shaking gently. The amount of BMP-2 in the medium was measured by using a BMP-2 enzyme-linked immunosorbent assay kit (Invitrogen, Carlsbad, CA, USA).
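The cumulative-release percentages discussed later in the paper follow from measurements of this kind by summing the amounts recovered at each sampling time; a small sketch with invented values (chosen only to resemble the sustained-release pattern described, not the actual ELISA data) is:

```python
import numpy as np

# Invented per-interval BMP-2 amounts (ng) recovered from the medium at each sampling time
time_days = np.array([1, 3, 7, 14, 21, 28])
released_ng = np.array([40.0, 20.0, 10.0, 15.0, 10.0, 2.0])
loaded_ng = 100.0  # total BMP-2 loaded into the scaffold (assumed)

cumulative_percent = np.cumsum(released_ng) / loaded_ng * 100.0
for day, pct in zip(time_days, cumulative_percent):
    print(f"day {day}: {pct:.1f}% cumulative release")
```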
Cell Culture and Proliferation Assay.
Preosteoblast MC3T3-E1 cells (1 × 10 4 cells/cm 2 ) were cultured in α-MEM containing the BMP-2 released from the scaffolds as described previously, 10% fetal bovine serum (FBS), 100 U/mL of penicillin, and 100 µg/mL of streptomycin (Gibco) in humidified air containing 5% carbon dioxide at 37 °C. To induce osteoblast differentiation, 50 µg/mL of ascorbic acid and 5 mM β-glycerophosphate were added, and the culture media was changed every 3 days. To investigate the cytotoxicity of the Thio-CS on MC3T3-E1 cells, the XTT assay was performed by using an EZ-cytox cell viability assay kit (Daeil Lab Service Co., Seoul, Republic of Korea) at 1, 4, 7, 14, and 21 days. Briefly, 10 µL of EZ-cytox reagent was added to the cell culture dish. By the action of mitochondrial dehydrogenases, XTT was metabolized to form a formazan dye, which was determined spectrophotometrically by measuring the absorbance at 450 nm using a microplate reader (Bio-Tek Instruments, Winooski, VT, USA). The amount of formazan salt formed corresponds to the number of viable cells contained in each well.
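Viability in such an XTT readout is usually expressed relative to the untreated control; a minimal sketch with made-up absorbance values (not the study's data) is:

```python
import numpy as np

# Hypothetical XTT absorbance readings at 450 nm (triplicate wells)
control_abs = np.array([0.82, 0.85, 0.80])   # untreated cells
thio_cs_abs = np.array([0.79, 0.83, 0.81])   # cells cultured with Thio-CS

# Relative viability as a percentage of the untreated control
viability_percent = thio_cs_abs.mean() / control_abs.mean() * 100.0
print(f"Thio-CS group: {viability_percent:.1f}% of control")
```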
Sodium Dodecyl Sulfate Polyacrylamide Gel Electrophoresis and Western Blotting.
The structural integrity of BMP-2 in the Thio-CS was assessed by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE). In brief, BMP-2-containing Thio-CS was degraded by chitosanase (10 mU/mL) for 1 h at 37 °C. The samples were then mixed with the loading buffer without a reducing agent. The SDS-PAGE was performed with a 15% separating gel in constant voltage mode (100 V). Finally, the gel was stained with 1% Coomassie brilliant blue solution and was destained with an aqueous solution of 10% methanol and 10% acetic acid. To detect the BMP-2, Western blot analysis was performed. The samples underwent 15% SDS-PAGE and were electrotransferred onto polyvinylidene fluoride membranes.
The blots were blocked with a buffer containing 0.05% Tween-20 and 5% skimmed milk and reacted sequentially with primary and secondary antibodies. The primary antibody against BMP-2 (Santa Cruz Biotechnology) and horseradish peroxidase-conjugated secondary antibodies (KPL, Gaithersburg, MD, USA) were used at 1 : 1,000 and 1 : 2,000 dilution, respectively. The antigen-antibody complexes were visualized using the enhanced chemiluminescence image analyzer LAS 4000 mini (Fuji Film, Tokyo, Japan).
Measurement of Alkaline Phosphatase Activity and Calcium Mineral Deposition.
The MC3T3-E1 cells were cultured for 7, 14, and 21 days as described above. The cells were lysed, and the lysates were then used to measure the alkaline phosphatase (ALP) activity at 1, 2, and 3 weeks. In brief, the cell homogenates reacted with ALP assay mixtures containing 0.1 M 2-amino-2-methyl-1-propanol (Sigma, St. Louis, MO, USA), 1 mM MgCl 2 , and 8 mM p-nitrophenyl phosphate disodium. After 10 min incubation at 37 °C, the reaction was stopped with 0.1 N NaOH, and the absorbance of the resulting solution was measured photometrically at 405 nm. Quantitative double-stranded DNA in the solution was measured using a PicoGreen dsDNA quantification kit (Molecular Probes, Inc., Eugene, OR, USA) as instructed by the manufacturer. To measure the level of calcium mineral deposition, alizarin red staining (AR-S) was performed. After 3 weeks in culture, the cells were fixed with 70% ethanol, rinsed five times with deionized water, treated for 10 min with 40 mM AR-S solution at pH 4.2, and then washed with 1× PBS for 15 min with gentle agitation. Stained samples were photographed, followed by a quantitative eluting procedure using 10% (w/v) cetylpyridinium chloride in 10 mM sodium phosphate (pH 7.0) for 15 min at room temperature. The AR-S concentration was determined by comparing it to an AR-S standard curve, with an optical density of 540 nm.
Animal Preparation.
The ethics committee of Chonnam National University approved the animal study for this research. The effects of the Thio-CS scaffold on ectopic bone formation induced by BMP-2 were studied in mice (C57/BL6, 8 weeks of age, obtained from Damool Science, Daejeon, Republic of Korea). Before transplantation, the mice were anesthetized intramuscularly with a mixture of Rompun (20 mg/kg) and ketamine hydrochloride (20 mg/kg). The area of transplantation at the dorsum of the mice was shaved and disinfected. Either a Thio-CS-B2 or a Col-gel-B2 scaffold was subcutaneously transplanted into the dorsum of the mice. Transplantations of Col-gel and Thio-CS scaffolds without BMP-2 were used as controls. After transplantation of the scaffolds (n = 6 per group), the mice were given access to food and water. Six weeks after injection, the animals were sacrificed by intracardiac injection of KCl, and the implants were isolated and fixed in 10% formaldehyde solution for subsequent micro-computed tomography (microCT; Skyscan, Kontich, Belgium) and histological analysis. The bone volume (BV) and the bone mineral density (BMD) of the isolated implants were determined by using microCT in the cone-beam acquisition mode. The X-ray source was set at 50 kV and 200 µA with a pixel size of 17.09 µm. The exposure time was 1.2 sec. Four hundred fifty projections were acquired over an angular range of 180° (angular step of 0.4°). The acquired tomographic images were transformed into a sliced volumetric reconstruction using the Nrecon program (Skyscan) and analyzed using 3D CT analyzer software (CTAN, Skyscan). The BMD of the isolated samples was calibrated from Hounsfield units (HU) using the 0.25 and 0.75 mg/cm3 hydroxyapatite density phantoms. For histological analysis, isolated specimens were serially sliced, decalcified in 8% formic acid, and embedded in paraffin wax. Five micrometer-thick sections were stained with hematoxylin and eosin (H&E) for histological assessment.
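The phantom-based BMD calibration mentioned above is a two-point linear mapping from attenuation to mineral density. The sketch below illustrates that mapping; only the 0.25 and 0.75 phantom densities come from the text, while the HU readings and the conversion function are invented placeholders:

```python
# Two-point linear calibration of bone mineral density (BMD) from Hounsfield units (HU)
phantom_hu = (500.0, 1500.0)   # measured HU of the two calibration phantoms (placeholders)
phantom_bmd = (0.25, 0.75)     # phantom densities, as given in the text

slope = (phantom_bmd[1] - phantom_bmd[0]) / (phantom_hu[1] - phantom_hu[0])
intercept = phantom_bmd[0] - slope * phantom_hu[0]

def hu_to_bmd(hu: float) -> float:
    """Convert a sample's mean HU value to BMD using the phantom calibration line."""
    return slope * hu + intercept

print(hu_to_bmd(1000.0))  # ~0.5 for these placeholder HU values
```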
Statistical Analysis.
The difference between the Thio-CS and the Col-gel in the release kinetics of BMP-2 was statistically compared by using the ANOVA test. The statistical differences in biocompatibility, ALP activity, and AR-S were analyzed by Student's t-test. The results are expressed as the mean ± standard deviation (SD) from three or more separate experiments. Values of *p < 0.05 and **p < 0.005 were considered statistically significant.
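A minimal sketch of these two tests with SciPy, using invented replicate values purely to show the mechanics (not the study's measurements):

```python
import numpy as np
from scipy import stats

# Invented replicate measurements (n = 3 per group)
thio_cs = np.array([1.8, 2.0, 1.9])
col_gel = np.array([1.5, 1.6, 1.4])
control = np.array([1.0, 1.1, 0.9])

# Student's t-test for a pairwise comparison (e.g., ALP activity)
t_stat, p_ttest = stats.ttest_ind(thio_cs, col_gel)

# One-way ANOVA when more than two groups are compared (e.g., release kinetics)
f_stat, p_anova = stats.f_oneway(thio_cs, col_gel, control)

print(f"t-test p = {p_ttest:.4f}, ANOVA p = {p_anova:.4f}")
```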
Fabrication of Thio-CS and Physicochemical Properties.
The reagent 2-IT has been widely used for the immobilization of thiol groups to primary amino groups of proteins [37]. Since moisturized materials cannot be detected by SEM, unmodified CS and Thio-CS were lyophilized. As shown in Figure 1(a), the change in pore structure of Thio-CS relative to unmodified CS occurred through the increase in the molecular weight of CS by disulfide bonding between the polymer chains. This is consistent with a previous report that the pore size of a substance is directly related to the molecular weight of the material [38]. As we reported previously, the optimum ratio between 2-IT and CS for the formation of disulfide bonds is 0.1 mg/mL and 1% (w/v) [34].
However, this ratio does not take account of the air-oxidation time, which can affect the formation of disulfide bonds between CS polymers. Therefore, in this study, we investigated the variation of the disulfide bonds and the free thiols in the conjugate or the supernatant, according to the air-oxidation time. As shown in Figure 1(b), the free thiol content in the supernatant decreased with the air-oxidation time. However, the free thiol and disulfide groups in the conjugate increased. Moreover, the amount of residual amino groups in CS decreased in the trinitrobenzene sulfonate assay (data not shown). These results suggest that the thiol moiety of 2-IT was transferred to the amino groups of CS, that cross-links can then form between CS polymers, and that these phenomena depend on the air-oxidation time.
The cross-linking of the polymeric chains, such as inter- or intramolecular disulfide bonds, might result in high stability for drug delivery systems. Moreover, the formation of the disulfide bonds increased with pH and was saturated at pH 7 (Figure 1(c)). The swelling property of a scaffold plays crucial roles in cell growth, cell adhesion, nutrient perfusion, and tissue regeneration [39]. Many researchers have attempted to measure the swelling ratio of materials using the gravimetric method [40]. As the purpose of this study was to evaluate the potential use of Thio-CS for BMP-2 delivery, the swelling property of Thio-CS was compared with that of Thio-CS-B2, which contained BMP-2. Unmodified CS was used as a negative control. The weight of the Thio-CS increased significantly, up to 3.5 times its initial weight within 10 min, and this was maintained continuously. A similar phenomenon was observed with Thio-CS-B2, although the rate was slightly lower than that of the Thio-CS. However, the weight of unmodified CS was lower (approximately 65% at 60 min) than that of the others (Figure 1(d)). These results suggest that BMP-2 does not affect the swelling property of Thio-CS.
Interaction between Thio-CS and BMP-2.
As the swelling property is insufficient to explain the interaction between the Thio-CS and BMP-2, the structural integrity was investigated. Thio-CS-B2 was incubated with chitosanase, and the reactant was subjected to SDS-PAGE and Western blotting. It was difficult to assay Thio-CS without chitosanase because its viscosity was too high to subject to SDS-PAGE (data not shown).
The results showed several bands in the chitosanase and the Thio-CS groups (Figure 2(a)). Basically, Thio-CS is a polysaccharide polymer, but it may contain other impurities. However, BMP-2-like molecules were not observed in the Thio-CS. This was confirmed by Western blotting with a specific BMP-2 antibody (Figure 2(b)), indicating that Thio-CS does not contain BMP-2-like molecules. The Western blot analysis showed that the BMP-2 signal was observed only in the BMP-2 and the Thio-CS-B2 groups. The size of the control BMP-2 peptide that was detected was approximately 25 kDa. However, the signal in the lane of Thio-CS-B2 was detected in the stacking gel area, indicating that it was not mobilized. In other words, if there were no interaction between BMP-2 and Thio-CS, BMP-2 should have been mobilized into the separating gel area. These results suggest that BMP-2 interacted with Thio-CS, although the mechanism is unknown, and that BMP-2 was located within Thio-CS. It is likely that this interaction between BMP-2 and Thio-CS contributes to delaying the release of BMP-2 from Thio-CS, rather than BMP-2 being simply absorbed.
In Vitro Release of BMP-2.
Based on the physicochemical properties of Thio-CS described above (Figure 1 and [34]), the release of BMP-2 was expected to be delayed. Therefore, the release kinetics of BMP-2 from Thio-CS were measured for 28 days and compared with those of the Col-gel, which is widely used in the delivery of BMP-2 (Figure 2(c)). As expected, the release of BMP-2 was delayed in the Thio-CS group (70% within 7 days) compared with that of the Col-gel (85% within 7 days). The cumulative BMP-2 released from the Thio-CS and the Col-gel almost reached a plateau at 21 and 14 days, respectively. These results suggest that the Thio-CS group showed a more sustained pattern of release than the Col-gel group.
Biocompatibility of Thio-CS.
The biocompatibility of Thio-CS or CS was evaluated with an XTT assay. The assay system is based on the absorbance of the cell lysate and is widely used to measure cell viability or proliferation because the reagent is metabolized only by the mitochondria of living cells. Figure 3 shows the viability of the osteoblast cells, which were cultured with Thio-CS or CS gel for 21 days. Untreated cells were used as a control. The cell population in the control increased continuously with the cultivation time, and the pattern of cell proliferation in the Thio-CS group was similar to that in the control. However, cell proliferation in the CS-treated group decreased after 14 days. The decreases seem to be related to the gelation or degradation properties of the CS gel, because the CS gel was more soluble or fragile in the culture medium than the Thio-CS gel. Although the CS gel itself negatively affected cell viability, the Thio-CS gel did not. Based on these results, the in vivo study proceeded with Thio-CS.
Induction of Osteoblast Differentiation with BMP-2 Delivery Using Thio-CS.
The ALP activity and the level of calcium deposition are important considerations for evaluating osteoblast differentiation. ALP activity, which is widely used as a marker for early differentiation of osteoblastic cells and is generally expressed before mineralization [41], was measured after 1, 2, and 3 weeks in the MC3T3-E1 cell culture with Thio-CS-B2. The Thio-CS and the Col-gel-B2 were used as controls. As shown in Figure 4(a), the ALP activity was significantly increased in the cells cultured with the Thio-CS-B2 and the Col-gel-B2 compared with that of Thio-CS only. These results suggest that BMP-2 in Thio-CS still retains its biological ability to enhance ALP activity. Calcium mineral deposition is a marker of late differentiation of osteoblastic cells [42]. The level of calcium mineral deposition after 3 weeks in culture was investigated by AR-S. The results showed that calcium deposition in the Thio-CS-B2-treated group was increased 4.2-fold compared with that of Thio-CS (Figure 4(b)). Thus, BMP-2 delivered in the Thio-CS has a stimulatory effect on the differentiation of osteoblastic cells and on matrix mineralization.
Induction of In Vivo Ectopic Bone Formation by Thio-CS-B2.
BMP-2 is well known to produce ectopic bone formation. This study also determined whether Thio-CS-B2 produces ectopic bone in vivo in 8-week-old C57/BL6 mice. Col-gel containing BMP-2 was used as a comparison group. When Thio-CS-B2 was separately implanted into the left and right sides of the dorsum of mice, newly formed ectopic bones were observed at both implant sites after 6 weeks. The new bones in the Thio-CS-B2 group were large, and the bones from the two sides were fused into one. Total bone volume in the Thio-CS-B2 group was higher (1.8-fold) than that in the Col-gel-B2 group (Figures 5(a) and 5(b)). The enhanced bone formation with Thio-CS-B2 might result from properties of Thio-CS that are superior to those of Col-gel for bone formation. In our previous report [34], Col-gel showed a limited swelling property with scant physical change in porosity, suggesting a limited capacity to absorb BMP-2. The present study also showed that Col-gel-B2 has a burst release pattern of BMP-2 compared with Thio-CS-B2. On the other hand, Thio-CS produced a sustained release of BMP-2 due to the interaction between the gel and the protein (Figure 2). These findings indicate that Thio-CS has an advantage for bone formation compared to Col-gel in that Thio-CS can absorb and retain a greater amount of BMP-2. However, the bone mineral density of the ectopic bone formed by Thio-CS-B2 was not significantly different from that formed by the Col-gel-B2 (Figure 5(c)). This result suggests that bone quality would not differ between the two groups. To obtain a more accurate assessment of bone formation, histological analysis was performed with H&E staining. There was less ectopic bone formation in the Thio-CS group (i.e., without BMP-2, Figure 6(a)). However, new bone with residual scaffolds was observed in the Col-gel-B2 and Thio-CS-B2 groups (Figures 6(b) and 6(c)).
Conclusions
In this study, we developed a Thio-CS scaffold for BMP-2 delivery and bone formation. The Thio-CS was made by the modification of CS with 2-IT. The 2-IT contributed to in situ gel formation of CS via disulfide bonding between the 2-IT-derived thiol groups of the CS polymers, and this disulfide bonding was affected by the air-oxidation time and the pH. The degree of swelling was not affected by BMP-2 addition. Moreover, SDS-PAGE and Western blotting analysis revealed an interaction between BMP-2 and Thio-CS. This interaction may contribute to delaying the release of BMP-2 from Thio-CS. Due to the aforementioned properties of Thio-CS, the release velocity of BMP-2 from Thio-CS was slightly delayed compared with that of the Col-gel. The BMP-2 released from Thio-CS induced osteoblastic differentiation of MC3T3-E1 cells, and the activity of ALP and the level of calcium mineral deposition were also increased. Thio-CS did not show any cytotoxicity in vitro in XTT assays with MC3T3-E1 osteoblastic cells. Based on these results, in vivo bone formation studies were performed, and the results showed that BMP-2-containing Thio-CS induced ectopic bone formation to a much greater extent than either the BMP-2-containing Col-gel or the control (no BMP-2). Collectively, these results suggest that the Thio-CS delivery system might be useful for delivering the osteogenic protein BMP-2 as a biocompatible synthetic polymer and that it may represent a promising application in bone regeneration strategies.
Learning motivation and giftedness in sociocultural diverse Latin America and the Caribbean societies
This theoretical review aims to integrate state-of-the-art learning motivation theoretical concepts within the context of gifted and talent development models for native children living in Latin America and the Caribbean sociocultural diverse societies. Motivation as a determinant factor and a promoter of gifted achievement is analyzed. Also the relation between motivation, outstanding performance and underachievement is discussed, and tendencies found in the social-emotional development of the gifted linked to motivation are explored. Final remarks are given on the significant role of motivation in the achievement of gifted and talented children living under diverse socio-cultural influences that bias their performance on standardized measures. Recommendations highlight the importance of further research, in order to reach a convergence of theoretical and practical elements needed to promote Latin American children's talent.
Since the birth of Psychology as a science, psychologists have tried to understand gifted individuals (Galton, 1869/1976; Terman, 1954). Taking into consideration the unitary conception of intelligence, prevalent until the beginning of the 20th century, intelligence was considered the determinant factor in the conceptualization of giftedness (Sternberg & Davidson, 2005), and one-dimensional IQ measurements were established. Later, Guilford (1959) and Cattell (1971) stated that intelligence could not be understood from a one-dimensional approach, and a multi-dimensional conceptualization was considered, involving abilities, aptitudes, personality characteristics and environmental conditions. Also, in relation to the giftedness concept, Renzulli (2005), Gagné (2013), and Heller (2010) proposed a multi-factorial model called the three-ring model. The model followed a proposal centered on the interaction of high levels of general ability, creativity, and task commitment. All of these traits could be developed in children beginning in primary education, if they are granted opportunities for self-study, in which they might learn proper methodology and creative strategies (Renzulli & Reis, 1985).
As new methods and techniques of genetic research evolved, and research paradigms changed, the nature-nurture debate also returned to the gifted field (Heller, 2010). Afterwards, Sternberg and Davidson (2005) reaffirmed the multidimensionality of the giftedness concept in their Conceptions of Giftedness, including the analyses of 17 models of giftedness which had related variables among them. Sternberg and Davidson (2005) also clustered theories on giftedness as either implicit or explicit.
Implicit theories describe the beliefs that guide individuals' attitudes and behaviors, related to the impression formation process that is context-dependent. Social cognition theory states that beliefs might determine the attitudes and the willingness to be engaged in certain behaviors (Job, Dweck, & Walton, 2010). Implicit theories consider the need to identify the talent domain as the base of individual or social development. Moreover, they underline the essential role of motivation in the development of giftedness, as well as the importance of the developmental path of the talent, taking into consideration the social forces of the context (Sternberg & Davidson, 2005). Dweck's model of motivation (Dweck, 1986; Job et al., 2010; Olson, Dunham, Dweck, Spelke, & Banaji, 2008) states that there is a relation between the implicit theories of students and their self-motivational process, specifically in the kind of goals that they set for themselves (Blackwell, Trzesniewski, & Dweck, 2007; Cabezas & Carpintero, 2007; Job et al., 2010; Valenzuela, 2007). Heller, Finsterwald, and Ziegler (2010) revealed that teachers' implicit theories formed the basis for their ideas about their pupils' personality traits, qualities, attitudes, and abilities, which can easily contain pre-judgments and stereotypes. Moreover, according to Dweck's (1986) Motivation Process Model, implicit theories are suspected of exhibiting a significant influence on the development of gender role stereotypes among both girls and boys. Among these are the implicit personality theory of intelligence, theories about the motivational orientation, and student attributions of success and failure. All these components of the motivational processes of students are influenced by teachers and their implicit theories. Heller and colleagues (2010) revealed that there is a strong resemblance between math and physics teachers' implicit theories, whereby both groups of teachers demonstrated comparable gender-role-typical cognitions, due to the tendency to give socially desirable answers. The gender-related differences in teachers' implicit theories were even stronger.
Explicit theories included formal models of giftedness formulated by psychologists, including approaches from cognitive theories, development theories, and theories centered on the specific domain. Common topics among explicit theorists involve questioning the cognitive basis of giftedness, in terms of what the gifted person can do in order to be identified as gifted (Sternberg & Davidson, 2005), and emphasize the importance of empirical studies as determinants in the understanding of giftedness and talent. Explicit psychological approaches towards giftedness are based on developmental theories. Gruber (1986) emphasizes the need to monitor development from infancy to adulthood, in order to have a better understanding of gifted individual development. He stated that it is necessary to study a small number of these people, in order to determine the type of talent that could be transformed into an effective creative product for the aesthetic enrichment of the human experience, for the improvement of our understanding of the world, or for the improvement of the human condition, in order to improve our survival possibilities as a species.
Moreover, Gruber (1986) stated that individual interests and behaviors are essential for the development of a gifted or talented individual, and that each personal resource is needed in order to cope with the difficulties along their development. Gruber also underlines the importance of the moment and the social and historical time, as do Abuhamdeh & Csikszentmihalyi (2012). Renzulli (2005) recognized that we might not consider giftedness as an absolute concept. For Li & Csikszentmihalyi (2014) giftedness is an ability that emerges along the lifespan, and talent development is a sequential movement by stages. They propose that it depends on the domain, taking into consideration that reaching certain excellence levels in a particular domain might not be achieved by everybody. Also for Feldman (1986) giftedness is the result of the sustainable coordination among intersecting groups of forces, including historical, cultural, and social forces, together with individual characteristics. Walters & Gardner (1986) added the concept of crystallized experiences, derived from Gardner's (1999) theory of multiple intelligences. According to Gardner (1999) every individual with average abilities is able to develop nine forms of intellectual performance: linguistic, musical, logic-mathematical, spatial, corporal-kinesthetic, interpersonal, intrapersonal, existential, and naturalistic. These multiple forms of intelligent performance are present at early ages as ability types for processing information. Moreover, during the so-called crystallized experiences, latent abilities of non-used intelligences might be activated, modifying the activities in the individual's life. Albert (1992) considered that giftedness exhibits a biological and experiential nature, with a focus on the family and the background, and his longitudinal study on families of exceptionally gifted boys provided information about talent development in relation to both biological and context influences (Runco, 2014). Bloom (1985) also focused on a study of talent development in children, examining the processes along which they achieved excellence as adolescents and adults. The studied groups included piano performers, sculptors, mathematicians, neurologists, Olympic swimmers, and tennis champions. All of them achieved excellent performances before reaching 35 years of age. Results of Bloom's study provided strong evidence that a long and intensive process of encouragement, nurturance, education, and training is necessary in order to achieve extreme levels of talent in particular fields. Therefore, if we could reproduce the favorable learning and support conditions that led to the development of extremely talented individuals, we could produce great learning almost everywhere. He stated that the basic differences among human beings in terms of learning are relatively small. But, for the types of learning that require enormous time, motivation, and the like, the difference is significant (Runco, 2014).
These developmental theorists underline the importance of motivation in talent development along the life-span, taking into consideration the type of specific domain. In this sense, gifted individuals are those that can usually excel in a specific domain, under environmental factors which support excellent performance. Explicit theorists in the line of the specific domain are Stanley & Benbow (1986) and Bamberger (2006), among others (Schlaug, Forgeard, Zhu, Norton, & Winner, 2009; Simonton, 2009). Stanley and Benbow (1986; Kell, Lubinski, & Benbow, 2013) studied precocious youth who excelled in mathematics and were identified at early ages due to their high scores in math achievement tests, and who also participated in enrichment programs. Bamberger (2006) studied individuals with musical talents who were identified as musical prodigies, following them into adulthood and focusing on the internal representations they had of the musical structure. During the analysis of the childhood (musical prodigy) to adulthood (talented adult) transition, Bamberger studied adolescents with excellent achievements who found their way to coordinate the different representation networks from precocious infancy to adulthood, including a combination of perspectives from cognitive and developmental psychology. The criteria for cognitive developmental progress were characterized as transformations that occur over time in how individuals organize their perceptions and the strategies they bring to bear in constructing their understandings of the world around them. Bamberger (2006) stated that musical developmental studies have typically focused on progress as meaning the capacities of children to abstract, name, measure, and hold musical elements constant across changing contexts. However, as an educator and as a researcher, Bamberger proposed that rather than trying to reach a consensus about what counts as progress in the course of musical development and what determines a hearing that counts as better than another, it is more productive to continue refining the debate on the meaning of developmental progress in relation to music.
Motivation, giftedness, and talent development
The promotion of talent development in poor, socioculturally diverse countries, such as those of the Latin America and the Caribbean region, tends to close the historically established gaps in education. These gaps are related to exclusion variables, such as gender, economic income, education, ethnic origin, and diversity (CEPAL/UNICEF TACRO, 2010). However, psychologists working with gifted and talented school-age children in Latin America and the Caribbean have also dealt with children attending schools of excellence who do not take advantage of the educational opportunities they are being offered (Blumen, 2007). Apparently, a gifted girl or boy needs more than a stimulating environment to develop their talents. It is also necessary to be aware of the context, as well as to be intrinsically motivated to interact with it. If the child is passive in his or her environment, or does not pay attention to the external stimuli, he or she might not develop talents; this becomes a problem related to intrinsic motivation.
Talent development may only take place when the individual actively interacts with the environment and is open to the stimuli. Actually, it is thought that development is the result of reciprocal interactions between the organism and the environment, which actualizes the genetic potential of the organism. Therefore, high interaction between the organism and the environment leads to high genetic potential realization. Less interaction will lead to a latent genetic potential which cannot be developed (Bronfenbrenner & Ceci, 1994). Therefore, in order to maximize the genetic potential, children need not only a supportive context which provides opportunities to develop and grow, but also motivation in order to interact with the environment and take advantage of the opportunities offered (Blumen, 2009).
Today, the role of learning motivation in the development of special talents is recognized. Some authors consider motivation as an essential factor in giftedness (Mönks & Katzko, 2005; Renzulli, 2005), while others consider it as a separate factor which determines the amount of energy which is directed to learning activities in a determined domain (Gagné, 2013). The analysis of both positions is presented. Renzulli (2005) and Mönks (Mönks & Katzko, 2005) are among those that consider motivation as an associate factor in giftedness. Among the components of the three-ring model, Renzulli is one of the first theorists that focus attention on a manifestation of the motivation variable, naming it task commitment. Although motivation is often defined in terms of a general energetic process that provokes answers in an organism, task commitment represents the energy which emerges while coping with a problem, task, or performance area in particular. The terms often used to describe task commitment are perseverance, hard work, dedication, self-confidence, belief in one's own ability to develop an important product, and applied action on an object or situation that generated individual interest. Mönks & Katzko (2005) stated that Renzulli's components are personality dispositions which need a social context in order to be stimulated and developed. In this context are the family, the school or labor place, and the community.
Motivation as a determinant factor
Scientific studies with people exhibiting exceptional achievements consistently demonstrated that the precursors of an original and unique work are a special fascination with and commitment to the selected topic in the domain field, together with perceptive ability and the ability to identify significant problems (Albert, 1992). This motivation to become involved in an activity based on self-interest is generally called intrinsic motivation. When somebody exhibits self-determination and competence towards a certain task, intrinsic motivation emerges and monitors the action.
The main criticisms of traditional approaches to giftedness argue that they assume a mind-context dichotomy in their interpretation (Sternberg & Davidson, 2005) and sustain the student-context polarity, implicitly or explicitly, trying to explain the impact of the individual on his or her context, or vice versa. However, Barab and Plucker (2002), together with Snow (1997), stated that this dual perspective is inadequate to explain the interaction between persons and situations as integrated systems. Moreover, Snow emphasizes that a more productive analysis might be to examine the processes connecting persons and situations, those which operate at the interface.
Studies in the past 20 years show the weakness of traditional approaches towards ability and talent based on learning styles and thinking styles, the importance of context, and other factors (Runco, 2014; Simonton, 2009). Nowadays, we know more about achievement motivation than we knew a generation ago. However, teachers continue using learning strategies based on old conceptions (García Cepero & McCoach, 2009), which lead them to perceive the gifted student as the teacher's assistant in the classroom, without seeing the need to organize special enrichment or acceleration programs to contribute to their development.
Motivation as a promoter
A central element in the criticism of the conceptualization of motivation as a determinant cognitive factor of giftedness and talent is the conviction that giftedness cannot be characterized in purely cognitive terms as a stable internal trait, nor does it have a purely environmental explanation. Giftedness is the visible result of the interaction between the individual and his or her environment. In this line, Pea and colleagues (2012) believe that the ability to act intelligently is achieved more than possessed. This perspective points strongly to ecological psychology studies, which bring situated cognition and distributed cognition to bear on student learning.
Moreover, this perspective seems to follow the line of systemic theories of creativity, such as that of Csikszentmihalyi (1988), who proposed a systemic theory of creativity which emphasizes the role of the individual, as well as the field and domain in which he or she tries to create. He proposes that considering how an individual operates in a certain domain or field, rather than how he or she operates in a specific area, constitutes a fundamental change in relation to how he or she thinks and acts.
From a motivational perspective, the motivation level of a child determines the frequency and persistence of his or her interactions with the immediate environment, and the actualization of his or her genetic potential. Taking into consideration that motivation for competence in a certain field orients the person towards interactions that might provoke his or her further development, motivation towards competence might be considered the primary driving force of development (Zevalkink, Riksen-Walraven, & Bradley, 2008). At school, lack of motivation for academic tasks is the most common cause of low achievement among gifted students. Differences in interest and motivation among children are obvious to school teachers, and might be detected from the preschool years onwards. Moreover, studies by Zevalkink and colleagues (2008) state that motivation towards competence is significantly affected by early experiences. Therefore, early experiences might play a determinant role in the development of talents, because they might provide a motivational base for the interactions between children and their environment, as well as for the actualization of their genetic potential.
In this sense, the quality of parental nurturing might be considered in terms of emotional support, respect for the child's autonomy, structure and limits for behavior, and high-quality instruction (Zevalkink et al., 2008). The first two elements are basic, since they promote a sense of security and competence and will motivate children towards future interactions with the environment.
In the case of children living in Latin America and the Caribbean, 45% of children are affected by at least one moderate to severe deprivation, which means that almost 81 million people aged under 18 suffer from child poverty (CEPAL/UNICEF TACRO, 2010). However, children living under the poverty line in the region are not just deprived of the general standards of well-being established in their societies; they are also largely unable to meet their basic needs, which endangers their ability to take advantage of future opportunities. There are approximately 200 million people under 18 years of age in Latin America and the Caribbean, and poverty affects approximately 81 million children aged 0 to 18 (CEPAL/UNICEF TACRO, 2010). Moreover, in Latin America, extreme poverty affects 51% of children aged 6 to 12 in rural areas.
Therefore, poor children in Latin America and the Caribbean are often not detected as gifted or talented, since their potential talents are not actualized at early ages, and it is probable that they might stay latent throughout their life-span (Blumen, 2013b). However, if these children exhibit personal resources, such as resilience and high intellectual ability, it is possible that they might find their way to develop their talents as adolescents, on their way to adulthood. Although some theorists maintain that children living in poverty do not develop exceptional talents, studies in Peru (Blumen, 2013a, 2013b; Fleith & Soriano de Alencar, 2007) show that some youth manage to develop them, thanks to the support of a teacher, mentor, or specialized school which assumes the challenging responsibility of helping them.
Motivation and outstanding performance
Understanding the gifted and talented from an individual-level analysis has limitations. The theory of attribution suggests that stable internal attributions (i.e., I have success because I am intelligent and talented) might be difficult to maintain in changing environments, where unstable internal attributions (i.e., I have succeeded or failed depending on my own effort) bring a sense of responsibility in certain situations and lead to achievement motivation. However, if students who do not achieve, and who tend to perform below expectations, believe that they are not talented, they might not reach success, independently of their level of effort (Job et al., 2010). The establishment and maintenance of stable internal attributions of success and failure produces complications when the label good student or bad student is attached to a student. Moreover, teachers tend to treat students based on their own expectations or on the perceived ability of the students, significantly influencing the increase or decline of their achievement, as the self-fulfilling prophecy states (Rosenthal & Jacobson, 1992).

Necka (1986) proposes the following causes as energizers of productive-creative behavior: (a) instrumental motives, where creative behavior is a way to reach an end; (b) motives of play, by which creative behavior leads to a state of internal satisfaction; this type of motivation is also seen as an aspect of the self-actualization process; (c) intrinsic motives, in which creative behavior increases the person's level of competence and strengthens the sensation of having the external world under control; and (d) expressive motives, by which creative behavior makes it possible to communicate one's own thoughts and feelings to others (p. 137). We will illustrate Necka's proposal with an example: a verbally gifted Nobel laureate who seeks fame and fortune through the composition of literary works (an instrumental motive), but who at the same time has a strong sense of mission (intrinsic motivation) or the desire to reach others to communicate something (expressive motivation). It is also possible that people show different combinations or motivational patterns, with weights in different areas, following their inclinations, within an individual motivational structure. Moreover, the structure might change as time passes. Thus, the novelist's initial motive, making money, is replaced by the sensation of doing something important for humanity. As Necka stated, different types of motivation towards creative production include the combination of external and internal factors.

Runco's (2014) proposal of personal creativity, close to the motivation theories, states that a girl or boy may not choose to invest her or his maximum effort in building an original interpretation of something unless she or he is motivated to do so. Moreover, he considers Piaget's (1976) theory as a theory of ability or of potential, since it describes what children are able to do, but does not guarantee that children will necessarily do it, establishing a difference between potential ability and actual performance. Rubenson and Runco (1995) stated that there are theorists who focus on creativity and talent and include motivation mainly as intrinsic rather than extrinsic, although both are present. The question is whether this motivation depends on cognition or on cognitive evaluation. Although this proposal is controversial, it makes sense that individuals are not motivated by things that they do not understand,
and that comprehension requires a cognitive evaluation (Lazarus, 1991). Moreover, they note that Piaget (1976) held that children adapt because they are intrinsically motivated towards understanding. In this case, motivation precedes and initiates the cognitive effort.
Applying this to the role of assimilation in creative work, it might occur that certain situations attract the attention of creative persons, and as a result they may orient towards the task, and even continue exploring it, putting effort into building significant interpretations or reinterpretations. This posture is consistent with studies showing that children with creative talent are generally deeply interested in, and constantly thinking about, the topic that really attracts them (Rubenson & Runco, 1995). Gifted children tend to be highly persistent and occasionally are so interested in a certain domain or problem that they invest all of their time in it. Consequently, they reach a solid knowledge base in the form of domain-specific competencies, which allows them to become creative, productive adults.
Although we do not have a unitary theory of motivation to explain all motivated behaviors, we may use some theoretical models to explain talented productions. For Lens, Vansteenkiste, and Simons (2009), one of the most important motivational constructs related to the gifted and talented is intrinsic motivation (Ryan & Deci, 2000), which reflects the natural human propensity to learn and assimilate. Intrinsic motivation is proposed by a number of authors as a determinant of outstanding achievement, since it affects curiosity, competence, and efficacy, as well as achievement motivation.
One of the best manifestations of intrinsic motivation is intellectual or epistemic curiosity (Lens et al., 2009), which is manifested in gifted children but seems to diminish over the course of schooling, due to the absurd tendency to decontextualize the topics studied in the regular curriculum (Blumen, 2013b). Stanley (Kell et al., 2013; Stanley & Benbow, 1986) linked curiosity with the so-called academic hunger, by which gifted students are able to tolerate uncertainty, and even need to seek new challenges, assuming risks while abandoning comfortable positions. This factor is also defined as the power to create (Treffinger, 2008), since gifted and talented children exhibit high levels of persistence and do not rest until they finish something of interest. Treffinger (2008) also included curiosity among the variables which facilitate the emergence of creative behavior, which are the following: curiosity, the willingness to respond freely in stimulating situations, openness to new or unusual experiences, the willingness to take risks, sensitivity to problems and the willingness to solve them, tolerance of ambiguity, and self-confidence.
Another significant factor in the emergence of outstanding productions is the need for competence and efficacy in task solutions, which constitutes a challenge and a significant factor for both school and work. In this sense, tasks perceived as too easy or too difficult might not be motivating, and the girl or boy may not expect to feel competent or efficient in his or her performance. It is important to state that the academic experiences most stimulating for self-concept and self-efficacy might be those that can be internally attributed to one's own abilities or effort. Moreover, social comparison provides feedback, in the sense that they are more able than their peers and have fewer difficulties in understanding and solving problems, both relevant aspects for self-efficacy (Bandura, 2012; Graham, 2011).
In this sense, gifted and talented children not only learn fast, but also learn differently from their non-gifted peers of the same chronological age. They seem to invent new and creative ways to solve problems. For instance, while solving an algebraic problem they seem to intuitively see the relationships between the numbers, instead of solving it the algorithmic way (Feldhusen, 1998). Making progress at their own rhythm means less need for adult support in the domain area, but also more time to learn by themselves. They might also need support in unexpected ways. We have mathematically gifted children who learn trigonometry for fun, but exhibit difficulties with the basic arithmetic operations of multiplication or division. Winner (2000) uses the term rage to master to describe gifted children's need for competence and efficacy, as key elements of the intrinsic motivation to master the area of interest. It is a term of an obsessive nature, which orients the child to focus intensely on a certain topic, and to consume information and develop new competencies. Winner (Forgeard, Winner, Norton, & Schlaug, 2008; Schlaug et al., 2009; Winner, 2000) states that the intellectually gifted show intense levels of concentration and an obsessive interest in their domain area. These students work on after-school projects not to get a good grade, but because they are intrinsically interested. Work and play are inextricably connected for them. It is very difficult to pull them away from their work. However, if the school curriculum does not satisfy their area of interest, or is perceived as too easy, it will be very difficult to motivate their interest.
For many native children in Latin America and the Caribbean, motivation may be derived from social organization. Therefore, top-down classroom organization is often found to be ineffective for children belonging to native cultures that depend on a sense of community, purpose, and competence in order to engage. Horizontally structured, community-based learning strategies often provide a more structurally supportive environment for motivating native children, who tend to be driven by social/affective emphasis, harmony, holistic perspectives, expressive creativity, and nonverbal communication (Blumen, 2009; Maynard, 2004).
In ethnically and linguistically diverse native communities, children often display a sense of community-wide expectation of participation in the activities and goals of the greater group, rather than becoming engaged in individualized aspirations of success or triumph. They can also draw on their parent-like interactions with siblings to assist their younger counterparts without being prompted by authority figures (Maynard, 2004). Moreover, through observation techniques and integration methods children learn from a more skilled other (Olson et al., 2008), such as an older sibling. The older child will guide the younger learner. Learning through play encourages horizontally structured environments through alternative educational models such as Intent Community Participation (Rogoff, 2011).
Formal Westernized schooling is reshaping the traditionally collaborative nature of social life in native communities, with variations in motivation and learning (Lillemyr, Søbstad, Marder, & Flowerday, 2010). Taking into consideration the low performance of Latin American children on international academic assessments, we can infer the dramatic situation that native gifted children experience on a daily basis. They are forced to attend schools in which the poor level of teacher training, the inadequate motivational techniques, and the low educational quality diminish their motivation towards school at alarming levels. Hence, there are Secondary school students exhibiting lower creative production than they exhibited in their Elementary years (Blumen, 2007). Therefore, we pose the classic question: why do they need to attend school if at home they can learn more?
Achievement motivation is also a determinant element in the emergence of outstanding productions. For students oriented towards success, achievement motivation generates a positive tendency that moves them to action. For students who exhibit anxiety towards evaluation, or fear of failure, the presence of this variable widens the inhibitory tendency (Blumen, 2009). For the majority of gifted and talented children who attend regular classes, the tasks might be too easy to be motivating, since they may excel over their peers. However, this situation does not constitute an incentive for gifted and talented students, because it is not perceived as an achievement resulting from their own effort, given that the effort needed to achieve is minimal. Therefore, it is suggested to cluster them by ability, with peers of equivalent ability, in order to improve their achievement motivation towards goals in contexts that constitute a challenge for them. For Lens et al. (2009), another important element to take into consideration regarding achievement motivation is the future-time perspective, since gifted children tend to present different future-time perspectives from those of their peers. They tend to easily perceive the instrumental value of their present actions, which increases their motivation.
Although intrinsic motivation is relevant for giftedness and talent development, extrinsic motivation is also important, particularly at the Secondary education level, due to the need to exhibit outstanding academic performance in order to enter college. In this sense, Dweck (1986) differentiates among three types of achievement goals: (a) the aim to develop competence, (b) the achievement of competence, and (c) the exhibition of competence. For Pintrich and Schunk (1996), the first two types of goals are learning goals (intrinsic), while the third one is a performance goal, reached with extrinsic motivation.
However, the influence of extrinsic motivation on talent development generated controversy in the beginning. Amabile (1990) even adopted an extreme position, stating that extrinsic motivation is absolutely negative for creative performance and that the best way to promote children's creativity was to immunize them against extrinsic motivation. For her, the crucial element for creative production was intrinsic motivation, since it provides internal satisfaction as well as a sense of wellbeing. Moreover, extrinsic motivation is generally caused by factors such as money or gifts, and might undermine the sense of autonomy if it is perceived as externally controlled (Amabile, 1990). However, Amabile and Kramer (2011) later stated that any extrinsic factor which underpins the sense of competence or which provokes deep commitment to the task might have a reinforcing effect on intrinsic motivation. This positive combination of apparently opposite motivational types might be called the extrinsic at the service of the intrinsic (Amabile & Kramer, 2011). More information about the synergic effect of extrinsic motivators on intrinsic motivation is still needed, since high commitment to the task might be the result of this synergic effect.
For Lens et al. (2009), the optimal motivational level for giftedness and talent development is achieved through the combination of a high orientation towards learning goals with a lower performance orientation. An orientation towards performance goals through competition with others does not exclude working towards learning goals. In this sense, double goals are generally used by college students who choose their subjects in terms of their future career. For a better understanding of the importance of motivation in the performance of the gifted and talented, we will analyze the case of an atypical talent population in terms of performance level, although very common in our schools: the gifted underachiever.
The gifted underachiever
Gifted children tend to face a crisis while reaching school age, because schools have difficulty meeting their needs. This establishes a gap between them and their peers, who perceive gifted children as superior in abilities and interests. Should gifted children be placed in a regular classroom in order to share with their age peers? Or should they skip school grades in order to be with their mental age peers? Should schools provide special classrooms for intellectually gifted children? Or would it be enough to offer after-school programs for the gifted?
John's story (Blumen, 2013a) tells what usually happens to children with extreme intellectual giftedness in Peruvian schools. When John was in Kindergarten, the teacher referred him for psycho-educational assessment, since the school suspected mental retardation. John's family lived in a rural area, and the mother contacted a psychologist in Lima, the capital city of Peru. Surprisingly, John scored in the very superior range of intellectual ability, exhibited higher emotional resources than his peers, and was recommended for academic acceleration. As Primary schools in rural areas do not provide this kind of support, John was placed in a pull-out program, one hour per week, together with three other gifted children. However, he spent most of his school time in his regular classroom and started to show behavioral problems, constantly interrupting the teacher during class time. Finally, he refused to complete his homework.
John became the classic gifted underachiever: advanced in comparison with his age peers, bored at school, and exhibiting behavioral difficulties. Afterwards, his mother decided to homeschool him. Nowadays, John is a successful graphic designer and runs his own business. This is a child who could not find a space at school, but found a place in life.
John's case shows the importance of motivation in giftedness and talent development. When John's intellectual functioning was assessed, his motivational levels were optimal, and he showed his best performance. However, John was not motivated by routine school work, and therefore his results were below expectations. Since he exhibited motivational difficulties, his school performance was poor, and he ended up leaving the regular educational system. This situation is common to many gifted and talented children who end up functioning as low or low-average students in schools of Latin America and the Caribbean region. Lack of motivation hides the possibility of higher achievement. Gifted children's parents fight to find a proper education for their children, although they are generally perceived as selfish parents with an unrealistic perspective on their children's abilities (Blumen, 2013a; Fleith & Soriano de Alencar, 2007; Webb, Gore, Amend, & DeVries, 2007).
Social-emotional development and giftedness
Most school-age children have different profiles, since some perform better than others in certain areas. Every child has strengths that should be identified and promoted within the regular educational system. Moreover, we have outstanding students with exceptional potential for academic excellence in one or more areas that should be promoted. With or without the label gifted, some are atypical students in the classroom (Mönks & Katzko, 2005). And the more atypical they are, the less likely it is that the standard curriculum established by the Secretary of Education (MINEDU, 2011) will cover their cognitive and affective needs. They not only need something more; they might need something different. In social, personality, and emotional terms, gifted and talented children are also different and might exhibit the following tendencies (Fleith & Soriano de Alencar, 2007; Webb et al., 2007): Introversion: They show a greater tendency to introversion than their peers, and tend to spend periods of time alone, for different reasons: (a) First, they have difficulties in finding peers similar to them, with whom to share their interests and moods; (b) Second, they tend to be isolated from their peers, since they are perceived as nerds; and (c) Third, they tend to become so focused on their own projects that they have less time to socialize. As their internal mental lives are so rich, they understand loneliness from a different perspective than other children. However, they would rather have friends than be alone, and suffer in their loneliness (Li & Csikszentmihalyi, 2014). A mother of a gifted boy from Huacho, Peru, stated: "…I want him to play with other children, but it does not work… he has friends, although he does not like to go partying… just once in a while is more than enough for him… and that, here, in such a small town is a problem… here you have to go out… they have to see you to be invited…" (Blumen, 2009, p. 109).
Independence: Gifted students are highly independent, self-monitored, stubborn, and less conformist (Job et al., 2010). The independent thinking of some gifted students allows them to ignore the temptations and signals of the surrounding culture in order to focus on their talents. While others attend social gatherings, they stay at their desks, play the piano, or program their computers. They can be so involved in their activities that they may not be interested in others' opinions; they might follow their own way (Piirto, 2014; Webb et al., 2007).
Emotional difficulties: Extremely gifted children tend to exhibit greater emotional difficulties than the moderately or highly gifted (Fleith & Soriano de Alencar, 2007; Janos & Robinson, 1985), especially in social relations. Moreover, they are at risk of ruminating on existential problems (Piirto, 2014; Webb et al., 2007) or may exhibit difficulties in their adjustment to school, even when their educational needs are properly attended to. They might seem oppositional or might develop dysfunctional behavior at school, such as absenteeism, headaches, or chronic stomach aches. They might even refuse to do academic work (Webb et al., 2007). Gifted children are part of a minority; they know this, and so do their age peers. They feel different and lonely, and it is difficult for them to find friends. Some strengthen their social networks by taking part in popular activities, such as socially accepted sports, or play in musical bands. They also seek friendship from older peers, and tend to underestimate their social status (Blumen, 2009; Fleith & Soriano de Alencar, 2007).
High and low self-esteem: Gifted students exhibit an unusual combination of high and low self-esteem. They exhibit low self-esteem in relation to their social life, since they do not feel comfortable achieving excellence in their talent area. This is particularly observed in younger children (Csikszentmihalyi, 1988). These are the ways in which gifted students are qualitatively different from average students, and even from those students who are outstanding due to their responsibility, perseverance, and adult support.
Final Remarks
Developmental theorists underline the importance of motivation for talent development across the life-span, depending on the specific domain. Talent development may only take place when the individual actively interacts with the environment and is open to its stimuli. However, most children living in the socioculturally diverse societies of Latin America and the Caribbean exhibit low achievement, performing poorly in reaching learning goals, as international comparisons suggest (Mullis, Martin, Foy, & Arora, 2012). This is also a problem for gifted and talented children, since they tend to develop at lower levels than their gifted peers living in more advanced societies.
In order to maximize their genetic potential, children need a supportive context with opportunities to develop and grow, as well as motivation to interact with the environment and take advantage of the opportunities offered. Therefore, the educational standards of the Latin American and Caribbean countries, currently lower than those of Western Europe or Eastern Asia (CEPAL/UNICEF TACRO, 2010), need to improve for all students. If expectations are higher, then most of the moderately gifted, who are currently bored in class and might exhibit behavioral difficulties, will receive proper stimulation. After all, countries such as Finland, Singapore, or South Korea, which have more demanding educational standards, need few services for their gifted students (Tirri, Tallent-Runnels, Adams, Yen, & Lau, 2002).
Giftedness is the visible result of the interaction between the individual and his or her environment. It could be that the top 1% of children living in the Latin America and Caribbean region are developing at levels equivalent to those of children from more developed countries. However, it is also probable that 5% exhibit academic underachievement and that their talents remain hidden from society. This situation might be the result of using inadequate motivational techniques during class time (Blumen, 2007; CEPAL/UNICEF TACRO, 2010). Therefore, horizontally structured, community-based learning strategies, which provide a more structurally supportive environment for motivating native children driven by social/affective emphasis and expressive creativity, might be considered. These students might be placed in advanced math, science, or social studies classes, at both the Primary and Secondary level. This domain-specific approach would be consistent with Stanley's proposal (1979; Kell et al., 2013). Elementary school children whose schools cannot provide them with advanced studies should have the possibility to attend classes in Secondary education. And those in Secondary education might be able to take college classes.
Children exhibiting extreme giftedness, the so-called prodigies, for whom advanced classes might not be enough, should be given special treatment. They might ideally need special schools for the gifted, in which they can interact with other extremely gifted peers. This condition might not always be possible, particularly in rural areas. However, other options might be explored, such as academic acceleration and homeschooling with the supervision of specialized tutors. These options should be made explicit in the Special Education regulations of every country. Also, monthly gatherings for gifted students, networking, and interactive television might support the consolidation of a network among extremely gifted students.
Finally, it is necessary to state that it is not enough to do our best to identify our gifted children. We must also improve our understanding of what it means to be a gifted or talented child in a socioculturally diverse society, and of how their socio-emotional development unfolds, considering that many of them might be developing in contexts of poverty, lacking opportunities to develop their best abilities. It is important to propose formal guidelines for attending to the gifted and talented in their different manifestations, with the commitment of different agents of civil society and the state, including the participation of academic centers and enterprises, in order to secure a communal, social, and working place in which to promote talent development with social responsibility.
|
v3-fos-license
|
2022-06-24T12:35:20.736Z
|
2021-01-01T00:00:00.000
|
236298307
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://doi.org/10.21203/rs.3.rs-591135/v1",
"pdf_hash": "671ba25f44d2ac1293cb22e8e31cc9196e3ba90d",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42379",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "671ba25f44d2ac1293cb22e8e31cc9196e3ba90d",
"year": 2021
}
|
pes2o/s2orc
|
Effects of Early Sub-Therapeutic Antibiotic Administration and its Subsequent Withdrawal on Body Composition, Gut Microbiota and Metabolite Profiles in a Pig Model
Background: Antibiotic exposure in early life has been shown to be a significant risk factor for later fat accumulation in humans. However, whether early sub-therapeutic antibiotic (STA) exposure affects body composition, and through which mechanisms, remains unclear. The present study used a combination of the comparative slaughter method, microbiota analysis, and metabolomics measurement to investigate the effects of early STA administration and its subsequent withdrawal on body composition, colonic microbiota and metabolite profiles in a pig model. The piglets were fed the same basal starter diet with STA (STA) or without STA (CON) for two weeks during the administration period, and then all piglets were switched to the same nursery diet without STA during the withdrawal period until they reached approximately 25 kg body weight. Results: Results showed that STA did not significantly improve piglet growth performance during the administration period or the withdrawal period. Piglets treated with STA had a lower body water deposition (g/d) during the withdrawal period, and tended to have increased body lipid deposition (g/d) during the withdrawal period and the whole period, compared with the CON group. STA was initially effective in decreasing the abundance of pathogenic bacteria, such as Alloprevotella, Bacteroides, Solobacterium, and Sutterella, during the administration period. However, this effect did not persist during the withdrawal period, leading to a rebound of pathogenic bacteria such as Alloprevotella and an increase in the abundance of other pathogenic bacteria like Oscillibacter. Remarkably, STA treatment decreased the abundance of Blautia, which plays a potential protective role against obesity, both during the administration period and during the withdrawal period. Metabolomic analysis indicated that STA mainly altered amino acid metabolism, lipid metabolism, and carbohydrate metabolism during the two periods. Furthermore, Spearman's correlation analysis showed that the gut microbiota was highly correlated with changes in microbial metabolites. Conclusion: These results suggest that STA administration may alter tissue deposition through reshaping the gut microbiota and their metabolite profiles.
It has been reported that antibiotics can alter the gut microbiota and its metabolism [7], and the causal role of the gut microbiota in modulating fat accumulation has been demonstrated by transplanting the gut microbiota from obese mice [8] or humans [9] into germ-free mice. The gut microbiota functions as an organ with many metabolic, immunological, and endocrine-like effects that are crucial for human health [10,11]. The involvement of the gut microbiota in the pathogenesis of fat accumulation may operate through its influence on energy balance, nutrient absorption, inflammatory pathways, and the gut-brain axis [12]. However, the effects of early STA administration and its subsequent withdrawal on the gut microbiota and bacterial metabolites are poorly understood.
We utilized piglets as a model to determine the relationship between the host's gut microbiota and STA exposure, due to their similarities in anatomy and size to human infants [13,14]. In the present study, an integrated approach combining the comparative slaughter method, 16S rRNA gene sequencing, and liquid chromatography-mass spectrometry (LC-MS) was utilized to investigate the impact of early STA administration and its subsequent withdrawal on body chemical composition, gut microbial composition and the metabolome in a piglet model.
Materials And Methods
The procedures of the present study followed the Chinese guidelines for animal welfare and were approved by the Animal Care and Use Committee of the Guangdong Academy of Agricultural Sciences (GAASIAS-2016-017).
Animals, diets and experimental design
Fifty 21-day-old Duroc × Landrace × Yorkshire weaned piglets with an average initial body weight (BW) of 6.39 ± 0.02 kg were randomly allocated into a control group (CON) and a STA group, with 5 replicates per group and 5 piglets per replicate. The entire experiment was divided into two periods: the administration period and the withdrawal period. A schematic diagram of the experimental design is shown in Fig. 1. During the administration period, piglets in the CON group were fed a basal starter diet, and those in the STA group were fed a basal starter diet supplemented with 30 mg/kg bacitracin methylene disalicylate, 75 mg/kg chlortetracycline, and 300 mg/kg calcium oxytetracycline for 2 weeks.
During the withdrawal period, all piglets were switched to the same nursery diet without STA until they reached an average target BW of approximately 25 kg. Diets were formulated to meet or exceed the National Research Council recommendations [15]. The piglets had ad libitum access to feed and water, and the nursery diet differed from the starter diet. The ingredient and nutrient composition of the nursery diet and starter diet are presented in Table S1.
Sampling
Prior to implementing the dietary treatments, an additional five weaned piglets with a similar initial BW to the experimental pigs were slaughtered to determine initial body composition. Piglets were weighed at the start and the end of the administration period and the withdrawal period to calculate average daily gain (ADG), average daily feed intake (ADFI), and feed conversion ratio (FCR). After fasting for 12 h, five piglets per group (1 piglet from each of the 5 replicates) were anesthetized with sodium pentobarbital and slaughtered by exsanguination at the end of the administration period and the withdrawal period. Colonic digesta was immediately snap-frozen in liquid nitrogen and stored at -80 ℃ until further microbiome and metabolome analysis. The empty gastrointestinal tract, visceral organs, blood, and carcass were stored at -20 °C for experimental analysis. The frozen body components were sheared into small parts in a double-shaft crusher (model L-SP380, LiWill Co. Ltd., Zhengzhou, China), put through a commercial grinder with an 18-mm die (model SG-130, Yusheng Co., Langfang, China), and then minced in a small grinding mill (model GN-130, Yusheng Co., Langfang, China). After homogenizing the body components with a kitchen mixer, subsamples were obtained for chemical analysis.
Growth performance and body composition analysis
During the whole feeding trial, ADG, ADFI, and FCR were calculated. Body composition subsamples were analyzed according to AOAC [16]: water content was determined using a convection oven at 105 ℃; crude protein content was calculated as total N content × 6.25, with total N content determined in a Kjeltec analyzer (model 8400, FOSS Analytical AB, Höganäs, Sweden); crude fat content was measured with an automatic extraction analyzer (model XT 15i, Ankom Technology Co., Macedon, NY); and ash content was determined in a muffle furnace at 550 ℃. The deposition of water, protein, lipid, and ash in the piglets' bodies was calculated by dividing the difference in body chemical composition between the end and the beginning of each trial period by the corresponding number of trial days [17]. For example: body lipid deposition (g/d) = [(final body lipid content (%) × final BW) - (initial body lipid content (%) × initial BW)]/(corresponding trial days).
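To make the deposition calculation concrete, the following is a minimal Python sketch of the formula above; the function name, the unit conversion from kg to g, and the example numbers are illustrative assumptions and are not taken from the study's data.

```python
def tissue_deposition(initial_pct, initial_bw_kg, final_pct, final_bw_kg, trial_days):
    """Daily tissue deposition (g/d) from body composition (% of empty BW) and BW (kg).

    Implements: deposition = [(final % x final BW) - (initial % x initial BW)] / trial days,
    with masses converted from kg to g.
    """
    initial_mass_g = initial_pct / 100.0 * initial_bw_kg * 1000.0
    final_mass_g = final_pct / 100.0 * final_bw_kg * 1000.0
    return (final_mass_g - initial_mass_g) / trial_days


# Hypothetical example only (not study data): lipid rising from 10.8% of a 6.4 kg
# piglet to 14.0% of a 25 kg pig over 56 days gives roughly 50 g/d lipid deposition.
print(round(tissue_deposition(10.8, 6.4, 14.0, 25.0, 56), 1), "g/d")
```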
Microbiome analysis
Total genomic DNA was extracted from colonic digesta with the QIAamp PowerFecal DNA Kit (Qiagen, Hilden, Germany) following the manufacturer's instructions, and its concentration was determined with a NanoDrop 2000 (Thermo Fisher Scientific, Waltham, MA, USA). The V3-V4 region of the 16S rRNA gene was amplified with the universal forward primer 338F (5'-ACTCCTRCGGAGGCAGGCAG-3') and reverse primer 806R (5'-GGACTACCVGGATCTAAT-3'). The PCR amplicon was extracted with the QIAGEN Gel Extraction Kit (Qiagen, Hilden, Germany) following the manufacturer's protocol. Sequencing libraries were generated with a TruSeq® DNA PCR-Free Sample Preparation Kit following the manufacturer's instructions, and index codes were added (Illumina, San Diego, CA, USA). Library quality was evaluated with a Qubit 2.0 fluorometer and an Agilent Bioanalyzer 2100 system. Finally, the library was sequenced on the Illumina NovaSeq platform to produce 250 bp paired-end reads. Bioinformatics analysis was performed as described in previous studies [18].
Microbial Metabolite Measurement
The procedure for microbial metabolite measurement was as previously described [19], using a Vanquish UHPLC system (Thermo Fisher) coupled with an Orbitrap Q Exactive series mass spectrometer (Thermo Fisher). Briefly, the homogenate of ground colonic digesta was centrifuged. The obtained supernatant was diluted with LC-MS grade water, centrifuged for 10 min, and finally injected into the LC-MS/MS system for analysis. Raw UHPLC-MS/MS data were analyzed with Compound Discoverer 3.1 (CD3.1, Thermo Fisher Scientific, Waltham, MA, USA) for peak alignment, peak picking, and quantification of each metabolite.
Statistical analysis
The data were analyzed with Student's t-test if they fitted a Gaussian distribution, or with the Wilcoxon test if they were not normally distributed, using SAS 9.4 (SAS Inst., Inc., Cary, NC). Data are expressed as means ± standard deviation (SD). Correlations between the gut microbiota and metabolite profiles were analyzed with Spearman's correlation test. Significant differences were declared at P < 0.05 and tendencies at 0.05 < P < 0.10.
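For readers who want to reproduce this decision rule outside SAS, the sketch below is a minimal Python/SciPy illustration (the study itself used SAS 9.4, so this is an assumption for demonstration only). It applies a Shapiro-Wilk normality check, then either a t-test or a rank-based test for two independent groups, and also shows the Spearman correlation used for the microbiota-metabolite analysis; the function name and the toy values are hypothetical.

```python
import numpy as np
from scipy import stats


def compare_groups(con, sta, alpha=0.05):
    """Student's t-test if both groups pass a Shapiro-Wilk normality check,
    otherwise a rank-based test (Mann-Whitney U, i.e. the Wilcoxon rank-sum
    test for independent samples)."""
    con, sta = np.asarray(con, float), np.asarray(sta, float)
    normal = stats.shapiro(con).pvalue > alpha and stats.shapiro(sta).pvalue > alpha
    if normal:
        stat, p = stats.ttest_ind(con, sta)
        return "t-test", stat, p
    stat, p = stats.mannwhitneyu(con, sta, alternative="two-sided")
    return "Mann-Whitney U", stat, p


# Toy example with 5 hypothetical replicate values per group.
con = [0.52, 0.49, 0.55, 0.50, 0.53]
sta = [0.47, 0.44, 0.48, 0.46, 0.45]
print(compare_groups(con, sta))

# Spearman correlation between one genus and one metabolite (toy vectors).
rho, p = stats.spearmanr([1, 2, 3, 4, 5], [2, 1, 4, 3, 5])
print(rho, p)
```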
Results
Effects of STA administration and its subsequent withdrawal on growth performance
The effects of STA administration and its subsequent withdrawal on the growth performance of piglets are given in Table 1. We found that STA treatment had no influence (P > 0.05) on ADG and ADFI during the administration period, the withdrawal period or the whole period compared with the CON group, except that the FCR within the withdrawal period tended to be higher (0.05 < P < 0.10) in the STA group than in the CON group. Moreover, the number of experimental days needed to reach the target BW did not differ significantly (P > 0.05) between the two groups.
Effects of STA administration and its subsequent withdrawal on the body chemical composition (% of empty BW)
The average body chemical composition of piglets slaughtered at the start of the experiment, on a percentage basis (% of empty BW), was 69.7 ± 1.82, 15.6 ± 0.35, 10.8 ± 1.85 and 3.04 ± 0.13 for water, protein, lipid and ash, respectively. The effects of STA administration and its subsequent withdrawal on the body chemical composition of piglets are presented in Table 2. At the end of the administration period and the withdrawal period, no difference (P > 0.05) between the two groups was observed in the body chemical composition of piglets.
Effects of STA administration and its subsequent withdrawal on tissue deposition (g/d)
The effects of STA administration and its subsequent withdrawal on tissue deposition are presented in Table 3. During the administration period, no differences (P > 0.05) in tissue deposition were observed between the two groups. Only when calculated as tissue deposition per day was there a significant effect on water deposition (P < 0.05) in the withdrawal period, together with tendencies towards increased lipid deposition (0.05 < P < 0.10) in the withdrawal period and the whole period, driven by the effect seen in the withdrawal period.
Effects of STA administration and its subsequent withdrawal on gut microbiota structure
To evaluate the influence of STA administration and its subsequent withdrawal on gut microbiota structure, the 16S rRNA gene sequences were amplified. After the administration period, no significant difference (P > 0.05) in α-diversity indices, including the observed species, Ace, Shannon and Simpson indices, was found between the two groups (Fig. 2A). PCoA and NMDS plots based on the Bray-Curtis distance were used to assess the differences in beta-diversity, and the results showed that the two groups were well separated (Fig. 2C). At the end of the withdrawal period, the STA group had lower species richness and diversity indices compared with the CON group, as reflected by the decreased (P < 0.05) observed species and Ace index (Fig. 2B). However, the Shannon and Simpson indices did not differ (P > 0.05) between the CON and STA groups (Fig. 2B). For beta-diversity, the results indicated significant differences between the two groups (Fig. 2D).
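The diversity summaries reported above can be computed directly from an OTU count table. The sketch below is an illustrative NumPy/SciPy example, not the bioinformatics pipeline used in the study, and the toy counts and sample layout are hypothetical: it derives observed species, Shannon and Simpson indices per sample, and the Bray-Curtis distance matrix that underlies the PCoA and NMDS ordinations.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform


def alpha_diversity(counts):
    """Observed OTUs, Shannon index and (Gini-)Simpson index for one sample."""
    counts = np.asarray(counts, dtype=float)
    counts = counts[counts > 0]
    p = counts / counts.sum()
    observed = counts.size
    shannon = float(-(p * np.log(p)).sum())
    simpson = float(1.0 - (p ** 2).sum())
    return observed, shannon, simpson


# Toy OTU table: rows = samples, columns = OTUs (hypothetical counts).
otu = np.array([[10, 0, 5, 3],
                [8, 2, 4, 0],
                [1, 9, 0, 7]], dtype=float)

for sample in otu:
    print(alpha_diversity(sample))

# Bray-Curtis distances between samples, the input for PCoA/NMDS ordination.
bray = squareform(pdist(otu, metric="braycurtis"))
print(np.round(bray, 3))
```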
Venn analysis identified 262 and 205 unique operational taxonomic units (OTUs) in the CON and STA groups, respectively, and 463 OTUs shared by the two groups at the end of the administration period (Fig. 3A); and 395 and 222 unique OTUs in the CON and STA groups, respectively, and 783 shared OTUs at the end of the withdrawal period (Fig. 3B). At the end of the administration period, compared with the CON group, the relative abundances of Alloprevotella, Sphingomonas, Bacteroides, Solobacterium, Blautia, Massilia and Sutterella were dramatically decreased (P < 0.05) in the STA group (Fig. 3C). After the withdrawal period, STA treatment increased (P < 0.05) the relative abundances of Alloprevotella and Oscillibacter, and decreased (P < 0.05) the relative abundances of Blautia, Succinivibrio, Corynebacterium, Methanosphaera, Desulfovibrio and Holdemanella compared with the CON treatment (Fig. 3D).
Effects of STA administration and its subsequent withdrawal on metabolite profiles
To further explore the impact of STA administration and its subsequent withdrawal on the gut microbiota, LC-MS was used to analyze the metabolite profiles in the CON and STA groups. The PLS-DA model showed that the STA group separated from the CON group both at the end of the administration period (Fig. 4A) and at the end of the withdrawal period (Fig. 4B). Based on the criteria of VIP > 1, P < 0.05, and fold change ≥ 1.20 or ≤ 0.83, 25 and 36 differential metabolites were identified at the end of the administration period (Fig. 4C) and the end of the withdrawal period (Fig. 4D), respectively.
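The selection rule stated above (VIP > 1, P < 0.05, fold change ≥ 1.20 or ≤ 0.83) can be expressed as a simple table filter. Below is a small pandas sketch; the column names and the example values are hypothetical placeholders rather than the study's actual data.

```python
import pandas as pd

# Hypothetical columns: metabolite name, PLS-DA VIP score, P-value, fold change (STA/CON).
df = pd.DataFrame({
    "metabolite": ["inositol", "proline", "D-mannitol"],
    "VIP": [1.6, 0.9, 2.1],
    "pvalue": [0.01, 0.20, 0.03],
    "fold_change": [0.75, 1.05, 1.42],
})

# Keep metabolites meeting all three criteria; fold change may be up- or down-regulated.
differential = df[
    (df["VIP"] > 1)
    & (df["pvalue"] < 0.05)
    & ((df["fold_change"] >= 1.20) | (df["fold_change"] <= 0.83))
]
print(differential)
```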
In the present study, MetaboAnalyst (http://www.metaboanalyst.ca/) was used to perform metabolic pathway enrichment analysis. The results showed that STA administration had significant effects on glycerophospholipid metabolism, ascorbate and aldarate metabolism, taurine and hypotaurine metabolism, vitamin metabolism, amino acid metabolism and galactose metabolism (Fig. 6A); antibiotic withdrawal mainly affected purine metabolism, amino acid metabolism, vitamin metabolism, galactose metabolism and the biosynthesis of unsaturated fatty acids (Fig. 6B).
Relationship between microbiota and metabolites
To detect the relationship between the colonic microbiome and its metabolites, we performed Spearman's correlation analysis for metabolites with VIP > 1 and bacterial genera with significant differences between the STA and CON groups. At the end of the administration period, the relative abundances of Alloprevotella and Sutterella were positively correlated with inositol. The relative abundances of Bacteroides and Solobacterium were positively correlated with biocytin, tomatidine, 7α-hydroxytestosterone, inositol, picolinamide and feruloylcholine, while they were negatively correlated with phosphatidylglycerol
Discussion
It has been reported that early antibiotic exposure might program later body composition and therefore might be a determinant of obesity risk, which is associated with alterations in gut microbiota composition and metabolites. Since antibiotics have been used to promote growth in animals [20] and have been proposed as therapeutic regimes for malnutrition in humans [21], we first studied the effect of STA on pig growth. It was found that, under well-controlled environmental conditions, STA did not significantly improve piglet growth performance during the administration period, which is consistent with other studies [22,23]. It is generally assumed that the growth-promoting mechanism of antibiotics is related to their ability to reduce clinical and subclinical infections under sanitary challenges [20]. A meta-analysis involving more than 900 infants also showed that the positive effects of antibiotics were most prominent in the youngest and most malnourished children, but were often less dramatic and not statistically significant in children without disease [24]. A previous report [25] found that STA affected subsequent performance negatively, which may be because early STA administration increased susceptibility to pathogens during the withdrawal period, as discussed later. In the present study, however, STA administration did not affect subsequent performance during the withdrawal period.
In the present study, STA did not significantly alter the body composition and tissue deposition of piglets at the end of the 2-week administration period, which contrasts with a previous report [26] that STA increased fat mass in young mice after 7 weeks of exposure. This may be attributed to differences in STA type and dose, or to the shorter STA administration time, which may not have been long enough to produce differences in body composition between the groups. During the withdrawal period and the whole period, piglets in the STA group tended to have a higher body lipid deposition than those in the CON group.
It was found that, during the administration period, STA treatment decreased the abundances of several harmful/pathogenic bacteria (Alloprevotella, Bacteroides, Solobacterium and Sutterella) compared with the CON group. Alloprevotella is considered an opportunistic pathogenic microorganism that causes infections in the host [27,28]. It has been reported that individuals with the Bacteroides enterotype have increased susceptibility to disease [29]. Solobacterium is positively correlated with colorectal cancer [30]. Previous studies showed that, rather than directly inducing substantial inflammation, Sutterella can degrade IgA and thereby impair the functionality of the intestinal antibacterial immune response [31,32]. However, during the withdrawal period, STA treatment increased the abundances of several harmful bacteria, for example Alloprevotella and Oscillibacter, and decreased the abundances of several beneficial bacteria such as Succinivibrio and Desulfovibrio. For example, Oscillibacter has been reported to promote metabolic diseases and gut dysbiosis [33]. Succinivibrio was reported to be lower in humans with environmental enteric dysfunction (a causative factor of childhood stunting) [34]. Desulfovibrio is significant in sugar metabolism and is negatively associated with inflammation markers [35,36]. These results suggest that STA was initially effective in decreasing the abundance of pathogenic bacteria during the administration period, but was not able to sustain this effect during the withdrawal period, leading to a rebound of pathogenic bacteria such as Alloprevotella and an increase in the abundance of other pathogenic bacteria. Evidence has shown that antibiotic administration, especially in early childhood, increases susceptibility to intestinal infections after antibiotic cessation [37].
Remarkably, we found that STA treatment decreased the abundance of Blautia, which plays a potential protective role against obesity, both during STA administration and during the subsequent withdrawal period. Blautia, a common acetic acid-producing bacterium, may suppress insulin-mediated fat deposition in adipocytes and promote the metabolism of unbound lipids and glucose in other tissues by activating the G protein-coupled receptors GPR41 and GPR43, thereby alleviating obesity-related diseases [38]. A previous study found that Blautia was the only intestinal microorganism negatively correlated with visceral fat accumulation, an adiposity biomarker for metabolic disease risk [39]. In a study of the differential microbiota between lean- and fat-line chickens, Blautia was significantly reduced in the latter [40]. Similarly, significant depletion of Blautia was observed in obese children [41,42].
Metabolomics, an effective method to detect variant metabolites and biochemical pathways [43,44], was utilized to further explore gut microbiota metabolism in response to STA administration and its subsequent withdrawal. The PLS-DA model showed a clear separation of colonic metabolites between the STA and CON groups, both within the administration period and within the withdrawal period, suggesting significant differences in the metabolic profiles due to the different treatments. During the administration period, STA significantly altered glycerophospholipid metabolism, as reflected by the increased concentrations of phosphatidylglycerol (3:0/18:1) and lysophosphatidylethanolamine 22:5. Previous studies showed that glycerophospholipids are vital in strengthening the intestinal barrier [45,46]. The compounds involved in amino acid metabolism, such as L-cysteinesulfinic acid, phenylacetylglycine, phenylacetylglutamine and 5-hydroxylysine, were dramatically decreased in the STA group compared with the CON group, which suggests that less nitrogen source was left for microbial fermentation in the large intestine. A previous report found that antibiotics could upregulate the gene expression of amino acid transporters and receptors in the small intestine and thereby improve the absorption of amino acids [47]. However, during the withdrawal period, piglets in the STA group showed higher levels of amino acid-related compounds such as proline, L-lysine, L-cystathionine and S-adenosylhomocysteine in comparison with the CON group, indicating an increased amount of protein-derived substrate for microbial fermentation in the colon, which is consistent with a previous study [48]. This could be harmful to host health due to the possible formation of a range of toxic and harmful products from protein fermentation, such as ammonia, indoxyl sulfate and trimethylamine oxide [49]. Meanwhile, in the present study, STA significantly altered compounds belonging to the fatty acid esters of hydroxy fatty acids (FAHFAs) [50]. Previous studies have shown that palmitic acid esters of hydroxy stearic acids and the family of polyunsaturated FAHFAs have anti-inflammatory and immunomodulatory effects [51,52]. Besides, several new short-chain FAHFAs (SFAHFAs), in which acetic acid or propanoic acid is esterified to long-chain hydroxy fatty acids, tended to be lower in mice fed a high-fat diet than in those fed a regular diet [53]. Similarly, STA treatment decreased the concentrations of several SFAHFAs such as FAHFA (2:0/24:2) and FAHFA (4:0/22:0) during the withdrawal period, which coincided with a higher body lipid content and a lower abundance of Blautia, a bacterium with a protective role against obesity. These results suggest that STA plays beneficial roles in gut health during the administration period, but may exert harmful effects on gut health during the withdrawal period. Simultaneously, most compounds involved in carbohydrate metabolism, such as D-arabinose, D-mannitol and coniferin, were increased by the STA treatment compared with the CON treatment during the withdrawal period, indicating that most carbohydrates can be fermented by the gut microbiota in the colon.
The gut microbiota and microbial metabolites are vital for maintaining the intestinal immune balance. In this study, we found correlations between the gut microbiota and microbial metabolites. However, whether the STA-induced effects on the gut microbiota and microbial metabolites would influence host immunity, and through which mechanism, is still unknown. In the future, we will further study the relationship, after STA administration, between the gut microbiota, microbial metabolites, the corresponding metabolite receptors and host immunity.
Conclusions
This study utilized the comparative slaughter method together with microbial and metabolite measurements and found that STA administration may alter tissue deposition through reshaping the gut microbiota and their metabolite profiles. These results may be helpful for future evaluations of STA use in the context of human nutrition.
Availability of data and materials
The datasets supporting the conclusions of this article are included within the article.
Tables
Table 1. Effects of STA administration and its subsequent withdrawal on initial BW, ADG, ADFI and FCR of piglets 1
Figure 1
A schematic diagram for the experimental design. Fifty 21-day-old Duroc × Landrace × Yorkshire weaned piglets with an average initial body weight (BW) of 6.39 ± 0.02 kg were randomly allocated into a control group (CON) and a STA group with 5 replicates in each, comprising 5 piglets in each replicate. The entire experiment was divided into two periods: the administration period and the withdrawal period. During the administration period, piglets in the CON group were fed a basal starter diet, and those in the STA group were fed a basal starter diet supplemented with 30 mg/kg bacitracin methylene disalicylate, 75 mg/kg chlortetracycline, and 300 mg/kg calcium oxytetracycline for 2 weeks. During the withdrawal period, all piglets were switched to the same nursery diet without STA until they reached an average target BW of approximately 25 kg.
Figure 2
Effects of STA administration and its subsequent withdrawal on gut microbiota diversity in piglets. The α-diversity indices, including observed species, Ace, Shannon and Simpson index, at the end of the administration period (A) and the end of the withdrawal period (B). Data are expressed as means ± SD.
Each point represents one sample. The β-diversity visualized in PCoA plot and NMDS plot at the end of the administration period (C) and the end of the withdrawal period (D). CON, control group. STA, subtherapeutic antibiotic group.
Figures 3 and 4 (caption fragments): data are expressed as means ± SD; each point represents one sample; *, P-value < 0.05; plots visualize the identified differential metabolites between groups at the end of the administration period (C) and the end of the withdrawal period (D). CON, control group. STA, sub-therapeutic antibiotic group.
Figure 5
Classification of metabolites with significant differences between the STA and CON groups at the end of the administration period (A) and the end of the withdrawal period (B). CON, control group. STA, sub-therapeutic antibiotic group.
Figure 6
Metabolic pathway enrichment analysis. Overview of metabolites that were enriched in the colon of piglets fed the STA diet compared to piglets fed the CON diet at the end of the administration period (A) and after the withdrawal period (B). CON, control group. STA, sub-therapeutic antibiotic group.
Figure 7
Relationship between colonic microbiota (at the genera level) and metabolites of piglets fed the STA diet or the CON diet at the end of the administration period (A) and the end of the withdrawal period (B). The circle border and circle filling are colored according to the Spearman's correlation coefficient distribution and sized based on the correlation coefficient value. Red-filled circles represent significantly positive correlations (P < 0.05), blue-filled circles represent significantly negative correlations (P < 0.05), and white-filled circles represent no significant correlation (P > 0.05). CON, control group. STA, sub-therapeutic antibiotic group.
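A minimal sketch of the kind of genus-metabolite correlation screen summarized in this figure is given below; the genus and metabolite names, sample size, and data values are placeholders, and the significance cut-off simply mirrors the P < 0.05 threshold stated in the legend.

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

# Synthetic tables: rows are samples, columns are genera / metabolites.
rng = np.random.default_rng(1)
genera = pd.DataFrame(rng.random((12, 4)),
                      columns=["Blautia", "GenusB", "GenusC", "GenusD"])
metabolites = pd.DataFrame(rng.random((12, 3)),
                           columns=["proline", "D-mannitol", "MetaboliteX"])

records = []
for g in genera.columns:
    for m in metabolites.columns:
        rho, p = spearmanr(genera[g], metabolites[m])
        records.append({"genus": g, "metabolite": m, "rho": rho, "p": p})

corr = pd.DataFrame(records)
significant = corr[corr["p"] < 0.05]   # same significance cut-off as the figure
print(significant)
```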
Supplementary Files
This is a list of supplementary files associated with this preprint. Click to download.
|
v3-fos-license
|
2019-05-04T13:03:13.823Z
|
2019-04-13T00:00:00.000
|
144207226
|
{
"extfieldsofstudy": [
"Computer Science",
"Geology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2072-4292/11/8/904/pdf?version=1555148668",
"pdf_hash": "c24058013bdb5fb671adeaa2d94ff8289734dc76",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42380",
"s2fieldsofstudy": [
"Environmental Science",
"Geography"
],
"sha1": "c24058013bdb5fb671adeaa2d94ff8289734dc76",
"year": 2019
}
|
pes2o/s2orc
|
Assessment of Physical Water Scarcity in Africa Using GRACE and TRMM Satellite Data
The critical role of water in enabling or constraining human well-being and socioeconomic activities has led to an interest in quantitatively establishing the status of water (in)sufficiency over space and time. Falkenmark introduced the first widely accepted measure of water status, the Water Scarcity Index (WSI), which expressed the status of the availability of water resources in terms of vulnerability, stress, and scarcity. Since then, numerous indicators have been introduced, but nearly all adopt the same basic formulation: water status is a function of the "available water" resource divided by the demand or use. However, the accurate assessment of "available water" is difficult, especially in data-scarce regions, such as Africa. In this paper, therefore, we introduce a satellite-based Potential Available Water Storage indicator, PAWS. The method integrates GRACE (Gravity Recovery and Climate Experiment) satellite Total Water Storage (TWS) measurements with the Tropical Rainfall Measuring Mission (TRMM) precipitation estimates between 2002 and 2016. First, we derived the countries' Internal Water Storage (IWS) using GRACE and TRMM precipitation data. Then, the IWS was divided by the population density to derive the PAWS per capita. Following the Falkenmark thresholds, 54% of countries are classified in the same water vulnerability status as the AQUASTAT Internal Renewable Water Resources (IRWR) method. For the remaining countries, the PAWS index leads to a shift of one or two categories (left or right) in water status. The PAWS index shows that 14% (~160 million people) of Africa's population currently live under water scarcity status. With respect to future projections, the PAWS index suggests that a 10% decrease in future water resources would affect ~37% of Africa's 2025 population (~600 million people), and 57% for 2050 projections (~1.4 billion people). The proposed approach largely overcomes the constraints related to the data needed to rapidly and robustly estimate available water resources by incorporating all stocks of water within the country, as well as underscoring the recent water storage dynamics. However, the estimates obtained concern potential available water resources, which may not be utilizable owing to practical, economic, and technological constraints.
Introduction
Concerns regarding the effects of climate change and climate variability have combined with greater awareness of the food-energy-water nexus to intensify interest in the real and perceived risk of water scarcity [1][2][3]. The term "water scarcity" is a relative concept describing a gap between available water resources and demand. For example, most methods do not account for all forms of "available water", notably soil moisture and groundwater, due to lack of data [16,23].
To support a general framework and methodology for quantitatively measuring "available water", the Food and Agricultural Organization (FAO) established a global water information system known as "AQUASTAT" to collect, analyze, and disseminate data and information by country. According to AQUASTAT, a country's (CTRY) total renewable water resources (TRWR) consist of the renewable water resources generated within the country, plus the net difference between the internally generated water resources leaving the country and the externally generated water resources entering the country. Arithmetically,

TRWR_natural = IRWR_CTRY + ERWR_natural,

where TRWR_natural: total renewable water resources; IRWR_CTRY: internal renewable water resources; and ERWR_natural: external renewable water resources. Details of the methodology, data requirements, and underlying assumptions are contained in [24]. IRWR is calculated as

IRWR_CTRY = R + I − (Q_out − Q_in),

where R: surface runoff calculated as the long-term average annual flow of surface water generated by direct runoff from endogenous precipitation; I: groundwater recharge generated from precipitation within the country; (Q_out − Q_in): the difference between base flow or groundwater contribution to rivers and seepage from rivers into aquifers. Similarly, ERWR is calculated from [24] as

ERWR_natural = SW_IN + GW_IN, with SW_IN = SW_PR + SW_PL,

where SW_IN: surface water entering the country; SW_PR: the amount of water entering the country through rivers measured at the border; SW_PL: the portion of water in shared lakes belonging to the country; GW_IN: groundwater entering the country. While this approach streamlined the process of determining water scarcity at country or basin level, constraints related to data availability and reliability remain. Even for precipitation and stream discharge, in-situ data may not be available, accessible (due to conflict or wars), or of acceptable quality due to differences in standards and procedures, including, for example, how frequently critical rating curve equations are updated. Additionally, many countries do not have reliable, temporally continuous, and spatially representative groundwater monitoring programs. As a result, groundwater is often ignored or assumed to be negligible even though it may account for as much as 70% of water withdrawal and use, especially in the rural areas in developing countries [25]. Additionally, the IRWR estimates are updated infrequently, possibly due to difficulties associated with data. For example, for most countries in the database, IRWR has been fixed at 1962 estimates.
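A small numeric sketch of this bookkeeping, under one plausible reading of the variable list above, is shown below; all input values are invented placeholders (km³/yr), not AQUASTAT figures.

```python
# All figures are made-up placeholders in km^3/yr, not real country statistics.
def irwr(R, I, Q_out, Q_in):
    # Internal renewable water resources: runoff plus recharge, minus the
    # overlap between surface water and groundwater (base flow vs. seepage).
    return R + I - (Q_out - Q_in)

def erwr(SW_pr, SW_pl, GW_in):
    # External renewable water resources: border-river inflow, the country's
    # share of shared lakes, and groundwater entering the country.
    return SW_pr + SW_pl + GW_in

def trwr(internal, external):
    return internal + external

internal = irwr(R=50.0, I=20.0, Q_out=15.0, Q_in=5.0)   # 60 km^3/yr
external = erwr(SW_pr=30.0, SW_pl=5.0, GW_in=2.0)       # 37 km^3/yr
print(trwr(internal, external))                          # 97 km^3/yr
```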
In this paper, therefore, we introduce the concept of Potential Available Water Storage (PAWS) derived by integrating the monthly Total Water Storage (TWS) from GRACE (Gravity Recovery and Climate Experiment) satellite data with Tropical Rainfall Measuring Mission (TRMM) precipitation estimates. The proposed index is used to assess "potentially available water" resources for 48 African countries. The proposed approach circumvents many of the limitations related to data unavailability and reliability in data-scarce regions, such as Africa. In fact, Africa's 2017 estimated population of 1.2 billion is projected to double by the year 2050 to 2.4 billion people. Such rapid population growth will exert considerable stress on the continent's available water resources, worsening the already acute water scarcity situation [26]. Therefore, Africa can benefit from a methodology for rapidly and reliably estimating the status of water resources vulnerability. Additionally, this study contributes to expanding the range of applications and beneficial impacts of GRACE and the GRACE Follow-On mission (GRACE-FO), as well as the global satellite gridded precipitation products, such as TRMM data. It also represents a reliable methodology of water vulnerability assessment, especially for risky conflict zones and regions where hydrological observations are inaccessible. Finally, the proposed PAWS index produces proxy estimates of the potentially available water resources, including the groundwater component in the study domain, which is especially valuable given the lack of groundwater monitoring sites in many parts of the study area.
Materials and Methods
Despite the recent advances in satellite-based hydrological measurements (e.g., TRMM, the Global Precipitation Measurement mission (GPM), Moderate Resolution Imaging Spectroradiometer-Evapotranspiration (MODIS-ET)) and blended and reanalysis grids (e.g., Global Precipitation Climatology Centre (GPCC), Climatic Research Unit Time Series (CRU TS), National Centers for Environmental Prediction (NCEP), Noah Land Surface Model (Noah LSM)), our understanding of the water balance for data-poor regions remains limited. Satellite-based and gauge-corrected hydrological grids provide a valuable data source that fills the gaps of the in-situ observations over space and time. Table 3 summarizes the data utilized in this research; the temporal coverage of the data is between April 2002 and December 2016.
GRACE TWS Anomalies
Since it first launched in 2002, GRACE has provided unprecedented hydrological information about the changes in water budget components [32,33]. GRACE sums the total variation in TWS (i.e., the water mass contained in different hydrological reservoirs, including the surface water, soil moisture, groundwater, and snowpack components) [34][35][36][37][38] as

TWS = SW + SM + GW + SN, (5)

where SW: surface water, SM: soil moisture, GW: groundwater, and SN: snowpack. GRACE-derived TWS may be considered analogous to the traditional water budget storage (∆S).
By removing the surface water and soil moisture components using either in-situ data, remote sensing observations, or Land Surface Model (LSM) outputs, the GWS can be isolated [33,39] as

GWS = TWS − (SW + SM). (6)

Besides, at the basin scale, solving the water balance equation can lead to isolating either the runoff (river discharge) [32,40,41] or the evapotranspiration [42][43][44][45].
The spatial resolution of the GRACE data is around 300 km using either spherical harmonics (SH) or Mass Concentration blocks (mascons) solutions. This is intrinsic to the data acquisition, or the original GRACE satellites footprint of ~200,000 km² [46]. Generally, SH solutions are applicable to studying changes in TWS at the basin scale [40,47,48] or over areas greater than 4-degree resolution. In 2012, Landerer and Swenson introduced a global gridded SH product at a 1-degree grid scale (~100 km) [34]. However, the SH products are strongly affected by leakage and spurious noise known as north-south striping.
The mascons, however, allow better estimation of the TWS anomaly by reducing these problems. Historically, the mascon technique was first developed and applied by the gravity group at the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC), (GSFC-M) [49,50]. In 2015, the NASA Jet Propulsion Laboratory (JPL) introduced a new mascon product, the JPL-M solutions, which was made available by [51]. The JPL-M solves the gravity field functions within fixed mass blocks of 3 × 3-degree resolution [51]. In 2016, the Center for Space Research (CSR) at the University of Texas at Austin introduced another mascon product, the CSR-M [27]. The CSR-M data were estimated using the same standards as the preceding GRACE-SH [27]. However, CSR-M data have the advantage of retaining location information that can be used over smaller areas (~100 km) [52], reducing residual noise and minimizing spatial leakage error. The CSR-M based TWS data can be integrated directly without applying any scaling factor. This research utilizes the CSR-M data at a 1-degree resolution to comply with the original GRACE footprint, and TWS data were extracted for Africa at the country level. The CSR-M data can be accessed via http://www2.csr.utexas.edu/grace/RL05_mascons.html.
TRMM Precipitation Estimates
The Tropical Rainfall Measuring Mission (TRMM) is a joint mission of NASA and the Japan Aerospace Exploration Agency that began in January 1998. TRMM monthly precipitation products are computed as quasi-global grids of 0.25° resolution combining microwave-IR-gauge estimates of precipitation. The TRMM research product is recommended for global and regional water balance studies and hydrological model simulation. In this research, we utilized the TRMM 3B42 research product. Precipitation data were co-registered to a fixed 1-degree resolution grid similar to the aggregated CSR-M estimates. The TRMM data were sampled at the country level using individual country shapefiles.
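The co-registration step could, for instance, be approximated by block-averaging the 0.25-degree grid onto 1-degree cells; the sketch below uses xarray on synthetic data and is not the authors' processing chain.

```python
import numpy as np
import xarray as xr

# Synthetic 0.25-degree precipitation field over an Africa-sized window.
lat = np.arange(-39.875, 40.0, 0.25)          # 320 cells
lon = np.arange(-19.875, 55.0, 0.25)          # 300 cells
precip = xr.DataArray(
    np.random.gamma(2.0, 30.0, (lat.size, lon.size)),
    coords={"lat": lat, "lon": lon},
    dims=("lat", "lon"),
    name="precipitation",                     # mm/month
)

# Average 4 x 4 blocks of 0.25-degree cells into 1-degree cells.
precip_1deg = precip.coarsen(lat=4, lon=4, boundary="trim").mean()
print(precip_1deg.shape)                      # (80, 75)
```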
Ancillary Data
Other ancillary data utilized include the four soil moisture (SM) estimates, as well as the canopy water content (CWC), from the Global Land Data Assimilation System (GLDAS) Noah-LSM. Summing the average of the SM and CWC estimates gives the Land Water Content (LWC) required for GW storage estimation according to Equation (6). The LWC anomalies were constructed using the same GRACE baseline by subtracting the averaged grids from January 2004 to December 2009 from all monthly grids. The GLDAS-Noah datasets are available as 1° resolution grids via (https://disc.sci.gsfc.nasa.gov/datasets?page=1&keywords=gldas%20noah).
The data were co-registered similarly to the GRACE grids. The IRWR data for Africa were acquired from the AQUASTAT database at (http://www.fao.org/nr/water/aquastat/data/query/index.html?lang=en). The lake level altimetry observations for four major African lakes (Tana, Victoria, Malawi, and Tanganyika) and two reservoirs (Volta and Nasser) were obtained from the HYDROWEB portal (http://hydroweb.theia-land.fr/); noteworthy, these lake level altimetry observations are in good agreement with in-situ water level observations according to [37,53]. Additionally, through personal communications, we acquired the time series of the depth to groundwater for twenty observational wells in North Ghana from December 2005 to December 2012. The groundwater data were collected as part of project BRAVE (Building understanding of climate variability into the planning of groundwater supplies from low storage aquifers in Africa), headed by the British Geological Survey (BGS). Potential evapotranspiration (PET) estimates were acquired from the Climatic Research Unit (CRU) at the University of East Anglia (UEA). CRU provides monthly reanalysis datasets calculated at high resolution (0.5° × 0.5°) [29]; herein, we utilized the CRU TS v. 4.02 version for the period from 2002 to 2016. CRU grids are available via (https://crudata.uea.ac.uk/cru/data/hrg/). The annual average precipitation and PET data were utilized to calculate the aridity index (AI) according to the approach of [54] (i.e., AI = P/PET). Countries are classified according to AI into hyper-arid, AI < 0.05; arid, 0.05 < AI < 0.20; semi-arid, 0.20 < AI < 0.50; and humid, AI > 0.50 [54]. The AI was utilized to understand the relationship between TWS uncertainties and countries' aridity (see Section 3.2). Finally, the current and future projections of population counts for 48 African countries were downloaded via the World Bank portal at (https://data.worldbank.org/data-catalog/population-projection-tables). Countries' population densities were established as the population count per unit area.
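The aridity classification quoted above translates directly into a small helper; the input values below are illustrative only.

```python
# Thresholds as quoted in the text (following [54]); inputs are illustrative.
def aridity_class(p_annual_mm, pet_annual_mm):
    ai = p_annual_mm / pet_annual_mm
    if ai < 0.05:
        return "hyper-arid"
    if ai < 0.20:
        return "arid"
    if ai < 0.50:
        return "semi-arid"
    return "humid"

print(aridity_class(250.0, 1800.0))   # AI ~ 0.14 -> "arid"
```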
PAWS Index
We argue that the ∆TWS determined from Equation (5) is analogous to the change in storage (∆S) calculated in classical water budget hydrology. As such, it can be used as a proxy to calculate a country's Internal Water Storage (IWS). The IWS accounts for the water availability in all forms within a country's borders (i.e., surface and groundwater storage). Conceptually, available water (or change in storage) can be estimated as the net difference between inflows (from precipitation, surface, and groundwater) and outflows (evapotranspiration losses, surface, and groundwater outflow) to the hydrologic system [55]. In classical hydrology, this is expressed in terms of fluxes, that is, ∆S is obtained as the residual between input and output, or (I − O = ∆S), which can be rearranged as

∆S = P − ET − R,

where R: runoff; ET: evapotranspiration; P: precipitation; ∆S: the change in storage.
In contrast, the GRACE-based approach integrates all effects of fluxes and anthropogenic factors within the system or study domain and estimates the available water as the net change in storage (Figure 1). Therefore, to calculate the potential available water storage per capita (PAWS), first, we estimate the ∆TWS between two consecutive months according to [43],

∆TWS(i) = TWS(i + 1) − TWS(i).

Because there generally exists a one-month lag between precipitation and TWS [56][57][58], the monthly IWS per country is determined as the difference between the TRMM precipitation estimate of month(i) and the ∆TWS of the consecutive month(i + 1),

IWS(i) = P(i) − ∆TWS(i + 1),

where IWS is expressed in units of mm/month. Then, PAWS is obtained as the average monthly IWS (m/yr.), (Σ IWS)/n, divided by the country population density (population count per unit area). The PAWS unit is expressed as (m³/yr. per capita).
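A compact sketch of this chain of calculations, on synthetic monthly series and with placeholder population and area figures, might look as follows; it follows the lag and unit conventions described above rather than the authors' exact code.

```python
import numpy as np

rng = np.random.default_rng(2)
months = 24
precip = rng.gamma(2.0, 40.0, months)               # TRMM precipitation, mm/month
tws = np.cumsum(rng.normal(0.0, 15.0, months))      # GRACE TWS anomaly, mm

# Change in storage between consecutive months.
d_tws = np.diff(tws)                                # d_tws[i] = TWS[i+1] - TWS[i]

# One-month lag: IWS for month i uses precipitation of month i and the
# storage change realized in month i+1.
iws_monthly = precip[:-1] - d_tws                   # mm/month

iws_annual_m = iws_monthly.mean() * 12.0 / 1000.0   # average IWS in m/yr

population = 25e6                                   # placeholder population count
area_m2 = 600e3 * 1e6                               # placeholder area (600,000 km^2)
density = population / area_m2                      # persons per m^2

paws = iws_annual_m / density                       # m^3/yr per capita
print(round(paws, 1))
```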
We recognize that a degree of difference between IRWR and IWS is inevitable, first as a result of errors and uncertainties inherent in the data used to drive each index and second due to the differences in the manner in which available water is conceptualized and calculated. We hypothesize, however, that the two indices will mostly agree when available water per capita is grouped into different vulnerability classes using the established WSI threshold of Falkenmark (see Table 1). Section 3.2 highlights the differences between the IWS, IRWR, PAWS, and WSI.
Uncertainty Estimations
The uncertainty associated with each source of data used, that is, TRMM, ∆TWS, lake level estimates, LSM, and the calculated IWS, was assessed according to [52]. Specifically, we applied an additive model approach to decompose the total series (S_total) into its main constituents as follows:

S_total = S_trend + S_seasonal + S_residual. (10)

The standard deviation of the residual, (S_residual), was treated as a measurement error associated with each component. It is worth noting that the errors calculated in this manner may overestimate the actual error because the residual may contain sub-seasonal scale signals [59].
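One way to realize this additive decomposition is the classical trend/seasonal/residual split available in statsmodels; the sketch below applies it to a synthetic monthly series and takes the residual standard deviation as the error proxy, mirroring the procedure described above.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Synthetic monthly series with a seasonal cycle, a weak trend, and noise.
rng = np.random.default_rng(3)
idx = pd.date_range("2002-04-01", periods=120, freq="MS")
series = pd.Series(
    5.0 * np.sin(2 * np.pi * np.arange(120) / 12.0)
    + 0.05 * np.arange(120)
    + rng.normal(0.0, 1.0, 120),
    index=idx,
)

parts = seasonal_decompose(series, model="additive", period=12)
error = np.nanstd(parts.resid)   # residual standard deviation as the error proxy
print(round(float(error), 2))
```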
TWS Trend Estimation
The TWS trend was estimated using the non-parametric Mann-Kendall (MK) trend test [60]. The MK method is widely used for trend estimation [61], and the significance of the trend was tested using Sen's slope method. The Mann-Kendall statistic (S) for a time series x1, x2, . . ., xn is calculated as

S = Σ_{k=1}^{n−1} Σ_{j=k+1}^{n} Sgn(xj − xk),

where Sgn(xj − xk) equals 1 if (xj − xk) > 0, 0 if (xj − xk) = 0, and −1 if (xj − xk) < 0. The MK test establishes the presence and significance of a trend but not its magnitude. Therefore, we applied Sen's slope estimator, (Qi), to determine the magnitude of the trend in each xi with a statistically significant trend. The estimator is calculated as

Qi = median[(xj − xk)/(j − k)] for all j > k,

where xj and xk are as previously defined. The slope is measured over the N = n(n − 1)/2 pairs in the time series, and Qi is the median of these N values. We accessed the Sen's slope algorithm via CRAN (R project), using the spatialEco package for spatial analysis and modeling utilities according to [62].
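For illustration, the Mann-Kendall statistic and Sen's slope can be written in a few lines of NumPy, as sketched below on a synthetic trending series; the study itself used the R spatialEco package, so this is only a didactic stand-in.

```python
import numpy as np

def mann_kendall_s(x):
    # Sum of sign(x_j - x_k) over all pairs j > k.
    x = np.asarray(x, dtype=float)
    s = 0.0
    for k in range(len(x) - 1):
        s += np.sign(x[k + 1:] - x[k]).sum()
    return s

def sens_slope(x):
    # Median of the N = n(n-1)/2 pairwise slopes (x_j - x_k) / (j - k).
    x = np.asarray(x, dtype=float)
    n = len(x)
    slopes = [(x[j] - x[k]) / (j - k) for k in range(n - 1) for j in range(k + 1, n)]
    return float(np.median(slopes))

rng = np.random.default_rng(4)
y = 0.2 * np.arange(30) + rng.normal(0.0, 0.5, 30)   # noisy upward trend
print(mann_kendall_s(y), round(sens_slope(y), 3))
```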
Temporal and Spatial Patterns of ∆TWS
To explore the temporal variation of the GRACE-TWS data, Figure 2 compares the monthly TWS series against lake level altimetry observations. The results show the agreement (R²) between TWS and the lake level observations varying between 0.66 and 0.77, all strongly statistically significant (p < 0.001). This agreement is noteworthy given the small size of the lakes relative to the GRACE footprint. Other important characteristics of the observed lake level time series, such as trends (e.g., Lake Malawi and Lake Tanganyika) and abrupt shifts (e.g., Lake Victoria), are also accurately replicated in the TWS observations. On the other hand, the amplitudes are not consistently perfectly matched. This is not surprising, given likely discrepancies between lake surface areas and the GRACE footprint, as well as the fact that GRACE-TWS integrates all changes in surface and groundwater storage, as well as the variation related to anthropogenic impact [59].
Since GRACE cannot distinguish between anomalies resulting from surface, soil moisture, or groundwater storage, Noah-LSM outputs were used to remove surface and soil moisture storage from GRACE-TWS following Equation (6). The temporal variation of the GWS anomalies was compared to in-situ observations of the depths to groundwater levels from twenty groundwater wells in Northern Ghana (Figure 2, plot 7). The two series show good temporal consistency, with an R² value of 0.70 (p < 0.001), similar to the degree of agreement between TWS and the lake level measurements.
Spatially, Figure 2 shows the trend of TWS evolution across Africa. Areas of significantly decreasing trend in TWS anomalies are observed in the semiarid and arid regions of North Africa (i.e., the Nubian Aquifer and South Tunisia). A large southwest-to-northeast oriented region of negative TWS anomalies extends from the Congo basin to South Sudan. Lake Malawi, Southern Mozambique, and the Limpopo river in Southern Africa display a negative TWS trend. On the contrary, areas of significant positive TWS trend cover most of the Sahel region in West Africa, along with a large southwest-to-northeast band of positive anomalies running from the Okavango river delta in the southwest, through Lake Tanganyika and Lake Victoria, and further northeast to Lake Tana. These observations of the TWS trend across Africa were confirmed as well by the temporal patterns from the lake observations. Furthermore, existing studies have reported similar observations of the TWS trends in Africa (i.e., [38,52,59,63]). However, additional studies are needed to establish the cause(s), as well as the associated impacts, of these temporal and spatial patterns of TWS trends across Africa.
Comparison of IWS, IRWR, PAWS, and WSI
Figure 3 plots the magnitude of uncertainties associated with ∆TWS (TWSA), IWS, and precipitation (Precip). The data have been arranged left to right by decreasing country area (Figure 3A) and increasing humidity levels according to AI (Figure 3B). The results show that the uncertainty in the TWSA and IWS data lies within three averages: ±2 cm, ±4 cm, and ±6 cm, respectively (see the different shades of red in Figure 3). Uncertainty in precipitation is low in all countries (<±1 cm), except for Nigeria, Côte d'Ivoire, and Congo. Significantly, the uncertainty in all data sources increases in inverse proportion to country size (R² = 0.23, p < 0.0001) and in direct proportion to the countries' aridity (R² = 0.57, p < 0.0001). These findings are consistent with the results of other studies (e.g., [52]), which have also reported larger uncertainties as basin size decreases. Confounding the situation, however, is the fact that the magnitude of uncertainty in the arid zone countries, for example, Egypt, Libya, Western Sahara, and Eritrea, is also relatively small. Since some of the largest countries in Africa by area are also among the most arid, it is unclear how aridity and size each affect uncertainty. This is an important area of further research because the results may have implications for the calculation and interpretation of water vulnerability and scarcity using GRACE data.
Figure 4A compares the GRACE-estimated IWS to the AQUASTAT-IRWR data by country. The results show three sets of observations: (1) good agreement between the calculated IWS and the IRWR in twenty-three countries (p < 0.0001), (2) overestimation of IWS relative to the IRWR in thirteen countries, and (3) underestimation of the IWS compared to the IRWR in twelve countries. Spatially, most of the countries where IWS 'overestimates' relative to IRWR are in arid areas (e.g., Libya, Niger, Kenya, Somalia, Namibia) (Figure 4B). We hypothesize that this result likely indicates that IWS includes additional groundwater resources within these countries that are not included in IRWR. Conversely, the countries where IWS 'underestimates' generally have very large populations demanding more water resources (e.g., Egypt, Nigeria, Congo). These observations underscore the contribution of the IWS to updating the water resources status of each African country. The countries' water scarcity classification based on the PAWS indicator shows that 27 countries follow a similar pattern compared to the WSI index (Figure 4C).
Figure 5 shows the status of water availability in Africa by country based on the WSI (Figure 5A) and PAWS (Figure 5B). Both plots utilize the same water vulnerability thresholds (Table 1). The results show that both the WSI and PAWS classify twenty-six countries (54%) into the same water vulnerability class (Figure 5C). Much of this agreement is driven by the countries classified as experiencing 'no stress', out of which eighteen (69%) are classified similarly. Of the remaining, one country (Tanzania) is classified by both PAWS and WSI as "vulnerable", three countries are classified as stressed (Eritrea, Malawi, and South Africa), and four are classified as scarce in both indices (Egypt, Tunisia, Rwanda, and Burundi). In twenty-two countries (45%), however, the two indices lead to a different water vulnerability status. For instance, the PAWS index leveled up twelve countries in their water vulnerability status, while ten countries were leveled down compared to the WSI indicator (see Figure 5).
The above patterns reveal important differences between PAWS and WSI. For example, considering both the scarcity and stressed categories, the agreement in the countries classified similarly is 54%, implying that the methods agree more than they disagree. Based on the water vulnerability levels, the PAWS index revealed that about 14% of the African population, ~160 million people, currently live under a water scarcity status. Meanwhile, according to WSI, about 20% of the African population, ~250 million people, currently live under water scarcity conditions. Research is needed to clarify areas of disagreement in water vulnerability status classification between the proposed PAWS and existing methods based on conventional data. Meanwhile, the differences are mainly attributed to the dynamic changes recorded by the GRACE-based IWS between 2002 and 2016 (Figure 5C). Moreover, the apparent high level of agreement in the countries classified as 'no-stress' may simply be due to the fact that this category is large and unbounded on the upper end, allowing many more countries to be grouped together. As noted previously, these differences are not surprising. The PAWS estimate accounts for all forms of water, including soil moisture and groundwater in deep aquifers, while WSI relies overwhelmingly on the portion of water influx within the system. The WSI likely underestimates, especially, the groundwater component due to poor data availability and quality. Furthermore, the runoff and flow measurements are highly susceptible to measurement and calibration errors. In contrast, not all of the water available in storage as measured by PAWS is extractable, for technical and economic reasons. Therefore, the method likely overestimates real or useable available water. Further research is also required to reconcile these inconsistencies to facilitate decision making and planning regarding the water vulnerability status of African countries.
The PAWS index has been utilized to develop first-order estimates of possible water scarcity levels due to projected climate change and population growth in Africa. A 10% decrease in future water resources, which is within the range of several climate projections for some countries [64], is applied to the projected populations of the years 2025 and 2050. Figure 6 shows that the total water resources availability in the year 2025 leads to a ~100% increase in the number of countries experiencing water scarcity, from five to ten countries. This implies that ~37%, or ~600 million people, of Africa's population would be affected. Meanwhile, the number of countries under scarcity conditions increases by ~280% for the year 2050, from five to nineteen countries. This means that ~57% of Africa's population, or about 1.4 billion people, will deal with an extreme water crisis. Within the water scarcity continuum, for 2025, the number of countries experiencing water stress decreases from twelve to nine, while thirteen are classified as vulnerable. Interestingly, the 2025 projections reveal that seventeen countries which are currently classified as "no stress" still lie under the same water scarcity category. The projections for the year 2050 show that the total number of countries experiencing "no stress" status declines significantly from seventeen to seven countries, meaning that ~85% of Africa will face a dangerous water scarcity situation by 2050.
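The scenario arithmetic can be illustrated with a short helper that rescales a per-capita PAWS value by the assumed 10% resource cut and by population growth, then re-applies vulnerability thresholds; the thresholds below are the commonly cited Falkenmark values and may differ from the paper's Table 1, and all inputs are placeholders rather than country estimates.

```python
# Falkenmark-style thresholds in m^3/yr per capita; these are the commonly
# cited values and may not match the paper's Table 1 exactly.
def classify(per_capita):
    if per_capita < 1000:
        return "scarce"
    if per_capita < 1700:
        return "stressed"
    if per_capita < 2500:
        return "vulnerable"
    return "no stress"

def projected_paws(paws_now, pop_now, pop_future, water_cut=0.10):
    # Scale current per-capita availability by the assumed resource cut and
    # by the ratio of current to projected population.
    return paws_now * (1.0 - water_cut) * (pop_now / pop_future)

paws_2016 = 2400.0                                         # placeholder, m^3/yr per capita
print(classify(paws_2016))                                 # "vulnerable"
print(classify(projected_paws(paws_2016, 30e6, 45e6)))     # "stressed" under the scenario
```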
These future scenarios present a sobering picture of the precarious situation of water availability in Africa given rapid population growth. Fortunately, it is highly unlikely that the entire continent will experience a 10% decrease in water resources availability everywhere. Even so, for some countries, one or more of these scenarios are within the range of past experience. For example, during the peak of the Sahel droughts of 1970 to 1985, precipitation decreased by 30% [65,66], suggesting that such a magnitude of change is possible again at some point in the future. Moreover, a number of climate change projection scenarios suggest a decrease in precipitation over Northern Africa and the western parts of Africa [67], while Eastern and Southern Africa are highly likely to experience increased precipitation by the end of the 21st century [68,69].
Conclusions
Availability of freshwater resources is critical for assuring human wellbeing, socio-economic development, and food security. This pivotal and ubiquitous role leads to great interest in determining as accurately as possible the status of freshwater resources availability as a basis for developing policies for planning and water resources utilization or allocation. Currently, the most widely used method of obtaining this information relies on measurements of the fluxes of water entering and exiting a country. Unfortunately, the requisite data tend to be unavailable, discontinuous over space and time, inaccessible for reasons of conflict or political decisions, and frequently collected by different agencies using different reference periods and standards.
In this paper, we demonstrated the use of GRACE anomalies and TRMM precipitation estimates for calculating available renewable water resources for Africa. The proposed approach overcomes many of the limitations identified above. The data are accessible, continuous over space and time, and collected based on a consistent methodology and reference period. Even so, the method is not without limitations. Critically, it estimates potential available fresh water only in a hydrologic or physical sense. That is, it does not address the political and power relations that make water actually available or accessible. Additionally, the method as presented deals with water scarcity at the country level, an often-cited criticism of many existing methods. While the methodology is perfectly capable of being applied at finer temporal, political, and geographic units, we elected to focus on the country level because of the availability of the AQUASTAT-IRWR data against which we have compared our results. The major findings can be summarized as follows:
1. Estimates of TWS derived from GRACE appear to be affected by country size and aridity. The magnitude of uncertainty associated with input data increases as the country size decreases. However, the relationship is complicated by the fact that many of Africa's largest countries inhabit the most arid zones. Either factor has a physical basis. Confidence in GRACE estimates decreases as the study domain shrinks to below 200,000 km², generally accepted as the GRACE footprint. Similarly, the small range of variability in available water typical in arid regions leads to smaller uncertainty in estimated TWS. Further research is needed to establish the relative effects of scale and aridity on GRACE anomalies.
2. With the above caveat in mind, the PAWS approach classifies 26 out of 48 countries in the same water vulnerability category as AQUASTAT-IRWR. Of the remaining countries, a strong majority was classified in the adjoining or bordering category, suggesting that the hard thresholds contribute to some of the differences in classification. On the other hand, much of the agreement between the two methods is driven by the large no-stress category, which acts as a sort of catchall group. This suggests, perhaps not unexpectedly, that the differences between the two methods are accentuated when using small ranges for categorization. Clearly, however, there are fundamental differences between WSI and PAWS, which reflect how available water is conceptualized and calculated.
3. Compared to the IRWR, PAWS results in a more moderate assessment of water resources scarcity in the arid areas. This is not surprising, given the spatial continuity of the PAWS estimates compared to the country-averaged IRWR. Additionally, we suspect that the PAWS index integrates a larger proportion of groundwater, accounting for the difference.
4. The PAWS can be used to rapidly develop first estimates or scenarios of possible water scarcity due to climate change and population growth. A 10% decrease in future water resources, which is within the range of several climate projections for some countries, may entail a significant increase in the number of additional countries facing water scarcity. Preliminary analysis suggests that it is possible to partition GRACE signals to yield proxy estimates of groundwater measurements, although more data are needed in different climatic zones in order to develop robust calibration. Additional research is needed to expand and validate the promise shown by these preliminary estimates, including, for example, the ability to partition GRACE signals to derive proxies for groundwater level dynamics and to investigate water scarcity at finer spatial and temporal time scales.
Figure 1 .
Figure 1. A conceptual framework to estimate the net water storage using flux budgeting (A), as the difference between the input and output of the water in the system, and Gravity Recovery and Climate Experiment (GRACE)-based changes in water storage (B). TRWR: Total Renewable Water Resources; Qin: inflow; Qb: baseflow; Qout: outflow; I: infiltration; Gin: groundwater-in; Gout: groundwater-out; TWS: Total Water Storage; SN: Snowpack; SW: Surface Water; GW: Ground Water; SM: Soil Moisture.
Figure 2 .
Figure 2. Total Water Storage (TWS) trend across Africa between 2002 and 2016, derived using the Center for Space Research (CSR)-M data. The trend map shows a varying TWS across Africa, with a remarkable decline in North Africa, the Congo basin in the west, Lake Malawi, the Limpopo river basin, and Madagascar in Southern Africa. There is a positive increase in TWS in the Sahel region in the west, the Okavango river basin in the south, Lake Victoria, and Lake Tana. The TWS signals from CSR-M were compared with lake water level (LWL) observations across six major lakes. Noteworthy, the majority of the lakes have areal coverage less than the original Gravity Recovery and Climate Experiment (GRACE) satellite footprint; however, there is good consistency between GRACE signals and lake level anomalies (p-value < 0.0001). The uncertainty bounds are computed for the TWS and LWL as introduced in Section 2.5. Additionally, groundwater storage estimates from GRACE, based on Equation (6), were compared to groundwater levels (meters below ground level, MBGL) averaged from 20 groundwater wells in North Ghana. The in-situ groundwater observations cover the period from December 2005 to December 2009. The GRACE-based groundwater observations show good agreement with the in-situ data (p-value < 0.0001).
Figure 3 .
Figure 3. Uncertainty estimates of all variables calculated according to Equation (10), relative to country area (A) and the aridity index (B). The aridity is calculated as the ratio P/PET and classified following [54]. Areas of red shades indicate the average uncertainties for the Total Water Storage Anomaly (TWSA) and Internal Water Storage (IWS) data.
Figure 4 .
Figure 4. Estimated countries' Internal Water Storage (IWS) versus the AQUASTAT Internal Renewable Water Resources (IRWR) (A); this plot indicates the three main classes of the estimation (agreed, overestimate, and underestimate), and the spatial distribution of these classes is shown in the map (B). Based on the calculated IWS, the Potential Available Water Storage (PAWS) follows the same pattern when compared to the Water Scarcity Index (WSI) indicator (C).
Figure 5 .
Figure 5. Water Stress Index based on the current AQUASTAT water storage data (A), and the newly proposed Potential Available Water Storage (PAWS) from Gravity Recovery and Climate Experiment (GRACE) data (B), using the average Internal Water Storage (IWS) from 2002 to 2016. The plot (C) shows the changes in the IWS between 2002 and 2016 for the 26 countries that agree on the water status level ("no stress" 18 countries, "vulnerable" one country, "stressed" three countries, and "scarce" four countries).
Figure 6 .
Figure 6. Future water status based on a 10% decrease in the available water resources based on the Potential Available Water Storage (PAWS) index and Africa's projected populations for 2025 (A) and 2050 (B).
Table 3 .
Sources and information about the utilized data.
|
v3-fos-license
|
2023-06-02T15:24:41.938Z
|
2023-01-01T00:00:00.000
|
259004654
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1051/e3sconf/202338906006",
"pdf_hash": "5660bb35695e241bb1b371b551fb928163de391e",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42381",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"sha1": "b0c3afda05027dd58138cf6cfdef2aa277ab03e3",
"year": 2023
}
|
pes2o/s2orc
|
Coloristic properties of decorative aluminum coatings applied by cold gas dynamic spraying (CGDS)
When making objects of environment design with metal coatings, it is important to consider not only the protective properties of the coatings, but also the decorative ones. The use of aluminum CGDS coatings with subsequent tinting on metal environment design castings will not only increase the corrosion resistance of artistic products, but also give them new aesthetic qualities.
Introduction
The use of protective and decorative coatings for environment design objects made of metal makes it possible to solve many problems in the field of technical aesthetics and design. In the production of environment design objects made of metal, it is important to consider not only the protective properties of the coatings used, but also the decorative ones.
Materials and methods
One of the methods of applying protective coatings is cold gas-dynamic spraying [1][2][3][4][5]. The advantage of this method is the low temperature during spraying, which eliminates deformation of the product. The method makes it possible to apply coatings both on limited parts of products and on surfaces of considerable size, as well as to give them the necessary protective and decorative properties [6][7][8]. Standardized powder materials produced by the Obninsk Powder Spraying Center are used for spraying. When evaluated visually, sprayed aluminum coatings are not distinguished by a wide range of colors. To enhance the color performance of the sprayed aluminum coatings, chemically active toning solutions were used. Studies have shown [9] that the use of these toning compositions allows expanding the color palette of aluminum surfaces, but there is no information on the use of chemically active compositions on sprayed surfaces, which confirms the relevance of this study. Toning of the aluminum coating makes it possible to obtain colors from bronze to dark brown depending on the time of exposure to the chemical solutions [10].
The study selected a powder material of grade A-30-01 containing 70% aluminum and 30% corundum. The presence of corundum in the powder contributes to the best anchoring of particles on the surface. The coatings were applied to 40×20 mm metal substrates. Spraying speed 0.5-1 m/s, heating temperature 400-500°С, powder dispersion 15-25 microns. The toning compositions used in the art processing of metals were selected for toning of the aluminum CGDS coatings [6][7][8][9][10]. Detailed formulations of the chemical compositions are presented in Table 1 (for example, solution No. 3: chrome (VI) oxide, 3 g/l; sodium silicon fluoride, 3 g/l; golden color). The solutions were applied to mechanically treated samples. For best results and the most even tone formation, and to investigate the effect of roughness on color, the sample surface was machined to a certain roughness range (Ra and Rz, microns) using an abrasive tool.
X-ray structural analysis was performed to determine the structural components of the toned coatings. The combined coating samples were imaged using an ARL X'tra diffractometer (serial number 135).
Results and discussion
The effect of surface roughness Ra between 0.399 microns and 6.322 microns on gloss was investigated. On the untreated aluminum surfaces with roughness Ra 6.322 microns, the gloss was 7%; on the polished surfaces with roughness Ra 0.399 microns, it was 25%. We determined the class of surface cleanliness in accordance with GOST 2789-59 [15]. Experimental data are shown in Table 2. Toning solutions were applied with a brush, at operating temperatures of 25 °C and 60 °C.
The dependence of the color characteristics on the solution temperature was determined. The results showed that the solution of cobalt acetate and potassium permanganate tones aluminum coatings in colors from bronze to black-brown depending on the concentration and temperature of the solution. Tests were conducted with heated solution #1, containing 50 g/L of cobalt acetate and 25 g/L of potassium permanganate, while varying the duration of exposure on the samples.
Analysis of the results after application of the toning composition showed that the use of chemically active compositions at 60°C tinted the aluminum coatings in colors from dark brown to black. Using the Lab chromaticity coordinates, the lightness L was determined. Analysis of the spectrophotometric results showed that the L coordinate varied from 37.32 to 12.32, which corresponds to a dense dark-colored coating.
Application of similar compositions at 25°C produces gold-bronze shades whose intensity can be varied by the duration of exposure to the solution (Table 3).
To enhance the color properties, the aluminum coatings were also treated with solutions of phosphoric acid with potassium fluoride and chrome (VI) oxide for greenish hues, and with a solution of chrome (VI) oxide and sodium silicon fluoride for golden hues.
Analysis of the results showed that the coloristic properties of sprayed aluminum coatings can be expanded by applying chemically active solutions. The effects of the compositions on the color of the sprayed aluminum coatings, depending on exposure time and surface roughness, are presented in Table 3.
The color change of the aluminum coatings upon application of chemically active solution No. 1 occurred according to chemical reactions (1) and (2). The dark color of the coating on aluminum can be explained by two redox reactions of metallic aluminum, one with potassium permanganate and one with cobalt acetate. The formation of the compounds CoOOH and MnO2 corresponds to a dark brown color. The reaction after application of solution No. 2 proceeded according to formula (3): the formation of CrPO4 corresponds to black, CrF3 to green, and AlPO4 to white flecks.
The effect of solution No. 3 on the color is described by chemical reaction (4) (green and white reaction products). The coloristic components were measured in the Lab, XYZ and RGB color models, each of which defines a color by three components.
Quantitative color values of the sprayed aluminum coatings as a function of toning solution composition, exposure time and roughness are presented in Table 4. From the Lab coloristic data, the effect of the toning compositions on the lightness L was determined. Comparative analysis at roughness Ra = 5-6.3 microns showed that the coating treated with reagent No. 3, consisting of chrome (VI) oxide and sodium silicon fluoride, gives the highest L values at different exposure times and a temperature of 25°C. The samples coated with cobalt acetate and potassium permanganate showed the lowest L values, which confirms the formation of the compounds CoOOH and MnO2 on the aluminum surfaces.
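As an illustration of how such lightness comparisons can be reproduced from raw colorimetric readings, the sketch below converts XYZ measurements to Lab under an assumed D65 illuminant; the sample values are hypothetical placeholders for spectrophotometer output, not measured data from this study.

```python
import math

# Reference white for the D65 illuminant (2° observer), an assumption;
# the paper does not state which illuminant was used.
XN, YN, ZN = 95.047, 100.0, 108.883

def _f(t):
    # Piecewise function from the CIE Lab definition.
    return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

def xyz_to_lab(x, y, z):
    """Convert CIE XYZ tristimulus values to CIE Lab."""
    fx, fy, fz = _f(x / XN), _f(y / YN), _f(z / ZN)
    L = 116 * fy - 16
    a = 500 * (fx - fy)
    b = 200 * (fy - fz)
    return L, a, b

# Hypothetical XYZ readings for two toned samples (illustrative only).
dark_sample = (9.5, 10.0, 9.0)      # e.g. cobalt acetate / permanganate tone
golden_sample = (32.0, 30.0, 12.0)  # e.g. chrome oxide / silicon fluoride tone

for name, xyz in [("dark", dark_sample), ("golden", golden_sample)]:
    L, a, b = xyz_to_lab(*xyz)
    print(f"{name}: L={L:.1f}, a={a:.1f}, b={b:.1f}")
```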
This experimental research also demonstrated the possibility of an alternative color measurement method that involves mobile measurements in the field using only a smartphone, which is very convenient in today's realities. To adapt this method, it is necessary to develop measures for creating standard measurement conditions and for reproducing the required conditions. A preliminary trial of this method gives results comparable with those obtained with a spectrophotometer. At the same time, the nature of the deviations suggests that the smartphone results can be corrected and interpolated, and can then serve as a valid alternative to spectrophotometer measurements.
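The paper does not give the correction procedure; a minimal sketch of one plausible approach, fitting a linear correction from smartphone RGB readings to spectrophotometer Lab values by least squares, is shown below. All numeric arrays are illustrative placeholders, not measurements from this work.

```python
import numpy as np

# Hypothetical paired measurements of the same patches:
# rows = patches, columns = smartphone R, G, B (0-255).
phone_rgb = np.array([
    [201, 172, 118],
    [142, 101,  60],
    [ 88,  64,  41],
    [ 54,  43,  33],
], dtype=float)

# Corresponding spectrophotometer Lab readings (illustrative values only).
spectro_lab = np.array([
    [72.1,  4.3, 31.0],
    [48.5,  9.8, 28.4],
    [31.2,  8.1, 17.9],
    [18.7,  4.2,  7.5],
])

# Augment RGB with a constant term and solve Lab ~ [RGB, 1] @ M by least squares.
X = np.hstack([phone_rgb, np.ones((len(phone_rgb), 1))])
M, *_ = np.linalg.lstsq(X, spectro_lab, rcond=None)

def correct(rgb):
    """Map a smartphone RGB reading to an estimated Lab triple."""
    return np.append(np.asarray(rgb, float), 1.0) @ M

print(correct([120, 90, 55]))  # estimated L, a, b for a new reading
```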
Conclusion
Studies of changes in the RGB, Lab and XYZ color components (Table 4) depending on surface roughness have shown that surface cleanliness directly affects the color of the coatings. From the tabulated data, the chemical composition of the solution that gives the maximum value of a given color component can be determined. Analysis of Lab values for the aluminum coatings shows that the L coordinate, representing the color lightness, varies depending on the tinting compositions and the technological modes of their application. X-ray analysis made it possible to identify the chemical compounds that directly affect the color of the coatings.
Conclusions:
1. To assess the decorative properties, the parameters of the coating that directly affect its visual perception were identified: gloss, surface color, reflectivity and roughness. These parameters were considered and evaluated on aluminum surfaces.
2. The effect of the roughness of the coated layer of environment design objects on the reflectivity of aluminum CGDS coatings was established.
3. Quantification of the color coordinates makes it possible to reliably determine color in the design and restoration of products with specified color parameters.
4. Known recipes for patinating and coloring compositions were tested; by adjusting their exposure time and concentration, different effects can be achieved on aluminum coatings obtained by cold gas-dynamic spraying. It thus becomes possible to combine the manufacturability of any metal with the noble shades of bronze in one artistic product, while also protecting the castings from the effects of an aggressive urban environment.
5. As a result of this work, the basic technological parameters of color tinting of aluminum coatings were determined for obtaining color characteristics imitating brass and bronze, as used in the design and restoration of artistic products.
|
v3-fos-license
|
2024-05-19T15:17:28.144Z
|
2024-05-01T00:00:00.000
|
269869947
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1424-8220/24/10/3201/pdf?version=1715954914",
"pdf_hash": "2d72b1b78432723522032ca4d77afbba3db7a993",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42382",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"sha1": "5a1ae73be580d43cc900519186b5af3f56e8a66e",
"year": 2024
}
|
pes2o/s2orc
|
Detecting of Barely Visible Impact Damage on Carbon Fiber Reinforced Polymer Using Diffusion Ultrasonic Improved by Time-Frequency Domain Disturbance Sensitive Zone
Based on the decorrelation calculation of diffusion ultrasound in time-frequency domain, this paper discusses the repeatability and potential significance of Disturbance Sensitive Zone (DSZ) in time-frequency domain. The experimental study of Barely Visible Impact Damage (BVID) on Carbon Fiber Reinforced Polymer (CFRP) is carried out. The decorrelation coefficients of time, frequency, and time-frequency domains and DSZ are calculated and compared. It has been observed that the sensitivity of the scattered wave disturbance caused by impact damage is non-uniformly distributed in both the time and frequency domains. This is evident from the non-uniform distribution of the decorrelation coefficient in time-domain and frequency-domain decorrelation calculations. Further, the decorrelation calculation in the time-frequency domain can show the distribution of the sensitivity of the scattered wave disturbance in the time domain and frequency domain. The decorrelation coefficients in time, frequency, and time-frequency domains increase monotonically with the number of impacts. In addition, in the time-frequency domain decorrelation calculation results, stable and repetitive DSZ are observed, which means that the specific frequency component of the scattered wave is extremely sensitive to the damage evolution of the impact region at a specific time. Finally, the DSZ obtained from the first 15 impacts is used to improve the decorrelation calculation in the 16-th to 20-th impact. The results show that the increment rate of the improved decorrelation coefficient is 10.22%. This study reveals that the diffusion ultrasonic decorrelation calculation improved by DSZ makes it feasible to evaluate early-stage damage caused by BVID.
Introduction
Diffuse waves in a plate are guided waves resulting from multiple scattering of elastic waves in heterogeneous media, and they are highly sensitive to any structural disturbance [1]. Evaluating the damage level based on the decorrelation between the disturbed signal and a reference signal is an effective method [2].
Many studies have discussed scattering wave indicators for evaluating the damage level. Pomarède et al. [3] analyzed changes in relative wave velocity and the correlation of signals between reference and damaged states to detect microcracks in Carbon Fiber Reinforced Polymer (CFRP) caused by four-point bending tests. Wojtczak et al. [2] used the decorrelation of the coda signal in the time domain and frequency domain to evaluate the damage of a concrete cube under splitting conditions. Gao et al. [4] performed disbond detection of an aeronautical honeycomb composite sandwich by calculating the windowed cross-correlation in the time domain and the local power spectral density in the frequency domain for the direct wave and the coda wave. Spytek et al. [1] used synthetic time reversal of diffuse Lamb waves for a mean wavenumber estimation algorithm and used ultrasonic coda waves to perform damage imaging on aluminum and CFRP plates. However, the contribution of the vibration components at different frequencies remains undetermined. Spalvier et al. [5] utilized various features extracted from the cross-correlation function of multiply scattered signals to monitor the stress state in concrete pillars; these features include signal energy, cross-correlation amplitude, cross-correlation time and cross-correlation symmetry. Liu et al. [6] used a Taylor series expansion to perform low-cost cross-correlation calculations for analyzing the relative wave velocity changes of concrete cylinders under compression. He et al. [7] established a physics-based model for the relative velocity change of the coda wave subject to stress variation in multi-layer structures. Niederleithinger et al. [8] devised a step-wise coda wave interferometry method for tracking stress change and distribution in concrete beams. Her et al. [9] used the normalized coda wave energy of a single piezoelectric ceramic transducer to monitor bolt connections. Furthermore, mode conversion [10] can also be used as a damage indicator to evaluate structural integrity.
The scattering wave has different sensitivities to different positions on the specimen at different times [11]. A scattering wave sensitivity kernel model can be used to estimate the distribution of sensitive areas in the time domain and the space domain [12]. Defect detection and imaging can be realized by combining the decorrelation of signals between reference and damaged states with the sensitivity kernel model [13,14]. However, it is difficult to establish a sensitivity kernel model for a small heterogeneous specimen, since the real fiber distribution is affected by processing, making it hard to obtain a complete multiple scattering model [15].
Impact damage in CFRP usually forms internally, including intra-layer matrix cracking, inter-layer cracking and fiber breakage [16,17]. Impact damage on composites is commonly referred to as Barely Visible Impact Damage (BVID) [18]. As damage accumulates, the stiffness of CFRP decreases, and this degradation occurs in three stages. Initially, there is a rapid stiffness decrease due to matrix cracks, followed by a more gradual and slower degradation that typically accounts for the majority of the fatigue life. In the last part of the fatigue life, the material properties are drastically reduced and the stiffness loss accelerates [19,20]. Currently, there is a lack of research on the distribution of sensitive areas of scattered waves in the time-frequency domain and on the use of a Disturbance Sensitive Zone (DSZ) to improve detection sensitivity.
In this paper, impact fatigue damage on CFRP is taken as the research object, and the change of the time-frequency domain decorrelation of the scattering wave under different numbers of impacts is discussed. The repeatability of the time-frequency domain DSZ of the scattering wave and the possibility of improving subsequent damage monitoring are also discussed. The improved decorrelation calculation results are compared with the decorrelation results in the time domain, frequency domain and time-frequency domain. This work provides an experimental basis for evaluating BVID with scattering wave time-frequency domain decorrelation methods improved by the DSZ.
The rest of this paper is organized as follows. Section 2 introduces the experimental steps and the specimens used. Section 3 analyzes the experimental results, compares the decorrelation DC in the time domain, frequency domain and time-frequency domain, and verifies the feasibility of using a prior DSZ to improve DC in the time-frequency domain. Finally, Section 4 concludes the study.
Materials and Methods
In order to produce different levels of impact fatigue damage in the specimen, a stainless steel ball with a mass of m = 0.905 kg and a diameter of D = 60 mm was used to impact the specimen. Using a falling ball to produce different levels of impact fatigue damage is a feasible approach [21]. The single impact energy was greater than 8 J. The specimen is CFRP with a size of 200 × 40 × 3 mm. The main properties of the specimen are shown in Table 1; these property values were provided by the supplier. The impact ball falls freely from a height of H = 1000 mm and is moved away quickly after it bounces up. During each impact, only one contact occurs between the specimen and the impact ball. The impact process was carried out in a PVC guide tube with a length of L = 1000 mm and an inner diameter of Dpipe = 66 mm. The schematic diagram of the impact process and the diffusion ultrasonic propagation path is shown in Figure 1. The impact region is located in the center of the specimen. There is no obvious impact pit or damage on the surface of the impact region, while BVID is present on the back of the impact region. The BVID was observed with an X4D-Z03B042-D 1600× optical microscope (OM) produced by RIEVBCAU, as shown in Figure 2.
The setup of the experiment is shown in Figure 3a, and the equipment wiring is shown in Figure 3b. Two PZT5A piezoelectric ceramics with a diameter of 10 mm and a thickness of 4 mm were fixed on the specimen using 801 chloroprene glue (AILIKE/801), as shown in Figure 3c. The wiring of the experimental process is shown in Figure 3d. A signal generator (Tektronix AFG3052C, Beverton, OR, USA) is used to generate a sweep signal of 200~400 kHz with a duration of 0.4 ms. The sweep signal is amplified by a power amplifier (Falco Systems WMA-300, Katwijk aan Zee, The Netherlands) and connected to the emitter probe. The receiver probe is connected to the oscilloscope (Tektronix MDO4034C). The signal is collected at a sampling rate of 500 MHz and averaged 128 times to remove the influence of random noise. The control specimen was not subjected to the impact test, but all other steps were the same as for the experimental specimen, and the signal of its receiving probe was collected synchronously with the experimental specimen. A total of 20 impact tests were carried out. The ambient temperature was between 20.8 °C and 21.2 °C during the first 15 impact tests and between 20.6 °C and 20.8 °C during the 16-th to 20-th impact tests. The change of ambient temperature is small, so its influence can be excluded.
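As a small illustration of the excitation and acquisition described above, the sketch below generates a 200~400 kHz linear sweep of 0.4 ms duration and averages repeated noisy acquisitions, which is the role the 128-fold averaging plays here. The noise model and the synthetic "received" waveform are assumptions.

```python
import numpy as np
from scipy.signal import chirp

fs = 500e6                      # oscilloscope sampling rate, Hz
t = np.arange(0, 0.4e-3, 1 / fs)

# 200-400 kHz linear sweep used as the excitation signal.
excitation = chirp(t, f0=200e3, f1=400e3, t1=t[-1], method="linear")

def averaged_acquisition(true_response, n_avg=128, noise_std=0.1, rng=None):
    """Average repeated noisy acquisitions to suppress random noise."""
    rng = np.random.default_rng() if rng is None else rng
    acc = np.zeros_like(true_response)
    for _ in range(n_avg):
        acc += true_response + noise_std * rng.standard_normal(true_response.size)
    return acc / n_avg

# Placeholder 'received' signal: attenuated, delayed copy of the excitation.
received = 0.2 * np.roll(excitation, 5000)
print(averaged_acquisition(received).shape)
```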
Results
The signal collected by the receiver probe is shown in Figure 4. According to the propagation time, the signal can be divided into the direct wave, the coda wave (multiply scattered wave) and noise. The ultrasonic wave attenuates rapidly when propagating in CFRP, and the coda wave is very short. Therefore, the decorrelation calculation is applied to the part of the signal before the noise (0~0.62 ms).
The wavelength of the ultrasonic signal is of the same order of magnitude as the thickness of the specimen. The shear wave and the longitudinal wave are reflected and superimposed between the upper and lower surfaces to form a special stress wave, namely the Lamb wave. Carbon fiber plate is an anisotropic composite material; the speed of the ultrasonic wave propagating inside it depends on the direction, and its true dispersion curve is complex [22]. Taking a shear wave velocity of 3 km/s and a longitudinal wave velocity of 5 km/s as an example, the dispersion curve of an isotropic plate with a thickness of 3 mm is drawn in Figure 5a. S0 and S1 represent the 0th and 1st order symmetric mode Lamb waves, and A0 and A1 represent the 0th and 1st order antisymmetric mode Lamb waves. The propagation velocity of the Lamb wave changes with the frequency-thickness product. Figure 5a is only used to illustrate the dispersive character of Lamb wave propagation and is not the real dispersion curve of the ultrasonic wave in the carbon fiber plate. The distribution of the disturbance sensitive zone in the time-frequency domain is related to the Lamb wave dispersion, the impact damage location, the distance between the transducers, etc. The spectrum of S1(t) is shown in Figure 5b. Multiple peaks can be observed in the figure; they are related to the resonant frequencies of the piezoelectric ceramic and the specimen. The frequency range of the excitation signal is 200~400 kHz, so according to Figure 5a the expected modes are A0 and S0, with the direct waves being S0 at about 0.03~0.04 ms and A0 at about 0.06 ms. The second part of the signal mainly consists of waves (S0, A0) reflected at the edges and coda waves. The boundary between the second and third parts is more difficult to define, but at the end of the signal the field consists mostly of scattered waves, so the third part can be regarded as the "coda waves".
Time Domain Decorrelation
The decorrelation calculation method of coda wave interferometry is applied to the collected signals. The reference signal is the signal S1(t) corresponding to the first impact, and the disturbance signal is the signal SN(t) corresponding to the N-th impact. In the time domain, the decorrelation coefficient DCt(m,N) of the m-th window of the N-th impact is calculated as follows:

DC_t(m,N) = 1 - \frac{\int_{t_m}^{t_m+T_W} S_1(t)\,S_N(t)\,dt}{\sqrt{\int_{t_m}^{t_m+T_W} S_1^2(t)\,dt \,\int_{t_m}^{t_m+T_W} S_N^2(t)\,dt}}

where DCt(m,N) is the decorrelation coefficient corresponding to the m-th window of the N-th impact in the time domain, tm is the starting time corresponding to the m-th window with t0 = 0, TW = 6 us is the window length, the window overlap rate is O = 50%, and the time domain calculation range is 0~0.62 ms. The DC distribution in the time domain is shown in Figure 6a, where Figure 6b is the result of DCt(m,15) − DCt(m,1).
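A minimal numerical sketch of this windowed decorrelation calculation is given below. It assumes the reference and disturbed signals are already aligned NumPy arrays sampled at `fs`; the window length, overlap and the "1 minus normalized cross-correlation" form follow the description above, but the exact normalization used by the authors is an assumption.

```python
import numpy as np

def windowed_decorrelation(s_ref, s_dist, fs, t_w=6e-6, overlap=0.5, t_max=0.62e-3):
    """Time-domain decorrelation DC_t(m, N) over sliding windows.

    s_ref, s_dist : reference and disturbed signals (same length, same fs)
    fs            : sampling rate in Hz
    t_w           : window length in seconds (6 us in the paper)
    overlap       : window overlap rate (50% in the paper)
    t_max         : end of the analysed range (0.62 ms in the paper)
    """
    n_w = int(round(t_w * fs))                    # samples per window
    step = max(1, int(round(n_w * (1 - overlap))))
    n_max = min(len(s_ref), int(round(t_max * fs)))
    dc = []
    for start in range(0, n_max - n_w + 1, step):
        a = s_ref[start:start + n_w]
        b = s_dist[start:start + n_w]
        denom = np.sqrt(np.sum(a * a) * np.sum(b * b))
        cc = np.sum(a * b) / denom if denom > 0 else 1.0
        dc.append(1.0 - cc)                       # decorrelation of the m-th window
    return np.array(dc)

# Example with synthetic signals (placeholders for the measured waveforms).
fs = 500e6
t = np.arange(0, 0.62e-3, 1 / fs)
ref = np.sin(2 * np.pi * 300e3 * t) * np.exp(-t / 2e-4)
dist = ref + 0.05 * np.random.randn(t.size)
print(windowed_decorrelation(ref, dist, fs)[:5])
```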
Frequency Domain Decorrelation
The Fourier transform of the signal is

X_N(f) = \int_{0}^{T_e} S_N(t)\, e^{-i 2\pi f t}\, dt

where the end point of the calculation is Te = 0.8 ms and XN(f) is the spectrum of the signal corresponding to the N-th impact. In the frequency domain, the decorrelation coefficient DCf(m,N) of the m-th window of the N-th impact is calculated analogously to the time-domain case, with the window applied to the spectra:

DC_f(m,N) = 1 - \frac{\int_{f_m}^{f_m+f_W} |X_1(f)|\,|X_N(f)|\,df}{\sqrt{\int_{f_m}^{f_m+f_W} |X_1(f)|^2\,df \,\int_{f_m}^{f_m+f_W} |X_N(f)|^2\,df}}

where fm is the starting frequency corresponding to the m-th window with f0 = 200 kHz, fW = 25 kHz is the window length, the window overlap rate is O = 95%, and the frequency domain calculation range is 200~400 kHz. The DC distribution in the frequency domain is shown in Figure 6c, where Figure 6d is the result of DCf(m,15) − DCf(m,1).
It can be seen from Figure 6 that DC is non-uniformly distributed in both the time domain and the frequency domain, and there are sensitive areas where the DC value rises rapidly. There are multiple discrete sensitive zones in the time domain DCt. There are two obvious sensitive zones in the frequency domain DCf, near 260 kHz and 350 kHz respectively. Many factors, such as the damage location, the probe positions, and the resonant frequencies of the piezoelectric ceramics and the specimen, cause the non-uniform distribution of DC in the time domain and frequency domain.
Time-Frequency Domain Decorrelation
The short-time Fourier transform of the signal is

F_{m,N}(t,f) = \int S_N(\tau)\, g(\tau - m t_s)\, e^{-i 2\pi f \tau}\, d\tau

where g(t − mts) is a rectangular sliding window with a length TW = 500 us whose position is determined by mts, ts = 200 ns is the sliding step size, and m = 1, 2, 3, ..., k. Fm,N(t,f) is the complex amplitude of the signal SN(t) between ts and ts + TW on each frequency component. For the complex value Fm,N(t,f) of the frequency component f at time t, the form is Fm,N(t,f) = a + bi, the absolute amplitude is A = sqrt(a^2 + b^2), and the phase is p = arctan(b/a). The amplitude of each frequency component is then restored, normalized by the total number of sampling points Nsum, to give HN(t,f), the amplitude of the frequency component f at time t for the N-th impact signal. The calculation of the decorrelation DCt,f in the time-frequency domain is shown in Figure 7. The short-time Fourier transform and amplitude conversion of the reference signal S1(t) (Figure 7a1) and the disturbance signal SN(t) (Figure 7a2) are performed to obtain H1(t,f) and HN(t,f), as shown in Figure 7b. The time-frequency domain decorrelation DCt,f is calculated by a kernel as follows:

DC_{t,f}(t,f,N) = 1 - \frac{\iint_{K(t,f)} H_1(t',f')\, H_N(t',f')\, dt'\, df'}{\sqrt{\iint_{K(t,f)} H_1^2(t',f')\, dt'\, df' \,\iint_{K(t,f)} H_N^2(t',f')\, dt'\, df'}}, \qquad K(t,f) = [t-t_h,\, t+t_h] \times [f-f_h,\, f+f_h]

where th = 2 us is half of the kernel length in the time axis direction and fh = 2 kHz is half of the kernel length in the frequency axis direction. The calculated DCt,f is shown in Figure 7c.
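The sketch below illustrates one way to implement this time-frequency decorrelation with `scipy.signal.stft` and a local averaging kernel. The "1 minus locally normalized cross-correlation of the two amplitude maps" follows the description above; the STFT parameters, kernel size and synthetic inputs are assumptions.

```python
import numpy as np
from scipy.signal import stft
from scipy.ndimage import uniform_filter

def tf_decorrelation(s_ref, s_dist, fs, nperseg=2500, hop=100):
    """Time-frequency decorrelation map DC_{t,f} between two signals."""
    # Amplitude maps H(t, f) of reference and disturbed signals.
    _, _, F1 = stft(s_ref, fs=fs, nperseg=nperseg, noverlap=nperseg - hop,
                    window="boxcar")
    _, _, F2 = stft(s_dist, fs=fs, nperseg=nperseg, noverlap=nperseg - hop,
                    window="boxcar")
    H1, H2 = np.abs(F1), np.abs(F2)

    # Local averages over a small time-frequency kernel (2*t_h by 2*f_h in the paper),
    # expressed here as a kernel of a few STFT bins; the constant factors cancel.
    k = (5, 5)
    num = uniform_filter(H1 * H2, size=k)
    den = np.sqrt(uniform_filter(H1 ** 2, size=k) * uniform_filter(H2 ** 2, size=k))
    with np.errstate(invalid="ignore", divide="ignore"):
        dc = 1.0 - np.where(den > 0, num / den, 1.0)
    return dc  # shape: (frequency bins, time frames)

# Synthetic example standing in for the measured 200-400 kHz sweep responses.
fs = 10e6
t = np.arange(0, 0.62e-3, 1 / fs)
ref = np.sin(2 * np.pi * 300e3 * t) * np.exp(-t / 2e-4)
dist = ref + 0.05 * np.random.randn(t.size)
print(tf_decorrelation(ref, dist, fs).shape)
```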
Disturbance Sensitive Zone
Taking the signal of the 1-st impact as the reference signal, the DCt,f of the 1-st to 15-th impacts is calculated according to the calculation process shown in Figure 7, and the results are shown in Figure 8. DCt,f increases with the number of impacts.
The increase of DCt,f is non-uniformly distributed in the time-frequency domain. DCt,f rises rapidly in some regions, and the position of these regions in the time-frequency domain is relatively stable. The changes of the time-domain decorrelation DCt, the frequency-domain decorrelation DCf and the time-frequency domain decorrelation DCt,f with the number of impacts in the experimental and control specimens are shown in Figures 9 and 10. Compared with DCt and DCf, DCt,f is more sensitive to impact fatigue damage and can better evaluate its evolution. In order to further discuss the regions where DCt,f rises rapidly in the time-frequency domain, the region that deviates from most values of DCt,f is regarded as the DSZ.
The calculation flow chart of the DSZ is shown in Figure 11a. The values in the upper quartile region deviate from the distribution of most values, which means that DCt,f in this region rises rapidly when the disturbance occurs. After a morphological closing and opening of the upper quartile region, the DSZ is obtained. DCt,f before processing is shown in Figure 11b, Figure 11c is the upper quartile region of DCt,f, Figure 11d is the result of the morphological closing of Figure 11c, and Figure 11e is the result of the morphological opening of Figure 11d. Figures 11c-e are binary maps, where the red area is the target area. The DSZs of the 2-nd to 15-th impacts are superimposed, and the distribution of the number of overlaps NDSZ in the time-frequency domain is shown in Figure 11f. The regions with NDSZ = 14 in the figure are regions where DCt,f rises rapidly in all the disturbance signals from the 2-nd impact to the 15-th impact. These regions are stable and highly repeatable DSZs in the time-frequency domain.
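A compact sketch of this DSZ extraction step, using an upper-quartile threshold followed by morphological closing and opening and an overlap count across impacts, is shown below. The quartile threshold and the structuring-element size are assumptions; the paper does not state the exact parameters.

```python
import numpy as np
from scipy.ndimage import binary_closing, binary_opening

def extract_dsz(dc_map, quantile=0.75, structure=np.ones((3, 3), bool)):
    """Binary Disturbance Sensitive Zone from one DC_{t,f} map."""
    mask = dc_map > np.quantile(dc_map, quantile)     # upper quartile region
    mask = binary_closing(mask, structure=structure)  # fill small holes
    mask = binary_opening(mask, structure=structure)  # remove isolated pixels
    return mask

def overlap_count(dc_maps):
    """Number of impacts N_DSZ in which each time-frequency cell lies inside a DSZ."""
    return np.sum([extract_dsz(m) for m in dc_maps], axis=0)

# Synthetic DC maps standing in for the 2-nd to 15-th impact results.
rng = np.random.default_rng(0)
maps = [rng.random((64, 64)) for _ in range(14)]
n_dsz = overlap_count(maps)
stable_dsz = n_dsz == len(maps)   # cells sensitive in every impact
print(stable_dsz.sum())
```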
It can be seen from Figure 6c that the sensitive region can be divided into two parts in the frequency domain. The disturbance-sensitive zones DSZl (frequency range 200~300 kHz, time 0~0.62 ms) and DSZh (frequency range 300~400 kHz, time 0~0.62 ms) were separated at 300 kHz for the analysis. In order to further analyze how the distribution characteristics of the DSZ change with the number of impacts, LDSZ(N) = (CPf, CPt, N) is used as the weighted average position of DCt,f in the DSZ of the N-th impact. CPt and CPf are calculated by Equations (7) and (8), respectively, as the DCt,f-weighted average time and frequency over the DSZ. For DSZl, fs = 200 kHz, fe = 300 kHz, ts = 0, te = 0.62 ms; for DSZh, fs = 300 kHz, fe = 400 kHz, ts = 0, te = 0.62 ms. The weighted center positions of DSZl and DSZh are LDSZl(N) and LDSZh(N), respectively. The distribution of LDSZl(N) and LDSZh(N) for the 2-nd to 5-th impacts is shown in Figure 12.
LDSZ(N) characterizes the distribution of DCt,f|DSZ in the time-frequency domain. LDSZl(N) and LDSZh(N) are projected onto the time domain to obtain CPl(t) and CPh(t), and onto the frequency domain to obtain CPl(f) and CPh(f). The variations of CPl(t), CPh(t), CPl(f) and CPh(f) with the number of impacts are shown in Figure 13, where the confidence level of the confidence ellipses is 95%. A confidence ellipse shows the distribution of data points; as the correlation between the two variables increases, the confidence ellipse is elongated in the direction of greater correlation. The confidence ellipse of the variables x and y is given by

\frac{1}{1-\rho^2}\left[\left(\frac{x-\bar{x}}{\sigma_x}\right)^2 - 2\rho\left(\frac{x-\bar{x}}{\sigma_x}\right)\left(\frac{y-\bar{y}}{\sigma_y}\right) + \left(\frac{y-\bar{y}}{\sigma_y}\right)^2\right] = c

where \bar{x} and \bar{y} are the mean values of x and y, respectively, σx and σy are the standard deviations of x and y, and ρ is the correlation coefficient of x and y. c is determined by the chi-square distribution, and c = 5.991 when the confidence interval is 95%. The confidence ellipses in this paper were drawn using Origin 2022 software. Observing the scatter points and confidence ellipses in Figure 13, as the number of impacts increases the DSZ shifts slightly backward in the time domain, which means that the response of the signal part with longer propagation time to the disturbance is strengthened. As the number of impacts increases, DSZl approaches 260 kHz and DSZh approaches 350 kHz in the frequency domain, consistent with the distribution of the decorrelation-sensitive areas in the frequency domain observed in Figure 6.
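A short sketch of how the weighted center of a DSZ and a 95% confidence ellipse can be computed is given below; the weighting by DCt,f and the chi-square threshold of 5.991 follow the description above, while the input arrays are placeholders.

```python
import numpy as np

def weighted_center(dc_map, mask, t_axis, f_axis):
    """DC-weighted average time CP_t and frequency CP_f over a DSZ mask."""
    w = np.where(mask, dc_map, 0.0)
    total = w.sum()
    tt, ff = np.meshgrid(t_axis, f_axis)   # dc_map indexed as (frequency, time)
    cp_t = (w * tt).sum() / total
    cp_f = (w * ff).sum() / total
    return cp_t, cp_f

def confidence_ellipse(x, y, c=5.991):
    """Center, semi-axis lengths and orientation of the c-level confidence ellipse."""
    cov = np.cov(x, y)
    vals, vecs = np.linalg.eigh(cov)        # eigen-decomposition of the covariance
    order = vals.argsort()[::-1]
    vals, vecs = vals[order], vecs[:, order]
    axes = np.sqrt(c * vals)                # semi-axis lengths for chi-square level c
    angle = np.arctan2(vecs[1, 0], vecs[0, 0])
    return (np.mean(x), np.mean(y)), axes, angle

# Placeholder scatter of weighted-center positions over successive impacts.
rng = np.random.default_rng(1)
cp_t = 0.3e-3 + 5e-6 * np.arange(14) + 1e-5 * rng.standard_normal(14)
cp_f = 260e3 + 2e3 * rng.standard_normal(14)
print(confidence_ellipse(cp_t, cp_f))
```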
DC Improving by Prior DSZ
DSZ2-15 is the region of the DSZs of the 2-nd to 15-th impacts that is stably repeated 14 times (present in every DSZ), i.e. where NDSZ = 14. The DCt,f in DSZ2-15 is extremely sensitive to the damage evolution of the impact area. Therefore, the DCt,f within the DSZ is analyzed, where DCt,f|DSZ denotes the DCt,f value in the DSZ.
The 16-th to 20-th impacts are the later stage of the continuous impact experiment. This part of the experiment is used to discuss whether the DSZ obtained from the previous impacts can improve the detection of the subsequent evolution of the impact damage. The impact fatigue damage of CFRP develops in three stages [18]. The damage at the initial stage of life and after 70% of the life increases rapidly with the number of impacts, whereas the damage evolution in the second stage is gentle and not obvious. Therefore, the decorrelation between the 16-th and 20-th impacts can be expected to change little. Taking the signal S16(t) of the 16-th impact as the reference signal, the decorrelation DC calculated for the signals of the 17-th to 20-th impacts is shown in Table 2. The time domain DCt, frequency domain DCf, time-frequency domain DCt,f and prior-DSZ-improved DCt,f|DSZ2-15 are shown in Figure 14.
It can be seen from Figure 14 that the use of a prior, stable and repeatable DSZ can further improve the monitoring of subsequent DC changes. The increase rate IR is calculated as the relative increase of the DSZ-improved decorrelation over the unimproved time-frequency domain decorrelation. The results show that using the prior DSZ to improve the subsequent DC gives higher sensitivity, which helps to further detect the evolution of impact fatigue damage in CFRP. The improved time-frequency domain DCt,f increase rate is 10.22% on average.
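The sketch below shows one way the prior DSZ can be applied: averaging the DCt,f map only over the stable DSZ mask and comparing it with the global average, with the increase rate taken as the relative gain. The exact averaging and increase-rate definitions used by the authors are assumptions.

```python
import numpy as np

def improved_dc(dc_map, dsz_mask):
    """Mean DC over the prior DSZ versus the whole time-frequency plane."""
    dc_global = dc_map.mean()
    dc_dsz = dc_map[dsz_mask].mean()
    increase_rate = (dc_dsz - dc_global) / dc_global * 100.0
    return dc_global, dc_dsz, increase_rate

# Placeholder map and mask standing in for DC_{t,f} of a 17th-20th impact
# and the stable DSZ_{2-15} mask obtained from the first 15 impacts.
rng = np.random.default_rng(2)
dc_map = 0.1 * rng.random((64, 64))
dsz_mask = np.zeros((64, 64), bool)
dsz_mask[20:30, 40:50] = True
dc_map[dsz_mask] += 0.05            # the DSZ reacts more strongly by construction
print(improved_dc(dc_map, dsz_mask))
```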
Conclusions
The evaluation of impact fatigue damage on CFRP using scattering waves was studied. The scattered wave signals under different numbers of impacts are used as reference and disturbance signals. Decorrelation calculations in the time domain, frequency domain and time-frequency domain are performed to evaluate the evolution of impact damage. The distribution characteristics of the disturbance sensitive zone in the time-frequency domain and the feasibility of using the disturbance sensitive zone to improve the subsequent decorrelation calculation are discussed. The following conclusions are obtained: (1) The DC in the time domain, frequency domain and time-frequency domain increases with the number of impacts, which indicates that DC in all three domains can be used to evaluate the evolution of impact damage. In addition, the DC in the time-frequency domain shows higher sensitivity to the damage evolution of the impact region than the DC in the time domain and the frequency domain.
(2) Sensitive regions where DC rises rapidly are observed in both the time domain and the frequency domain. Such sensitive regions can also be observed in the time-frequency domain, and their distribution characteristics LDSZ are consistent with those observed in the time domain and the frequency domain.
(3) Based on the prior, stable and highly repetitive disturbance sensitive zone, the decorrelation calculations of the time domain DCt, frequency domain DCf, time-frequency domain DCt,f and the prior-DSZ-improved DCt,f|DSZ2-15 are carried out for the 16-th to 20-th impact signals. The results show that the prior DSZ can further improve the sensitivity of the time-frequency domain DC to the damage evolution of the impact region, with an average increase rate of 10.22%.
The research results of this paper show that there are disturbance-sensitive zones in the time-frequency domain of the scattered wave that are extremely sensitive to the damage evolution of the impact region and are stable and repeatable. Using these DSZs to improve the calculation of the time-frequency domain decorrelation DCt,f is helpful for studying the evolution of impact fatigue damage on CFRP. Further research will be carried out on different types of composite materials in the future.
Figure 1 .
Figure 1. Impact process and diffusion ultrasonic propagation path.
Figure 8 .
Figure 8. (a) DCt,f of the 1-st impact. (b) DCt,f of the 2-nd impact. (c) DCt,f of the 3-rd impact. (d) DCt,f of the 4-th impact. (e) DCt,f of the 5-th impact. (f) DCt,f of the 6-th impact. (g) DCt,f of the 7-th impact. (h) DCt,f of the 8-th impact. (i) DCt,f of the 9-th impact. (j) DCt,f of the 10-th impact. (k) DCt,f of the 11-th impact. (l) DCt,f of the 12-th impact. (m) DCt,f of the 13-th impact. (n) DCt,f of the 14-th impact. (o) DCt,f of the 15-th impact.
Figure 9 .
Figure 9. Experimental specimen DC in time domain, frequency domain and time-frequency domain.
Figure 10 .
Figure 10. Control specimen DC in time domain, frequency domain and time-frequency domain.
Figure 11 .
Figure 11. (a) DSZ calculation flow chart. (b) DCt,f. (c) The upper quartile region of DCt,f. (d) Result of the morphological closing. (e) Result of the morphological opening. (f) The number of overlaps NDSZ.
Figure 13 .
Figure 13. The projection of LDSZ(N) in time domain and frequency domain.
Figure 14 .
Figure 14. Time domain DCt, frequency domain DCf, time-frequency domain DCt,f, prior DSZ improved DCt,f|DSZ2-15.
Table 1 .
The main properties of the specimen.
Table 2 .
The calculation results of decorrelation DC.
|
v3-fos-license
|
2022-11-05T15:10:51.151Z
|
2022-12-01T00:00:00.000
|
253341558
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://ieeexplore.ieee.org/ielx7/16/4358746/09926047.pdf",
"pdf_hash": "75aa62a93eda81bbc830664d27152015d41ea67a",
"pdf_src": "IEEE",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42385",
"s2fieldsofstudy": [
"Engineering"
],
"sha1": "cb1d295bdba2301e23c037ef82840c08fb75da97",
"year": 2022
}
|
pes2o/s2orc
|
IGBT Reverse Transfer Dynamic Capacitance
Small-signal capacitance in every datasheet of insulated gate bipolar transistor (IGBT) is not accurate for understanding IGBT's switching because the bipolar current in the device creates abnormal depletion profiles. IGBT's reverse transfer dynamic capacitance is extracted for the first time with a five-contact method. While small-signal capacitance does not allow any current flow in the drift region due to the ground gate, the dynamic capacitance is the output of the time-dependent bipolar carriers during the inductive switching. For this reason, the magnitude and the shape of the dynamic capacitance are quite different from the small-signal capacitance found in most of the commercial datasheet. The discrepancy between the dynamic C_GC and the small-signal C_GC raises a fundamental question of whether the small-signal C_GC in the datasheet is useful for considering the device's dynamic performance.
IGBT Reverse Transfer Dynamic Capacitance H. Kang
Abstract -Small-signal capacitance in every datasheet of insulated gate bipolar transistor (IGBT) is not accurate for understanding IGBT's switching because the bipolar current in the device creates abnormal depletion profiles. IGBT's reverse transfer dynamic capacitance is extracted for the first time with a five-contact method. While small-signal capacitance does not allow any current flow in the drift region due to the ground gate, the dynamic capacitance is the output of the time-dependent bipolar carriers during the inductive switching. For this reason, the magnitude and the shape of the dynamic capacitance are quite different from the small-signal capacitance found in most of the commercial datasheet. The discrepancy between the dynamic C GC and the small-signal C GC raises a fundamental question of whether the small-signal C GC in the datasheet is useful for considering the device's dynamic performance.
I. INTRODUCTION
Insulated gate bipolar transistors (IGBTs) have been one of the most popular devices in high-voltage power industries due to their high current capability and cheap fabrication cost [1], [2], [3], [4]. In an effort to optimize the ON-state conduction loss and the turn-off switching loss, several types of novel IGBT structures have been suggested [5], [6], [7], [8]. Since IGBTs' conducting and switching behaviors are based on bipolar (hole-electron) carrier movement, to interpret the devices' operating characteristics, a wide range of analytic modeling has been performed [9], [10], [11], [12], [13], [14], [15], [16]. Although those models were applied to very limited structures, indeed, they improved the understanding of the detailed bipolar conducting mechanism in IGBTs. In the transient modeling or the simulation of IGBTs, the small-signal reverse transfer capacitance C rss (or gate-to-collector capacitance) has been the main parameter for determining switching delay and speed. However, one fundamental question has not been resolved yet: is the small-signal capacitance enough for forecasting the dynamic switching performance? Small-signal measurement is carried out when the device is in OFF-state. The gate is shorted with the emitter and they are grounded. Only the collector voltage increases continuously by applying a small ac perturbation. The measured small-signal capacitance is the same as the series combination of the oxide capacitance and the depletion capacitance. The small-signal measurement, therefore, does not allow a current flow in the drift region (conductivity modulation), simply presenting the p-n junction depletion capacitance. Most of the commercial IGBT datasheets contain the small-signal reverse transfer capacitance.
However, since the injected bipolar carriers overwhelm the doping concentration of IGBTs' drift region during the turning-on and turning-off transitions, the bipolar carriers will significantly affect the depletion profiles in the drift region as well as the switching speed. More specifically, the electrons and the holes near the depletion boundary will be rapidly swept to each contact at the saturation velocities. The carriers moving across the depletion region modulate the ion concentration in the drift region depending on the saturation velocity and the current density, as given by the following equations [17], [18]:

\frac{dE}{dx} = \frac{q\,(N_D + p - n)}{\varepsilon_S}, \qquad n = \frac{J_n}{q\,V_{sat,n}}, \qquad p = \frac{J_p}{q\,V_{sat,p}}

where N_D, n, and p are the doping concentration in the drift region, the electron density, and the hole density in the depletion region, respectively. J_n, J_p, V_sat,n, and V_sat,p are the electron current density, hole current density, electron saturation velocity, and hole saturation velocity in the depletion region, respectively. E, q, and ε_S are the electric field in the depletion region, the unit charge, and the permittivity of the semiconductor, respectively. In other words, the drift region's charge concentration is not fixed at the doping concentration (N_D), and the depletion profiles during dynamic switching will be very different from those of the small-signal measurement. During the small-signal measurement, the IGBT device is not turned on (no gate bias and no current in the channel or the drift region), and only a small ac perturbation is applied to the collector terminal at a specific dc voltage. Therefore, the dynamic capacitance, which reflects the depletion profiles in the drift region during the switching transient, will be quite different from the small-signal one. This means that the dynamic switching (turn-on and turn-off) behavior is the output not of the small-signal capacitance, but of the dynamic capacitance. However, every IGBT datasheet still adopts the small-signal capacitance, and the values in the datasheets are not practically useful.
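To illustrate the effect described by these relations, the sketch below estimates how the current-carried carriers shift the effective space charge N_eff = N_D + p − n and, in a simple one-sided step-junction approximation, the resulting depletion capacitance per unit area. The step-junction formula and all numerical values are assumptions for illustration, not the authors' extraction method.

```python
import numpy as np

Q = 1.602e-19          # C, elementary charge
EPS_SI = 1.04e-10      # F/m, permittivity of silicon (about 11.7 * eps0)

def effective_charge(nd, jn, jp, vsat_n=1.0e5, vsat_p=8.0e4):
    """Effective space-charge density N_eff = N_D + p - n (m^-3, A/m^2, m/s)."""
    n = jn / (Q * vsat_n)          # electrons crossing the depletion region
    p = jp / (Q * vsat_p)          # holes crossing the depletion region
    return nd + p - n

def depletion_capacitance(n_eff, v_ce):
    """One-sided step-junction depletion capacitance per area, C = eps / W."""
    w = np.sqrt(2.0 * EPS_SI * v_ce / (Q * n_eff))
    return EPS_SI / w

nd = 1.0e20                        # m^-3 (1e14 cm^-3 drift doping, illustrative)
v_ce = 300.0                       # V
for j_total in [0.0, 50e4, 100e4]: # A/m^2 (0, 50, 100 A/cm^2), split between e/h
    n_eff = effective_charge(nd, jn=0.3 * j_total, jp=0.7 * j_total)
    print(j_total / 1e4, "A/cm^2 ->", depletion_capacitance(n_eff, v_ce), "F/m^2")
```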
Another difficulty in understanding the dynamic switching of IGBTs lies in the power-electronics perspective, which treats the device as three terminals. More specifically, although power electronics has adopted the simple device capacitance model shown in Fig. 1(a), that model cannot capture the abnormal current path that exists in reality, as shown in Fig. 1(b). As discussed in the main sections, during the turn-off transient, a part of the collector current sequentially flows across the IGBT's channel and the gate-to-emitter (C GE ), gate-to-collector (C GC ), and collector-to-emitter (C CE ) capacitances. This complicated current flow is nearly impossible to detect with conventional measurement methods.
This study, for the first time, investigates the reverse transfer capacitance during dynamic switching in a state-of-the-art field-stop IGBT structure. For this purpose, the IGBT is contacted with five terminals to separate the gate-to-emitter current (I GE ), the gate-to-collector current (I GC ), the channel current (I n+ ), the hole current (I p+ ), and the collector current (I C ), as shown in Fig. 2. The GE and GC contacts for I GE and I GC are split by a 1.0-nm gap at the boundary between the p-body region (p-well) and the n-charge storage region (n-CS) [see the inset of Fig. 2(a)]. Since the n-CS (charge storage) region is connected to the drain side, the boundary between the GE and GC contacts must be the interface between the p-well and the n-CS. The five-contact method provides a detailed view of the switching mechanism, reveals the negative capacitance during the turn-on, and offers a deeper understanding than the conventional three-contact approach [19], [20], [21]. Specifically, the gate contact consists of the gate-to-emitter contact, GE, and the gate-to-collector contact, GC.
During the turn-on transient, the gate-to-emitter displacement current, I GE , flows into the GE to control the MOS channel, and the gate-to-collector displacement current, I GC , flows into the GC to control the gate-to-collector voltage. From the collector voltage transition and I GC in the time domain, the dynamic gate-to-collector capacitance, C GC.Dynamic , can be extracted as

C GC.Dynamic = I GC /(dV GC /dt), with V GC = V GE − V CE

where V GE , V CE , and V GC are the gate-to-emitter voltage, the collector-to-emitter voltage, and the gate-to-collector voltage, respectively. The emitter contact (on the top of the device) consists of the n+ contact and the p+ contact. The MOS channel current flows through the n+ contact (electron current only) and the hole current from the drift region flows through the p+ contact (hole current only). With this five-contact method, the forward recovery behavior during the turn-on and the negative capacitance during both the turn-on and turn-off switching can be directly detected.
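As an aid to the relation above, the following Python sketch shows one way such a dynamic capacitance could be extracted from exported transient waveforms. It is an illustration added here under the assumption that the waveforms are available as sampled arrays; it is not the authors' extraction script.

# Illustrative sketch: dynamic gate-to-collector capacitance from transient waveforms,
# C_GC,dyn = I_GC / (dV_GC/dt) with V_GC = V_GE - V_CE.
import numpy as np

def dynamic_cgc(t, v_ge, v_ce, i_gc, eps=1e-12):
    """Return (v_ce, |C_GC,dyn|) point by point from time-domain waveforms.

    t    : time samples (s)
    v_ge : gate-to-emitter voltage (V)
    v_ce : collector-to-emitter voltage (V)
    i_gc : gate-to-collector displacement current (A)
    """
    v_gc = v_ge - v_ce                 # gate-to-collector voltage
    dvgc_dt = np.gradient(v_gc, t)     # numerical derivative dV_GC/dt
    valid = np.abs(dvgc_dt) > eps      # mask samples where V_GC barely changes
    c_gc = np.full_like(v_gc, np.nan)
    c_gc[valid] = i_gc[valid] / dvgc_dt[valid]
    return v_ce, np.abs(c_gc)          # plot |C_GC| versus V_CE to obtain the hysteresis curve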
II. DEVICE DESIGN AND SIMULATION SETTINGS
The field-stop IGBT structure and the thermal process in the simulation are based on a practical device design and process conditions under development at Magnachip for a next-generation 650-V rating. The cell pitch of the IGBT is 1.5 μm, and the mesa width and the trench width for the trench gate channel are 0.9 and 0.6 μm, respectively. The depth of the trench is 4.8 μm. The thickness of the drift region, including the buffer layer, is 55 μm. The charge storage (CS) layer is located under the p-body region, and the p-collector region is formed by shallow boron ion implantation and laser annealing. The obtained breakdown voltage is 725 V. The trench configuration applied in this simulation targets full-channel trench gate IGBTs without dummy emitter trenches [22], [23], which are normally used in resonant converters. The full-channel gate structure features the lowest ON-state voltage drop (the highest saturation current) and the lowest short-circuit capability (poor dynamic ruggedness). Since all the trenches of the IGBT are connected to the gate terminal, the input capacitance (C iss ) and the reverse transfer capacitance (C rss ) are the highest, giving the slowest switching speed. As the ratio of emitter trenches increases (gate trenches are replaced with emitter trenches), the channel density of the IGBT decreases and the ON-state voltage increases due to the lowered electron injection from the channel. The lowered channel density (increased emitter-to-gate trench ratio) leads to a lower saturation current level and increased short-circuit capability (improved dynamic ruggedness). In practical applications, the IGBT can be customized from full channel density to very low channel density, for example, 1 (gate trench) : 19 (emitter trench), depending on customers' short-circuit requirements.
For inductive switching, device-circuit mixed-mode simulation is employed in Sentaurus Workbench (SWB) provided by Synopsys Inc. The applied V CC is 400 V and the gate voltage is 15 V. During the turn-on, the diode's reverse recovery characteristics contribute to the peak current of the IGBT at the initial stage of the Miller plateau and to the dV/dt during the Miller plateau. More specifically, the peak current and dV/dt are highly dependent on the IGBT's input capacitance and the diode's reverse recovery softness. To investigate the device's pure dynamic capacitance, an ideal diode (without reverse recovery) is used on the inductive load. The external gate resistance, R G , is 10 Ω, and the stray inductance at each terminal is ignored. The operating temperature is 300 K. Although the data are not shown here, we confirmed that the operating behaviors show very similar waveforms and dynamic capacitance at temperatures up to 450 K. The physical models in the simulation include Shockley-Read-Hall recombination, Auger recombination, doping-dependent mobility, electric field-dependent mobility, field-effect mobility, carrier-to-carrier scattering, carrier velocity saturation, and impact ionization models. With the electric field-dependent mobility and velocity saturation models, the electric field distortion caused by the high density of holes and electrons in the depletion region during the switching phase can be observed.
III. TURN-ON SWITCHING
For turning on the IGBT, as shown in Fig. 2, the gate current, I G , and the collector current, I C , flow out through the emitter as the emitter current, I E (I E = I G + I C ). Fig. 3 shows the voltage and current waveforms during the inductive turn-on switching.
t 0 -t 1 : As soon as the external gate bias (15 V) is applied, the gate current (I G ) flows into the GE (I GE ) and GC (I GC ) terminals, charging the input capacitance. The gate voltage rises toward its threshold voltage, V TH . t 1 -t 2 : Once the gate channel is formed at t 1 (the threshold voltage), the channel current (I n+ ) increases rapidly, following the relationship with the gate voltage

I n+ = g m (V GE − V TH )

where g m is the transconductance of the MOS channel. One important finding is that the hole current (I p+ ) also increases quickly after the IGBT's threshold voltage is reached. It has heretofore been believed that the hole current just after V TH is nearly zero or very small because of the forward recovery process [15], i.e., there is not enough time for the drift region to be modulated by hole carriers. Despite the low hole carrier concentration, the holes injected from the collector can be swept rapidly across the drift region because the drift region is fully depleted by V CC . In other words, even a small number of hole carriers traveling at the saturation velocity produces a considerable hole current. t 2 -t 3 : When the sum of the channel current (I n+ ) and the hole current (I p+ ) reaches the driving current level (30 A), the Miller plateau phase begins. Right after t 2 , the gate-to-emitter displacement current (I GE ) drops to a slightly negative value (−0.1 A), decreasing the channel current (I n+ ). This is caused by the negative capacitance, in which excess holes below the gate oxide push positive charges off the gate terminal, slightly lowering the gate potential as well as the density of electrons accumulated in the channel. Therefore, the degree of the negative capacitance can be simply detected from the magnitude of the negative excursion of the gate-to-emitter current.
It should be noted that the slight fluctuation of I p+ is due to the negative capacitance around the gate [6], [24], [25], [26]. As an accumulation layer (electrons) is formed below the gate oxide (at the bottom of the trench gate), the holes injected from the collector are easily attracted by the electrons in the accumulation layer with the help of the high V CE . The attracted holes, in turn, lead to a slight surge in the hole current (I p+ ) and the hole displacement current (I GC , from the accumulation layer to the GC terminal). The hole displacement current across the GC terminal lowers the level of I GC . The negative capacitance effect is finally mitigated by the lowered V CE . The detailed current flow in the gate terminals during the negative capacitance is observed here for the first time. t 3 -t 4 : At t 3 , the depletion region under the trench gate is transformed into an accumulation region and the gate-to-collector capacitance becomes large because the depletion capacitance in the drift region is removed. The increased gate-to-collector capacitance creates a very slow dV CE /dt slope. The MOS channel is in the saturation region, and both I GC and I GE are used to remove the pinchoff region at the end of the MOS channel.
After t 4 : The plateau phase ends when the MOS channel's operation changes from the saturation region to the linear region. The gate voltage keeps increasing with the continuous inflow of I GE and I GC . The steady increase of the hole current (I p+ ) is an indication that the forward recovery in the drift region is still in progress.
IV. TURN-OFF SWITCHING
Fig. 4 shows a schematic circuit configuration for the IGBT's turn-off switching. During the turn-off transition, as shown in Fig. 4, a part of the collector current flows out across the gate as a displacement current and the rest flows out to the emitter, following the relationship

I C = I G + I E .
Fig. 5 shows the simulated waveforms during the turn-off inductive switching. t 0 -t 1 : As soon as the external gate bias becomes zero at t 0 , both I GE and I GC flow out from the GE and GC, respectively. The outflow of the gate displacement current lasts until the gate potential reaches the plateau voltage. t 1 -t 2 : At t 1 , due to the decrease in the gate potential, the MOS channel's operating region changes from the linear region to saturation. Both discharging currents, I GE and I GC , form a larger depletion region at the end of the channel (pinchoff) by pushing the MOS channel into deeper saturation. This phase lasts until the accumulation region under the gate disappears. t 2 -t 3 : At t 2 , the accumulation region under the gate starts to be depleted by the continuously outflowing I GC . The gate-to-collector capacitance (C GC ) becomes relatively small due to the series depletion capacitance under the gate, and the small C GC leads to a rapid increase in V CE . Due to the increase of V CE (the expansion of the depletion region across the drift region), the hole current (I p+ ) increases rapidly because the holes in the drift region can be swept quickly across the depletion region. By the amount of the increased hole current, the channel current (I n+ ) decreases, and the channel finally turns off at t 3 . Meanwhile, the gate voltage also decreases by the amount of the channel current given by (6). It should be noted that, in the case of MOSFETs, V CE reaches V CC at t 2 (the end of the plateau phase). For IGBTs, however, due to the holes remaining in the drift region, the rise of V CE is delayed until the channel current becomes zero (when V GE arrives at V TH ).
A new finding in this phase is that there is a sudden discharging and charging current between the GE and GC terminals. The gate potential keeps decreasing, but the GE terminal is instead charged. This abnormal phenomenon is caused by a rapid decrease in the hole current density (I p+ ) under the gate oxide. A portion of the outflowing gate-to-emitter current (I GE ) diverts to the gate-to-collector current (I GC ), as shown in Fig. 6. This can be explained by the weakened negative capacitance effect. Negative capacitance is the increased potential under the gate oxide caused by hole carriers under a high electric field [24], [25]. As the hole current density decreases, the potential of the gate adjacent to the top of the drift region becomes small, creating a displacement current. This effect (the decreased hole density under the gate oxide) appears as if the gate-to-collector capacitance is being charged again. It is noteworthy that the current path in the device shown in Fig. 6 is hardly depicted in the power electronics model. More specifically, in the power electronics model, I GE cannot flow into C GC because the collector potential is higher than the emitter potential. This is an example of the advantage of mixed-mode simulation over the conventional power electronics model.
V. DYNAMIC CAPACITANCE AND HYSTERESIS
Fig. 7 shows the dynamic C GC and its hysteresis with respect to V CE for the turn-on and turn-off transitions. The dynamic C GC was extracted using the I GC , V CE , and V GE waveforms shown in Figs. 3 and 5, and (4). The small-signal C GC simulated at 100 kHz is included for comparison. In the small-signal mode, there is no channel or accumulation layer between the semiconductor and the gate oxide because the gate terminal is grounded. Indeed, the interface, slightly depleted by the n+ poly gate, presents a relatively low C rss value compared to the dynamic C GC .
Phase A-B: Both the turn-on (after Miller phase) and the turn-off (before Miller phase) transitions are in the high-level injection having an accumulation layer under the gate. It is noteworthy that the dynamic C GC of the turn-on continuously increases as V CE decreases. There is a capacitance (depletion region) between the p-body (or p-well) and the CS regions. The oxide capacitance adjacent to the CS region will form a series capacitance with the capacitance (C GCS -C ECS ), as shown in Fig. 8. For example, in the initial stage of the accumulation region being formed (during the turn-on), the depletion region between the p-body and the CS region is wide having a small C ECS . As the forward recovery progresses, the depletion width for the C ECS will become narrow. This is why C GC continuously grows as the turning-on device passes the phase from B to A. The same mechanism will be applied to planar gate IGBTs as well.
Phase B-C: Around V CE = 9-27 V, both the turn-on and the turn-off switching are in the Miller plateau, and the turn-on dynamic C GC is abnormally higher than that of the turn-off. The reason can be explained by the value of V CE at which the depletion (accumulation) layer under the gate changes to the accumulation (depletion) layer, as shown in Fig. 9. In the case of the turn-off, the accumulation layer under the gate starts to be depleted around V CE = 9 V. Since a high hole density remains in the drift region, the potential across the drift region is very small. In the case of the turn-on, however, the hole density in the drift region is relatively small because the IGBT is in the initial state of forward recovery. Therefore, when the depletion layer under the gate changes to the accumulation layer, a large portion of V CE is sustained across the drift region; i.e., the accumulation layer appears around 27 V for the turn-on, whereas the accumulation disappears around 9 V for the turn-off.
From the extraction of the dynamic transfer capacitance (C GC ) shown in Fig. 9, one would be able to relate the dynamic C GC with Q G in a commercial datasheet. Since the time between t 2 and t 4 in Fig. 3 is normally known as the transfer period, the gate current should flow only through the collector-to-gate capacitance. If this (I G = I GC during t 2 -t 4 ) is true, it is possible to obtain the dynamic capacitance from Q G measurement. However, the ideal situation never happens even in MOSFETs because some portion of the gate current generally flows into the gate-to-source capacitance to remove the pinchoff (depletion) region in the MOS channel [27].
VI. CONCLUSION
The IGBT's dynamic reverse transfer capacitance was analyzed for the first time. For this purpose, the IGBT's contacts were arranged as five terminals to measure the channel current, the hole current, the gate-to-emitter current, and the gate-to-collector current. The inductive switching waveforms with five contacts provided deeper insight into the detailed current flow mechanism in the device, especially the dynamic hole current fluctuation and the negative capacitance. For example, the misunderstanding of the hole current behavior during the turn-on and the turn-off is corrected; this misbelief originated from the absence of hole current extraction and the limited current path configuration in the power electronics model. From V CE , V GE , and I GC , the dynamic C GC during the switching was extracted. The magnitude and shape of the dynamic C GC were quite different from those of the small-signal C GC because of the hole carrier movement in the drift region. The turn-on and turn-off dynamic C GC show asymmetric capacitance curves due to the forward recovery process of the IGBT.
|
v3-fos-license
|
2020-06-25T09:08:38.743Z
|
2020-07-02T00:00:00.000
|
225554820
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1259/bjrcr.20200080",
"pdf_hash": "4904de1dfb7f17758dc9a457c88754399d0c7253",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42387",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "71fb32b4beca0d2ee63c1d563775c78da4cd1f65",
"year": 2020
}
|
pes2o/s2orc
|
Primary synovial sarcoma of parotid gland with intravenous extension into the heart
We report a case of a 47-year-old male with primary synovial sarcoma of the right parotid gland with tumor thrombus extension into the right internal jugular vein and right atrium. The rarity of this occurrence, as documented in the literature review, creates uncertainty about proper management. Our case represents a rare occurrence with a unique radiological finding that has implications for management.
INTRODUCTION
Tumor thrombus within the neck veins is an uncommon event in head and neck malignancies, and it has most often been reported in association with thyroid malignancies. [1][2][3][4][5][6][7] Here we report a case of primary synovial sarcoma of the parotid gland with direct intravenous thrombus extending down the right internal jugular vein into the right atrium, evident as a filling defect with expansion of the right internal jugular vein documented on CT.
Our case is probably the first case report showing direct intravenous extension of a parotid tumor into the right atrium.
CASE REPORT
A 47-year-old male with no history of smoking, drinking, or previous malignancy was initially referred to the department of radiodiagnosis for characterization of a growing, painless right parotid mass that had been present for 1 year, with rapid progression within the previous three months.
Six months before presenting to our institution, the patient had noticed a right neck mass for which MRI was performed elsewhere; it showed a well-circumscribed, heterogeneous mass with low signal intensity on T1W images and high intensity on T2W images, involving both the superficial and deep lobes of the right parotid gland, closely abutting the right internal carotid artery and right internal jugular vein, and extending into the right parapharyngeal space, with no evidence of tumor thrombus noted at that time (Figure 1).
Upon presentation at our institution, physical examination demonstrated significant enlargement of the mass, with no evidence of palpable veins or tenderness in the neck; however, there was a history of rapid growth within 3 months. With these clinical findings, the patient underwent a CT scan, which demonstrated a fairly large (6.2×4.4 cm), heterogeneously enhancing mass lesion with internal necrotic changes arising from the right parotid gland, involving both the superficial and deep lobes, with perilesional fat stranding.
On further evaluation, the mass was seen to extend into the right internal jugular vein, which appeared expanded and showed an enhancing intraluminal filling defect on post-contrast images, suggestive of tumor thrombus within the vessel (Figure 2). The tumor thrombus extended further down into the right atrium. Based on the imaging findings, the possibility of a malignant lesion was suggested.
Subsequently, histopathological analysis revealed a tumor composed of sheets of round to spindle-shaped cells with large areas of necrosis; the cells showed vesicular nuclear chromatin, prominent nucleoli, ill-defined cytoplasm, and brisk mitotic figures. On immunohistochemistry, the cells were positive for vimentin, suggestive of a malignant mesenchymal tumor. The final histopathology report was suggestive of monomorphic synovial sarcoma (Figure 3).
Due to the advanced stage of the tumor, radiotherapy followed by chemotherapy was planned. However, the patient died two months after the treatment was planned owing to the severity of the disease.
DISCUSSION
Parotid gland tumors are not uncommon, and the majority of them are benign. Pleomorphic adenoma is the most common benign salivary gland tumor in adults (70-80% of all benign salivary gland tumors), with the parotid gland being the most commonly involved site. Malignant lesions of the parotid gland are uncommon, seen in around 20% of cases. Mucoepidermoid carcinoma (8-15%) is the most common type of parotid malignancy, followed by adenoid cystic carcinoma (5%) and acinic cell carcinoma; other uncommon parotid malignancies include primary adenocarcinoma, salivary duct carcinoma, primary squamous cell carcinoma, lymphoma, and synovial sarcoma. 8 Synovial sarcoma is predominantly located near the joints of the lower limbs, particularly the knee joint, and also arises from tendon sheaths and bursae; however, contrary to its name, it has also been reported to originate from other nonsynovial sites. Synovial sarcoma of the parotid gland is an extremely rare condition, with only a few case reports available in the literature. The hypopharynx is the most common site in the head and neck, with the larynx being the least common. However, in their study Al-Daraji W et al. revealed that the parotid gland is commonly involved in the head and neck region. 9 Mono-phasic and bi-phasic are the two types of synovial sarcoma; the biphasic variety contains both spindle and epithelial cells, while the mono-phasic type has spindle cells only. Mesenchymal cells or myoepithelial cells of the terminal duct are believed to be the cells of origin that undergo synovioblastic differentiation. 10
CT and MRI may be used to determine the site of origin, delineate tumor extension, detect lymphadenopathy, identify calcification, and evaluate possible airway compromise. MRI, because of its excellent soft-tissue resolution, is considered the investigation of choice for detecting and staging soft-tissue tumors of the head and neck. Both CT and MRI features of synovial sarcomas are nonspecific, with no pathognomonic features described in the published literature. The usual presentation is of a well-defined solid lesion with occasional cystic or hemorrhagic changes and calcification.
These lesions exhibit intermediate signal on T1W images and heterogeneously hyperintense signal on T2W images. 11,12 Because of their imaging features, such as smooth margins and a lack of invasive appearance, they are frequently misclassified as benign lesions. 13 A malignant lesion may be associated with thrombus formation in the adjoining vein, which may result from continuous extension of the primary mass or from stasis of flow caused by compression.
Differentiation of bland thrombus from malignant tumor thrombus is crucial, as it affects the future course of management, such as the extent of surgical resection, radiotherapy planning, and prognosis. Bland thrombus appears homogeneous, does not show contrast enhancement, and does not disproportionately distend the thrombosed vein on CT scan. Similarly, on MRI, bland thrombus does not show enhancement and exhibits hypointense signal on T2W sequences. Malignant thrombus causes abnormal expansion of the venous lumen, adheres to the vessel wall, and shows continuity with the primary tumor. Malignant thrombus exhibits enhancement on both CT and MRI similar to the primary mass and shows intermediate to hyperintense signal on T2W sequences. 14 A few studies have shown the utility of diffusion-weighted MRI in differentiating bland from malignant thrombus based on ADC values; if the thrombus' ADC value is similar to that of the primary tumor, malignant thrombus is highly likely. 15,16 Direct extension of a head and neck malignant tumor into an adjacent vein is rarely described in the literature. A few studies have described the association of thyroid cancer with thrombosis of the accompanying vein. 3,5,7,17,18 The existence of a tumor thrombus in the internal jugular vein from thyroid cancer was first described in 1991. 4 Since then, there have been reports of venous thrombosis from a metastatic lesion of the parotid gland, of tumors of the deep lobe of the parotid causing thrombosis of the internal jugular vein, and of acute parotitis causing thrombosis of the internal jugular vein. 7,19,20 However, we could find only one case, of a 91-year-old male patient with a parotid tumor associated with tumor thrombosis of the right external jugular vein. 21 In our case, the thrombus was in the internal jugular vein, extending down to the superior vena cava and subsequently the right atrium. Other head and neck tumors have also been reported to invade or grow within the great vessels; among these, paraganglioma is the most common. 22,23
LEARNING POINTS
1. Sarcoma of the parotid gland is a rare neoplasm whose presence should be suspected whenever local invasion and intravenous extension occur together.
2. Imaging findings of synovial sarcoma are non-specific; however, the presence of venous extension on imaging may indicate a malignant nature.
3. Intravenous and subsequent intracardiac extension should be actively sought in the presence of a locally invasive parotid neoplasm, as its presence or absence might determine the appropriate management.
4. CT and MRI are both useful modalities to diagnose and assess the extension of tumor thrombus. However, for proper assessment the scan area should cover the entire extent of the tumor, extending from the base of the skull cranially to the junction of the superior vena cava and right atrium.
CONCLUSIONS
Since synovial sarcoma of the parotid gland represents a rare entity, its diagnosis and clinical management can be challenging.
As the CT and MRI findings are nonspecific, histopathological confirmation is always needed. Interestingly, our case demonstrates the rare occurrence of tumor thrombus reaching the right atrium in association with synovial sarcoma of the parotid gland, which has not been reported in earlier publications and hence should be specifically evaluated in such cases.
|
v3-fos-license
|
2016-05-04T20:20:58.661Z
|
2015-12-05T00:00:00.000
|
5639767
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.oncotarget.com/index.php?journal=oncotarget&op=download&page=article&path[]=17898&path[]=6481",
"pdf_hash": "24a8f16636d6561d6cad2def488a83910cf6391f",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42388",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "24a8f16636d6561d6cad2def488a83910cf6391f",
"year": 2016
}
|
pes2o/s2orc
|
Dynamic reprogramming of DNA methylation in SETD2-deregulated renal cell carcinoma.
Clear cell renal cell carcinomas (ccRCCs) harbor frequent mutations in epigenetic modifiers including SETD2, the H3K36me3 writer. We profiled DNA methylation (5mC) across the genome in cell line-based models of SETD2 inactivation and SETD2 mutant primary tumors because 5mC has been linked to H3K36me3 and is therapeutically targetable. SETD2 depleted cell line models (long-term and acute) exhibited a DNA hypermethylation phenotype coinciding with ectopic gains in H3K36me3 centered across intergenic regions adjacent to low expressing genes, which became upregulated upon dysregulation of the epigenome. Poised enhancers of developmental genes were prominent hypermethylation targets. SETD2 mutant primary ccRCCs, papillary renal cell carcinomas, and lung adenocarcinomas all demonstrated a DNA hypermethylation phenotype that segregated tumors by SETD2 genotype and advanced grade. These findings collectively demonstrate that SETD2 mutations drive tumorigenesis by coordinated disruption of the epigenome and transcriptome, and they have important implications for future therapeutic strategies targeting chromatin regulator mutant tumors.
INTRODUCTION
Cancer of the kidney and renal pelvis affects > 65,000 patients annually and ranks 8th in causes of cancer death in the United States. The most common histologic subtype is clear cell renal cell cancer (ccRCC), which accounts for the majority of RCC-related deaths. Surgery remains the standard of care for patients with early stage tumors; however, ~30% of patients progress to distant metastases after surgery for localized disease. Despite some advances in systemic therapy, median survival drops to about two years after development of metastatic disease [1]. CcRCC differs from many tumor types in that it is characterized by frequent mutation of epigenetic regulators (dominated by SETD2 (10-15%), PBRM1 (33-45%), and BAP1 (15%)), while mutations in other common cancer gene pathways (e.g. RAS, BRAF, TP53, RB) are largely absent [2][3][4][5], and ccRCC is tightly linked to a distinct transcriptional signature due to inactivation of the VHL gene, which is mediated in part through deregulation of the epigenome [6]. These properties also make ccRCC an ideal tumor type to use as a model for determining how mutations in epigenetic regulator genes modulate tumor initiation and progression. SETD2 is a ubiquitously expressed SET domain-containing histone 3 lysine 36 trimethylase (H3K36me3) that interacts with elongating RNA pol II via the RNA pol II-associated factor complex (PAF1c), to recruit H3K36me3 to transcribing gene bodies [7][8][9][10]. SETD2 is the principal mediator of H3K36me3 and has little if any role in generating H3K36me1/me2 [11][12][13]. Functions for H3K36me3 include regulation of Pol II and nucleosome density across exons [2,14], alternative splicing [15], and DNA repair [16,17]. In ccRCC, biallelic SETD2 inactivation is associated with reduced survival and earlier time to recurrence [18,19]. Metastatic ccRCC displays markedly reduced H3K36me3 levels compared to matched primary ccRCCs [13]. These findings strongly suggest that SETD2 mutations drive tumor progression, yet the underlying mechanism remains unknown.
Like H3K36me3, DNA methylation (5mC) is enriched across gene bodies [20] where it is positively linked to transcription [21] and regulates intragenic enhancer activity [22]. Four DNA methyltransferase family members, DNMT1, 3A, 3B, and 3L collectively establish and maintain genome-wide patterns of DNA methylation [23]. 5mC patterns in cancer are profoundly disrupted, with global hypomethylation affecting repetitive DNA and gene bodies accompanied by more focused promoter/CpG island (CGI)/CGI shore hypermethylation that silences the associated gene. Aberrant DNA methylation is sufficient to drive tumorigenesis in the absence of genetic mutations [24]. A direct link between DNA and H3K36 methylation was first revealed through in vitro binding studies, wherein recombinant Dnmt3a bound H3K36me2/me3containing peptides and nucleosomes via its N-terminal PWWP domain [25] and subsequent chromatin interaction assays showed that H3K36me3 co-immunoprecipitates with Dnmt3b [26]. The PWWP domain is a moderately conserved motif in > 60 proteins, many of which associate with chromatin [27], that is now recognized as a reader domain for H3K36 methylation [15,28].
The collective findings linking 5mC to H3K36me3 and SETD2 mutations to ccRCC motivated us to examine their interplay in SETD2 mutant tumors. Using cell line models we show that SETD2 loss-of-function induces global loss of H3K36me3, but also formation of ectopic H3K36me3. SETD2 inactivation also results in global redistribution of 5mC, with a predominance of hypermethylation events targeted to sites of ectopic H3K36me3, intergenic loci, and normal kidney poised enhancers. Functionally, global DNA hypermethylation events occur in large DMRs conserved across multiple tumor types with SETD2 mutations and result in upregulation of lowly expressed genes that collectively appear to drive cells toward a more undifferentiated state.
Validation of SETD2 knockout (KO) 786-O cells as a model of SETD2 mutated ccRCC
To generate a model to study the impact of SETD2 mutations, we utilized the 786-O ccRCC cell line and targeted the SETD2 locus for inactivation using zinc-finger nucleases (ZFNs). Two independent clones were isolated and characterized. In KO1 the ZFN-nuclease generated a 4 bp deletion and in KO2 an 11 bp deletion in SETD2, both causing frameshifts ( Figure S1A). The two SETD2 isogenic KO clones derived from parental 786-O ccRCC cells were validated by Sanger sequencing and cell line authentication (ATCC, data available upon request). Altered epigenetic phenotypes were highly consistent between the SETD2 KO1 and KO2 clones, as will be described.
Since one of our goals was to determine the impact of SETD2 loss-of function on 5mC patterns, we first examined the impact of this mutation on components of the DNA methylation machinery by RNA-seq and qRT-PCR ( Figure S1B). In 786-O parental cells, DNMT1 was the most highly expressed DNMT ( Figure S1B, left axis) consistent with its role as the maintenance methyltransferase. Expression of the de novo methyltransferases was low in parental 786-O cells; DNMT3L was undetectable ( Figure S1B, left axis). Inactivation of SETD2 in 786-O cells downregulated DNMT1 and up-regulated DNMT3B to some extent; DNMT3A and DNMT3L expression did not change ( Figure S1B, right axis). Expression of the TETs in parental 786-O was variable ( Figure S1B, left axis). SETD2 inactivation resulted in down-regulation of TET1 and up-regulation of TET3 ( Figure S1B, right axis). Taken together, there were no consistent changes in expression of DNA methylation machinery components that would likely account for global changes in 5mC between parental and SETD2 KO 786-O clones. Since the DNMTs and TETs play important roles in development, we also assayed expression of pluripotency and germ layer markers upon SETD2 inactivation ( Figure S1C). SETD2 KO altered expression of these markers with upregulation of pluripotency markers and variable changes in expression among germ layer markers ( Figure S1C). Subunits of the PAF complex, which interacts with both SETD2 and the DNMT3s [8,29] remained constant ( Figure S1C).
Loss of SETD2 induces redistribution of H3K36me3
SETD2 KO in 786-O cells resulted in global reduction of H3K36me3 with little effect on total H3K36me1 and H3K36me2 ( Figure 1A). However, H3K36me3 was not completely depleted upon SETD2 inactivation using moderate exposures in the western blotting. We next performed chromatin immunoprecipitation sequencing (ChIP-seq) of H3K36me3 in 786-O parental and SETD2 KO clones to map its genome-wide distribution. Consistent with reduction of H3K36me3 upon SETD2 inactivation ( Figure 1A), coverage from H3K36me3 ChIP-seq (relative to the total bp covered by sequence reads) decreased to 17.6% and 15.2% for clone 1 and clone 2, respectively, from 31.4% observed in parental 786-O cells. As expected, the majority of H3K36me3 peaks observed in parental 786-O cells were enriched within gene bodies ( Figure 1B). In the SETD2 KO clones, however, a marked redistribution of the remaining H3K36me3 was observed, with gains of this mark primarily occurring in intergenic regions ( Figure 1B). Loss of H3K36me3 also occurred upon SETD2 inactivation, as would be expected, with nearly 40% fewer H3K36me3 peaks observed in gene bodies of the KOs relative to parental 786-O ( Figure 1B). Indeed, the length of peaks across gene bodies was reduced among the SETD2 KO clones relative to the parental 786-O cells, while the peak length in intergenic regions increased with SETD2 inactivation ( Figure 1C). To evaluate the possibility of non-specific binding of the H3K36me3 ChIP antibody, we performed dot blotting with peptides containing other histone modifications and determined that the antibody had high specificity for H3K36me3 and no cross reactivity with H3K36me2 or H3K36me1 ( Figure S2A). We next assayed differential enrichment of H3K36me3 by SICER-DF analysis [30] among the 786-O parental and SETD2 KO clones. Loss of H3K36me3 in SETD2 KO clones occurred predominately in gene bodies ( Figure 1D, Figure S2B). However, a small number of genes gained H3K36me3 upon inactivation of SETD2 with no marked enrichment in any particular feature ( Figure 1D, Figure S2B). Regions of the genome that gained and lost H3K36me3 were highly conserved among both independent SETD2 KO clones ( Figure 1E, Figure S2C). As gains in H3K36me3 were unexpected, we validated our H3K36me3 ChIP-seq with locusspecific ChIP-qPCR ( Figure S2D). Overall, we observed predominantly reduction in H3K36me3 as a result of SETD2 KO in 786-O cells, but also gains of H3K36me3 over gene bodies and intergenic regions.
SETD2 inactivation results in DNA hypermethylation that coincides with regions of ectopic H3K36me3
Since H3K36me3 and 5mC overlap significantly in their genome-wide distribution [21,26,30,31] we next assayed DNA methylation patterns in the 786-O isogenic clones using the Illumina HumanMethylation450 BeadChip (450K array). Globally, DNA hypermethylation was observed in both SETD2 KO clones at all genomic features, but particularly intergenic regions (Figure 2A-left, Figure S3A). Quantification of total genomic 5mC content by LC-MS/MS [32] confirmed this observation, revealing that SETD2 inactivation resulted in > 20% increase in total 5mC in both SETD2 KO clones (Figure 2A-right). Analysis of the most differentially methylated CpGs (|Δβ|≥0.2) from the 450K array revealed that greater than 80% of differential methylation upon SETD2 inactivation was in the direction of hypermethylation (Figure 2B, Figure S3B). DNA hypermethylation was focused primarily in intergenic regions while hypomethylation was enriched at gene termini where both H3K36me3 and 5mC peak under normal conditions (Figure S3C). Independent MeDIP-qPCR analysis validated the elevated 5mC events identified by 450K array (CTNNA2, AJAP1, and SLIT2) (Figure S3D). Additionally, we included in our MeDIP-qPCR confirmation an intergenic region on chromosome 5 that was validated for ectopic H3K36me3 (Figure S3D) as the 450K array does not provide coverage of this locus. Although subtle (most likely due to low CpG density of this region), hypermethylation of this intergenic region was observed in both SETD2 KO clones (Figure S3D).
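For readers who want to reproduce this kind of delta-beta screen, a minimal Python sketch is given below. It assumes a normalized probe-by-sample beta-value matrix and hypothetical column names, and it is added here for illustration rather than taken from the study's pipeline.

# Minimal sketch: tally CpGs with |delta-beta| >= 0.2 between parental and SETD2 KO samples.
import pandas as pd

def differential_cpgs(betas, parental_cols, ko_cols, threshold=0.2):
    """Return probes with |mean(KO) - mean(parental)| >= threshold and their direction."""
    delta = betas[ko_cols].mean(axis=1) - betas[parental_cols].mean(axis=1)
    hits = pd.DataFrame({"delta_beta": delta[delta.abs() >= threshold]})
    hits["direction"] = hits["delta_beta"].apply(lambda d: "hyper" if d > 0 else "hypo")
    return hits.sort_values("delta_beta", ascending=False)

# Example usage with hypothetical file and column names:
# betas = pd.read_csv("450k_betas.csv", index_col=0)
# hits = differential_cpgs(betas, ["parental"], ["KO1", "KO2"])
# print(hits["direction"].value_counts(normalize=True))  # fraction hyper vs hypo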
Next, we integrated the genome-wide distribution of DNA methylation and H3K36me3 in the isogenic 786-O cells to determine if they were coordinated. Genes that lost H3K36me3 upon SETD2 KO did not display alterations in 5mC ( Figure 2C, S3E top panel), rather DNA hypermethylation occurred at loci that acquired ectopic H3K36me3 (bottom panels in Figure 2C, Figure S3E). To further investigate the effect that H3K36me3 distribution had on 5mC, we assigned differentially methylated CpGs to categories based on occurrence with differential H3K36me3 peaks ( Figure 2D, Figure S3F). Hypermethylated CpGs significantly overlapped with regions that gained H3K36me3 ( Figure 2D, Figure S3F) with particular enrichment in intergenic regions ( Figure 2D, Figure S3F, right). Hypomethylated CpGs significantly coincided with regions losing H3K36me3 at gene termini ( Figure 2D, Figure S3F, right). Contrary to the observation that loss of H3K36me3 does not influence global 5mC distribution, our focused analysis reveals that a subset of gene termini do in fact require H3K36me3 for proper establishment of 5mC, suggesting that the interplay between H3K36me3 and 5mC differs within the gene body domain or can be influenced by other processes (e.g. 3'-end definition versus elongation or splicing).
Poised enhancers in normal adult kidney are targeted for DNA hypermethylation and ectopic H3K36me3 in ccRCC
H3K4me1 is localized to both poised and active enhancers, while H3K27ac marks active enhancers [33]. Active enhancers are typically devoid of 5mC as these regions are hotspots for transcription factor binding [34]. To investigate the epigenetic regulation of enhancers in SETD2 mutated ccRCC, we integrated ChIP-seq data for H3K4me1 and H3K27ac from normal adult human kidney (Epigenome Roadmap) with our 5mC and H3K36me3 profiles for 786-O parental and SETD2 KO cells. We observed co-occurrence of hypermethylation in regions marked exclusively by H3K4me1 genomewide, while regions marked with H3K27ac displayed hypermethylation in intergenic regions only ( Figure 2E). Overlap analysis of the most differentially methylated CpGs in 786-O SETD2 KOs revealed significant enrichment of hypermethylated CpGs at regions marked by H3K4me1 in normal adult kidney and exclusion of differential methylation at H3K27ac-marked regions ( Figure S3G). Next, we classified genes from normal adult kidney marked with K4me1 only, K4me1+K27ac, or K27ac only ( Figure S3H). Genes containing all enhancer marks were also determined (termed "All classes"). Expression of genes in normal kidney associated with the different groups of enhancer marks in a manner consistent with their reported functionality; genes marked by K4me1 alone ("poised" enhancers) demonstrated low expression and genes marked with K27ac exhibited higher expression ( Figure S3I). Genes marked exclusively by H3K4me1 in normal adult kidney significantly overlapped with genes targeted for hypermethylation in 786-O SETD2 KO (pval < 2.089e-07), and were enriched for developmental processes ( Figure 2F). Finally, we determined the differential H3K36me3 status of the normal adult kidney enhancer classified genes in our 786-O SETD2 KO cells. Genes marked exclusively by H3K4me1 in normal adult kidney demonstrated a broad range of differential H3K36me3 in 786-O SETD2 KO clones (including ectopic gains), while genes marked with H3K27ac in normal kidney overwhelming lost H3K36me3 ( Figure S3J). The mechanism by which poised enhancers are targeted for aberrant epigenomic regulation such as gains in H3K36me3 and 5mC remains unclear, but enhancers linked to genes regulating developmental processes are a major target of this effect. This finding is also consistent with our RT-PCR data showing up-regulation of pluripotency genes and differential effects on germ layer markers upon SETD2 inactivation ( Figure S1C).
Reconfiguration of 5mC and H3K36me3 by SETD2 KO influences gene expression
Since DNA methylation, H3K36me3, and enhancer elements all play pivotal roles in gene regulation, we next examined the relationship between SETD2 KOassociated epigenome reconfiguration and changes in gene expression. First, we stratified gene expression in parental 786-O cells into two expression tiers, high and low (including genes with no expression), by RPKM values. Next, we determined the fold-change in expression for both SETD2 KO clones relative to parental cells. Overall, most differential expression (≥ 2 fold-change) occurs at genes belonging to the low expression tier, with a majority of differentially expressed genes being up-regulated upon SETD2 inactivation ( Figure S4A). Conversely, genes within the high expression tier were typically downregulated ( Figure S4A). The change in H3K36me3 among genes stratified by expression tier was then determined. H3K36me3 loss induced by SETD2 KO occurred at high expressing genes, while low expressing genes tended to gain ectopic H3K36me3 ( Figure 3A, Figure S4B). Upregulated genes from the low expression tier significantly overlapped with genes that gained H3K36me3 (pval < E-50), while genes that lost H3K36me3 were not enriched for differential gene expression ( Figure 3B, Figure S4C). Integration of DNA methylation level and how it changed with SETD2 KO revealed that genes undergoing loss of H3K36me3 do not sustain changes in 5mC or expression, and that these genes have the typical methylation profile of high expressing genes (low promoter 5mC, high gene body 5mC, Figure 3C, top). Filtering for genes in the high expression tier that lose H3K36me3 revealed the extent to which DNA methylation remains the same with SETD2 KO (Figure S4D, top). Genes that gain H3K36me3, however, show marked changes in both 5mC and expression, with hypermethylation across all regions of the gene and elevated expression ( Figure 3C, bottom). Indeed, evaluation of overall 5mC at genes within the low expression tier that gain H3K36me3 reveals hypermethylation in both SETD2 KO clones ( Figure S4D). Ontology analysis of genes that gain H3K36me3, 5mC, and expression demonstrate enrichment for processes involved in cell adhesion, signaling, and development ( Figure S4E). To determine if differential methylation at base-pair resolution correlates with changes in H3K36me3 status and gene expression, we integrated differential expression with the categories described previously in Figure 2D. Significant overlap of genes changing in expression occurred only with hypermethylated CpGs in regions of increased H3K36me3 ( Figure 3D, Figure S4F) indicating that gains, but not losses, of H3K36me3 specifically influence gene expression upon SETD2 inactivation. We validated up-regulation of genes that gain H3K36me3 and 5mC ( Figure 3E) by qRT-PCR ( Figure S4G). Finally, as epigenetic regulation of enhancer elements also influences gene expression, we determined if genes linked to a particular combination of enhancer marks (as in Figure S3H) were enriched for expression changes with SETD2 KO. Indeed, genes marked exclusively by H3K4me1 were significantly enriched for up-regulation ( Figure S4H), the same class that displayed enrichment for H3K36me3 gains ( Figure S3J) and DNA hypermethylation ( Figure 2E). Taken together, these results indicate that loss of SETD2 function induces marked redistribution of H3K36me3 and 5mC that positively influences expression of low expressing genes.
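As an illustration of the expression-tier analysis described above, the short Python sketch below stratifies genes by RPKM in the parental line and flags ≥ 2-fold changes in a KO clone. The RPKM cutoff, pseudocount, and table layout are assumptions made for this example and are not taken from the paper.

# Rough sketch: assign high/low expression tiers and flag >= 2-fold changes in the KO.
import numpy as np
import pandas as pd

def stratify_and_flag(rpkm_parental, rpkm_ko, tier_cutoff=1.0, fold_cutoff=2.0):
    """rpkm_parental / rpkm_ko: pandas Series indexed by gene with RPKM values."""
    tier = np.where(rpkm_parental >= tier_cutoff, "high", "low")   # assumed RPKM cutoff
    log2fc = np.log2((rpkm_ko + 1.0) / (rpkm_parental + 1.0))      # pseudocount of 1
    status = np.select(
        [log2fc >= np.log2(fold_cutoff), log2fc <= -np.log2(fold_cutoff)],
        ["up", "down"], default="unchanged")
    return pd.DataFrame({"tier": tier, "log2fc": log2fc, "status": status},
                        index=rpkm_parental.index)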
DNA hypermethylation induced by SETD2 KO occurs over large regions of the genome
Since a large proportion of differential hypermethylation occurred at intergenic regions with SETD2 inactivation (Figure 2A, Figure S3C), we next evaluated whether these were sporadic or coordinated events. Low expressing genes adjacent to hypermethylated intergenic CpGs were hypermethylated across both promoter and gene body regions (Figure 4A). Notably, a number of the genes demonstrated elevated H3K36me3 and expression (Figure 4A). High expressing genes adjacent to hypermethylated intergenic CpGs, in contrast, did not change their 5mC. Differentially methylated regions (DMRs) are defined as contiguous regions of the genome that undergo conserved changes in DNA methylation. Since genes adjacent to hypermethylated intergenic CpG sites also display hypermethylation, we assayed the SETD2 KO clones for DMRs (defined as eight contiguous CpGs with Δβ≥0.2) (Figure S5A). Eighty percent of identified DMRs from one SETD2 KO clone were conserved in the other SETD2 KO clone, indicating that these loci are consistently targeted for hypermethylation with SETD2 inactivation (Figure S5A). DMRs occurred predominantly in intergenic regions (Figure S5B), coincided with large domains that gained H3K36me3 (Figure 4B) (pval < 2.65E-49), and genes within the DMR were typically up-regulated as a result (Figure 4C). In addition, a significant proportion of genes within DMRs are marked by H3K4me1 in normal adult kidney (pval < 5.75E-10). Finally, almost all genes within DMRs are low expression tier genes, and typically are not expressed or are up-regulated by SETD2 KO (Figure S5C).
Ontology analysis revealed enrichment for biological processes involved in development (likely a reflection of the genes marked previously by H3K4me1, Figure 2F), cell adhesion, and signal transduction ( Figure 4D).
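To make the DMR definition used above concrete, the following Python sketch scans a position-sorted table of array probes for runs of at least eight contiguous CpGs with Δβ ≥ 0.2. It is a simplified illustration with an assumed table layout, not the study's actual DMR-calling code.

# Simplified sketch: call hypermethylated DMRs as runs of >= 8 contiguous CpGs with delta-beta >= 0.2.
import pandas as pd

def call_dmrs(probes, min_probes=8, delta_cutoff=0.2):
    """probes: DataFrame with columns ['chrom', 'pos', 'delta_beta'], pre-sorted by chrom and pos."""
    dmrs = []
    for chrom, chrom_df in probes.groupby("chrom", sort=False):
        run = []
        for _, row in chrom_df.iterrows():
            if row["delta_beta"] >= delta_cutoff:
                run.append(row)                       # extend the current hypermethylated run
            else:
                if len(run) >= min_probes:            # close out a qualifying run
                    dmrs.append((chrom, run[0]["pos"], run[-1]["pos"], len(run)))
                run = []
        if len(run) >= min_probes:                    # handle a run that reaches the chromosome end
            dmrs.append((chrom, run[0]["pos"], run[-1]["pos"], len(run)))
    return pd.DataFrame(dmrs, columns=["chrom", "start", "end", "n_probes"])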
SETD2 siRNA knockdown (KD) induces DNA hypermethylation in NCCIT embryonic carcinoma cells
To determine if DNA hypermethylation is a common phenotype induced by SETD2 loss-of-function outside the context of an RCC background, we acutely depleted SETD2 in NCCIT embryonic carcinoma cells using siRNA as we have done previously [31]. Total H3K36me3 was decreased upon siKD of SETD2 in NCCIT cells ( Figure S6A) and did not significantly alter expression of housekeeping genes, epigenetic modifiers, and PAF complex subunits ( Figure S6B). Pluripotency genes and germ layer markers were differentially expressed with SETD2 siKD (Figure S6B), similar to the changes observed in 786-O SETD2 KO cells ( Figure S1C). Next we assayed genome-wide 5mC patterns with the 450K array. Like 786-O SETD2 KO cells, regional analysis of 5mC changes revealed hypermethylation occurring predominately in intergenic regions ( Figure 5A). Analysis of the most differentially methylated CpGs (|Δβ|≥0.1) showed that > 85% of CpGs became hypermethylated upon SETD2 siKD ( Figure 5B). Enrichment analysis of differentially methylated CpGs demonstrated DNA hypomethylation occurring predominately at gene termini, while hypermethylation events were enriched in intergenic regions ( Figure 5C), patterns similar to those observed in 786-O SETD2 KO cells ( Figure S3C). Hypermethylation of promoters and gene bodies significantly overlapped between SETD2 siKD and SETD2 KO cell models, while hypomethylation events overlapped only in gene bodies ( Figure S6C). Finally, DNA hypermethylation induced by SETD2 siKD in NCCIT cells occurred at regions of the genome conserved with those observed in 786-O SETD2 KO cells ( Figure 5D). These were also regions that demonstrated ectopic H3K36me3 in the 786-O SETD2 KO cells ( Figure 5D). Taken together, these results show that DNA hypermethylation arising from SETD2 lossof-function is conserved across cell types and occurs with both acute and long-term functional inactivation of SETD2.
SETD2 mutant primary ccRCC manifests DNA hypermethylation consistent with cell line models
After identifying epigenetic patterns conserved between different cell lines resulting from depletion of SETD2, we next determined if these alterations in 5mC occur in primary ccRCCs with SETD2 mutations. Since > 90% of ccRCCs have SETD2 LOH, but evidence that monoallelic loss of SETD2 impacts global levels of H3K36me3 is lacking [13], we identified tumor samples from the Cancer Genome Atlas (TCGA) KIRC dataset with biallelic inactivation from copy number loss and concurrent SETD2 mutation (n = 29) and compared these samples to KIRC tumors with no evidence of SETD2 mutation or LOH (n = 20). To facilitate comparison with our SETD2 KO cell lines, only KIRC samples with available 450K array data were used. Consistent with genome-wide changes in 5mC observed in both of our SETD2 loss-of-function cell line models, hypermethylation in the SETD2 mutant primary ccRCCs occurred specifically at intergenic regions ( Figure 6A). Focusing on the most differentially methylated CpGs (|Δβ|≥0.1) revealed that > 80% of these loci sustained hypermethylation ( Figure 6B). Enrichment profiles for differentially methylated CpGs were also consistent with those observed in the cell line models, with hypomethylation events enriched at gene termini and hypermethylation events enriched at intergenic loci ( Figure 6C). Indeed, hypermethylated DMRs conserved among SETD2 inactivated cell lines and primary tumors were identified, illustrating the reproducibility of DNA hypermethylation that accompanies loss of SETD2 function ( Figure 6D). The overlap of hypermethylated CpGs among all genomic features, and of hypomethylated CpGs specifically across gene bodies between SETD2 mutant cell lines and primary tumors was significant ( Figure 6E). Ontology analysis of hypermethylated genes that overlap among 786-O SETD2 KOs and SETD2 mutated ccRCC tumors ( Figure S7A) revealed enrichment of similar biological process terms, including developmental-related, cell adhesion, and transport ( Figure 6F). A significant proportion of the overlapping genes were also marked exclusively by H3K4me1 in adult human kidney (pval < 1E-100), indicating that poised enhancers are targeted for aberrant epigenetic regulation in SETD2 mutant primary tumors.
To determine if changes in H3K36me3 distribution were also conserved between our cell line model and primary SETD2 mutant ccRCCs, we examined ChIPseq data derived from two metastatic primary ccRCCs, one harboring WT SETD2 and one with biallelic SETD2 loss [13]. Locations that lost H3K36me3 under SETD2 inactivation conditions were highly conserved between cell lines and primary tumors ( Figure S7B). Ectopic gains in H3K36me3, which occur less frequently than losses of H3K36me3, were also conserved ( Figure S7B), but excluded from the SETD2 WT tumor sample. Indeed, loci that consistently gain H3K36me3 and 5mC were identified among the cell lines and SETD2 mutated primary ccRCC, but not the SETD2 WT tumor ( Figure S7C). Next, we stratified gene expression from the KIRC TCGA normal kidney samples into high and low expression tiers, and evaluated their 5mC levels. Consistent with the cell lines, hypermethylation in SETD2 mutant KIRC ccRCCs was focused primarily on genes within the low expression tier, and hypomethylation on genes in the high expression tier ( Figure S7D). Finally, we examined 5mC levels in SETD2 WT versus mutant ccRCCs stratified by expression tier. Although subtle, hypermethylation in the SETD2 mutated ccRCCs was observed at genes in the low expression tier, while genes in the high expression tier maintained their DNA methylation ( Figure S7E). Taken together, results from the 786-O SETD2 KO cells were highly predictive of epigenetic phenotypes that occur as a result of SETD2 mutation in primary ccRCC.
SETD2 loss-of-function mutations induce DNA hypermethylation in other tumor types
As global DNA hypermethylation was consistently induced in our models of SETD2 inactivation and in primary SETD2 mutant ccRCCs, we next investigated whether inactivating SETD2 mutations in other cancer types are associated with a hypermethylation signature. We identified samples from the TCGA kidney renal papillary cell carcinoma (KIRP) and lung adenocarcinoma (LuCa) data collections that harbored SETD2 mutations (two tumor types with appreciable numbers of SETD2 mutant tumors in TCGA datasets) and analyzed them alongside the KIRC dataset [35]. SETD2 mutations were significantly associated with global DNA hypermethylation in all three tumor types (Figure 7A-7C). Closer examination of 5mC profiles revealed that hypermethylation events in KIRC, KIRP, and LuCa were more frequent in number, magnitude (change in 5mC), and significance (p-value) in SETD2 mutant tumors relative to the wild-type counterparts (Figure 7D-7F). Unsupervised hierarchical clustering of the top 10,000 most variably methylated CpGs from the 786-O SETD2 KO samples and the KIRC dataset segregated samples based on SETD2 genotype (Figure S8A). We next performed unsupervised hierarchical clustering based on the 10,000 most variable CpGs on the 450K array in each tumor type (Figure 7G-7I), revealing two major clusters based on 5mC profiles; one cluster was dominated by hypermethylation that significantly coincided with a high prevalence of SETD2 mutation (KIRC, p = 1.67E-7; KIRP, p = 0.0048; LuCa, p = 0.025). Specifically, 86%, 83%, and 79% of KIRC, KIRP, and LuCa tumors, respectively, segregated into the expected cluster based on SETD2 genotype. It is important to note that this analysis included nonsense, missense, and frameshift mutations, not all of which may impair SETD2 activity and/or H3K36me3 status, and this likely contributes to the "mis-classification" of some tumors. In addition, there are alternative mechanisms by which SETD2 and H3K36me3 can be deregulated, including loss of 3p21, transcriptional regulation, and loss of H3K36me3 substrate (H3K36me1/me2) due to modulation of H3K36me1/me2 histone methyltransferases [36].
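A compact Python sketch of this kind of unsupervised analysis is given below; it selects the most variable probes, clusters samples with Ward linkage, and cross-tabulates clusters against genotype. The function and variable names, input format, and two-cluster cut are assumptions made for this illustration and are not taken from the TCGA analysis itself.

# Illustrative sketch: hierarchical clustering of samples on the 10,000 most variable CpGs.
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def cluster_by_variable_cpgs(betas, n_top=10000, n_clusters=2):
    """betas: probes x samples beta-value matrix. Returns a sample -> cluster label Series."""
    top = betas.loc[betas.var(axis=1).nlargest(n_top).index]   # most variable probes
    dist = pdist(top.T, metric="euclidean")                    # pairwise sample distances
    tree = linkage(dist, method="ward")                        # Ward hierarchical clustering
    labels = fcluster(tree, t=n_clusters, criterion="maxclust")
    return pd.Series(labels, index=betas.columns, name="cluster")

# Example: cross-tabulate cluster membership with SETD2 mutation status.
# clusters = cluster_by_variable_cpgs(betas)
# print(pd.crosstab(clusters, setd2_status))   # setd2_status: Series of "WT"/"mutant" per sample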
To better understand the effect of DNA hypermethylation resulting from SETD2 inactivation in primary tumors, we examined CpGs hypermethylated across all three tumor types (KIRC, KIRP, and LuCa) to determine if there was a common 5mC signature of SETD2 loss ( Figure 8A). Independent of tumor type, a 200 CpG hypermethylation signature was established, demonstrating that loss of SETD2 alters the DNA methylome. Eighty-eight percent of tumors segregated as expected based on SETD2 genotype. SETD2 mutant primary tumors derived from KIRC, KIRP, and LuCa analyzed in this study were predominantly associated with high grade (stages III-IV versus stages I-II (p = 2.16e-6)) and higher stage (p = 9.42e-6). We next interrogated the top 1,000 most differentially methylated CpGs between SETD2 wild-type and mutant tumors using Ingenuity Pathway Analysis (IPA) to better understand the underlying biological processes that may be affected ( Figure 8B). The most statistically significant pathway enriched was "Transcriptional Regulatory Network in Embryonic Stem Cells" including genes such as EOMES, MEIS1, and REST.
To determine if enhancer elements contribute disproportionally to the DNA hypermethylation observed among the three cancer types investigated, as they did in the 786-O cells, we assayed differential 5mC at CpGs within regions marked with histone modifications linked to different types of enhancers (H3K4me1 only, H3K4me1+H3K27ac, H3K27ac only) derived from the Epigenome Roadmap datasets (normal adult kidney was used for KIRC and KIRP; normal adult lung for LuCa). Hypermethylation was consistently enriched among all tumor types at regions marked by K4me1 in normal tissue but was excluded from K27ac-marked regions ( Figure S8B). Consistent with our cell line models, SETD2 inactivation induced hypermethylation at K4me1 marked regions across all genomic features ( Figure S8C) suggesting that enhancers are a common target for epigenetic deregulation in SETD2 mutant tumors.
DISCUSSION
In this study, we used cell culture models and primary tumors to examine how SETD2 loss-of-function mutations drive tumorigenesis. Isogenic 786-O SETD2-deficient ccRCC cells demonstrated marked redistribution of H3K36me3. While loss of H3K36me3 in gene bodies predominated, substantial ectopic H3K36me3, focused largely on intergenic regions, was also observed.
Inactivation of SETD2 resulted in marked effects on the DNA methylome dominated by genome-wide hypermethylation. DNA hypermethylation significantly co-occurred at sites of ectopic H3K36me3, indicating that this mark profoundly influences 5mC placement. Other regions of the genome without appreciable ectopic H3K36me3 were also subject to DNA hypermethylation, suggesting widespread disruption of 5mC targeting, perhaps due to loss of DNMT3 containment in H3K36me3-rich domains. Redistribution of H3K36me3 and 5mC resulted in up-regulation of previously non-/low-expressed genes enriched for the poised enhancer mark H3K4me1 in normal adult kidney. Acute depletion of SETD2 in an unrelated VHL-competent cell line led to a similar effect on 5mC distribution, showing that the impact of SETD2 loss-of-function is independent of cell type and method of SETD2 inactivation. Changes in 5mC were conserved in primary ccRCC with biallelic SETD2 inactivation, in SETD2-mutant papillary RCC (a distinct tumor of the kidney), and in lung adenocarcinomas with SETD2 mutations, and resulted in a distinct 5mC signature that efficiently clustered tumors by SETD2 genotype and higher tumor grade and stage, consistent with findings by us [2,13] and others [37][38][39] that SETD2 mutations are generally linked to poor prognosis and/or metastasis. Taken together, our results show that SETD2 mutant tumors represent a new DNA hypermethylator class and that genome-wide redistribution of 5mC caused by SETD2 inactivation, particularly at enhancers, represents one mechanism by which this mutation may promote dedifferentiation and cancer progression.
While inactivation of SETD2 in 786-O cells resulted in large-scale losses of H3K36me3, particularly across the bodies of highly expressed genes that represent the major sink for H3K36me3 in the genome, we unexpectedly also observed ectopic gains in H3K36me3 across lowly expressed genes and intergenic regions. This finding is consistent, however, with our previous H3K36me3 ChIP-seq analysis of SETD2 mutant primary ccRCC, where we observed ectopic H3K36me3 in a SETD2 mutant tumor at a region that influenced an RNA splicing event [13]. In the primary tumor analysis it was difficult to rule out intratumoral heterogeneity or normal cell contamination as a cause for ectopic H3K36me3; however, our 786-O isogenic model consistently shows overlapping ectopic H3K36me3 peaks with SETD2 mutant primary ccRCC, underscoring the validity of this finding. SETD2 is thought to be the sole H3K36 trimethylase in mammals [11,12], although this is largely based on lower sensitivity global quantification methods such as immunohistochemistry or total H3 western blotting. Although we cannot completely rule out the possibility of some residual activity from the SETD2 locus in our 786-O cells, we believe the most likely mediator of the ectopic H3K36me3 is another histone methyltransferase that methylates the H3K36 position but does not typically perform trimethylation.
H3K36me1/me2 are regulated by a diverse group of proteins, including NSD1 (KMT3B), NSD2 (MMSET/WHSC1), NSD3 (WHSC1L), SETD3, ASH1L, SETMAR (METNASE), and SMYD2. Use of varied substrates, assay conditions, and cell types has likely led to inconsistencies in the reported substrate preferences of each enzyme [40]. The NSD family members, for example, preferentially mono- and dimethylate K36 in vivo [40], but are capable of trimethylating K36 in vitro [36]. Although we did not observe significant changes in expression of NSD family members in our SETD2 KO clones (based on RNA-seq, data not shown), it appears plausible that one of them could adopt this activity in the absence of normal SETD2 activity, and our isogenic 786-O cells represent a good model for identifying this activity. Given that the SETD2 inactivation-induced ectopic H3K36me3 is linked to genome-wide DNA hypermethylation and gene expression changes associated with dedifferentiation, this activity could represent a novel drug target in SETD2 mutant tumors.
Prior studies examining the relationship between DNA and H3K36 methylation focused on the impact of H3K36me3 loss on methylated regions of the genome. Hahn et al. hypothesized that 5mC and H3K36me3 were established independently since SETD2 depletion did not change 5mC at gene bodies that lost H3K36me3, and conversely H3K36me3 distribution did not change in HCT116 cells depleted of DNMT1 and DNMT3B [41]. Our results showing that highly expressed genes in 786-O SETD2 KO clones that lost H3K36me3 generally maintained their 5mC support these observations. The TCGA consortium reported that ccRCC DNA hypomethylation was enriched at sites marked by H3K36me3 in normal kidney [3], which is consistent with our findings that 5mC was lost predominantly at H3K36me3-high gene termini. In addition, the TCGA reported DNA hypermethylation focused at CpGs not previously marked by H3K36me3 in normal adult kidney [3]. This also is consistent with our results in that many regions gaining 5mC under SETD2 loss conditions are not marked by H3K36me3 in normal tissue; rather, it is these loci that gain both ectopic H3K36me3 and 5mC. Finally, our findings are consistent with those of Sato et al., who stratified differential 5mC in ccRCCs into three tiers (low, intermediate, and high) and observed that 92% of SETD2 mutant tumors were present in the intermediate and high 5mC tiers [42]. Thus, collectively, our findings, supported by the TCGA KIRC dataset, firmly link SETD2 loss-of-function to a global DNA hypermethylation phenotype and more aggressive disease.
Previous work from our laboratory and others showed that DNMT3B was particularly enriched at actively transcribed H3K36me3-marked gene bodies [21,30,31], and that H3K36me3 recognition by the DNMT3B PWWP domain is important for its ability to methylate these regions [25,26]. Based on these studies and our results, we hypothesize that global genome DNA hypermethylation under SETD2 loss-of-function conditions results from two mechanisms ( Figure 8C), (i) recruitment of DNMT3B to ectopic H3K36me3 regions followed by de novo methylation, and (ii) loss of normal DNMT3B tethering to gene bodies, allowing it to gain access to normally unmethylated regions of the genome (loss of 'containment'). We cannot rule out the possible involvement of other DNMTs in this process. In regions already methylated, SETD2 inactivation does not result in 5mC loss because methylation is already established and thus is maintained by DNMT1. The exception to this appears to be gene termini, where loss of H3K36me3 is linked to 5mC loss. Interestingly, ChIP-seq demonstrated DNMT3B was most enriched at gene termini [30], indicating that it might be responsible for both establishment and maintenance of 5mC at gene 3'-ends ( Figure 8C). It is therefore of interest to examine whether 5mC regulates aspects of 3'-end processing. Poised normal tissue enhancers were also a prominent target of SETD2 inactivation-induced ectopic H3K36me3 and DNA hypermethylation. Interestingly, active enhancers in human cells are enriched for H3K36me3 [43], consistent with the up-regulation of genes associated with these sequences we observe. The presence of unproductive noncoding RNA transcripts emanating from active enhancers [44] is consistent with acquisition of both H3K36me3 and 5mC, since both marks are recruited to actively transcribed loci. Thus the presence of 5mC at or flanking certain enhancers may be indicative of enhancer activation much in the same way gene body 5mC is linked positively to gene activity [21].
SETD2 mutation or down-regulation occurs across a broad spectrum of tumor types [3,35,38], although in many of these its frequency is relatively low (<10%), making detailed analysis of its effects feasible only with large datasets. To begin to assess whether the impact of SETD2 mutations on 5mC localization was conserved in other tumor types, we expanded our analysis to two large public datasets, papillary RCC and lung adenocarcinoma [35]. In both a distinct type of kidney cancer not characterized by chromosome 3p LOH or VHL inactivation, and a tumor of completely different cellular origin, we observed a DNA hypermethylation phenotype strongly linked to SETD2 mutation. SETD2 mutations are independently acquired within multiple parts of the same papillary RCC [37], suggesting strong selective pressure to inactivate the K36me3 pathway, and SETD2 mutations are enriched in relapsed B-ALL [39], reinforcing the link between this mutation and tumor progression. Our results support this notion, as a majority of the SETD2 mutated hypermethylated tumors were associated with more aggressive stage and grade. CIMP (CpG island methylator phenotype) is now recognized in many different tumor types and in the case of glioma is caused by mutations in IDH1/IDH2. IDH mutations operate in part by inhibiting TET-mediated DNA demethylation, but also render the tumors more sensitive to DNA hypomethylating agents [45,46]. Preclinical studies have shown that the DNA methylation inhibitor 5-aza-2'-deoxycytidine (5-azadC) effectively reverses DNA hypermethylation observed in IDH1 mutant gliomas, induces tumor stem cell differentiation, and inhibits tumor growth in mouse models [46]. Our identification of SETD2 as a novel driver of a DNA hypermethylation phenotype suggests that such tumors might also be more susceptible to DNA hypomethylating agents like 5-azadC. Therefore, while many chromatin regulator gene mutations are not currently targetable with specific therapies, the interplay between marks, exemplified by 5mC and H3K36me3 described here or IDH1 and 5mC in glioma, suggests that multiple epigenetic regulator mutations may converge on and deregulate 5mC patterns as a common method to promote tumorigenesis. As such, DNA demethylating agents may be more generally applicable as a therapy to target tumors with epigenetic regulator mutations. Our results identify a highly conserved DNA hypermethylation phenotype induced by SETD2 inactivation that functionally modulates the gene expression program of renal cell cancers, suggesting that DNA demethylating agents represent a potential rational therapy to target SETD2 loss-of-function tumors.
MATERIALS AND METHODS
Cell culture, SETD2 depletion, DNA/RNA extraction, and quantification of 5mC content by mass spectrometry

786-O parental and SETD2 KO derivatives were grown in RPMI1640 medium supplemented with 10% heat-inactivated fetal bovine serum and 2 mM L-glutamine. Briefly, SETD2 was targeted by zinc finger nucleases for deletion and two isogenic clones with frameshifts were generated [13]. SETD2 KO1 contains a 4 base pair deletion and KO2 contains an 11 base pair deletion, both confirmed by Sanger sequencing [13]. 786-O parental and SETD2 KO derivatives were validated by cell line authentication provided by ATCC (data available upon request). NCCIT cells were grown in McCoy's 5A medium supplemented with 10% heat-inactivated fetal bovine serum and 2 mM L-glutamine. The On-TARGETplus siRNA SMARTpool (Dharmacon, Thermo Scientific) targeting a single gene was used against SETD2 (L-012448-00-0005). Transfection with a negative control non-targeting siRNA (D-001206-13-20; Dharmacon, Thermo Scientific) was performed in parallel. siRNA transfection was performed with PepMute transfection reagent (SignaGen) according to the manufacturer's protocol as previously described [31]. Total RNA was extracted by Trizol homogenization and purified according to the manufacturer's protocol (Life Technologies). Genomic DNA was extracted by proteinase K digestion and phenol:chloroform extraction. A portion of this genomic DNA was also used to quantify total genomic 5mC levels by LC-MS/MS exactly as described [32]. Samples for MS were run in duplicate at the Biomarker Mass Spectrometry Facility at the University of North Carolina, Environmental Sciences & Engineering, Gillings School of Global Public Health.

Expression analysis by RNA-seq and qRT-PCR

RNA-seq data was downloaded from the Gene Expression Omnibus (GEO) and aligned to genome build hg19 using TopHat v2 [47]. A value of 0.01 was added to all gene RPKM values to account for genes with no expression and prevent artificially large fold-changes in expression [48]. A cutoff of ≥2-fold change in expression was used to call differential expression [48]. cDNA synthesis and qRT-PCR were performed in triplicate as described [30]. Primer sequences for ChIP, MeDIP, and qRT-PCR are listed in Table S1 in Supplemental Information.
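As a concrete illustration of the fold-change filtering described above, the sketch below applies the 0.01 pseudocount and the ≥2-fold cutoff to a pair of RPKM vectors; the example values and array names are hypothetical, not data from the study.

```python
import numpy as np

# RPKM values for the same genes in two conditions (hypothetical data)
rpkm_wt = np.array([0.0, 5.2, 12.0, 0.3])
rpkm_ko = np.array([1.1, 5.0, 3.1, 0.0])

pseudo = 0.01                                   # avoids division by zero and inflated ratios
fold_change = (rpkm_ko + pseudo) / (rpkm_wt + pseudo)

# A gene is called differentially expressed if it changes >= 2-fold in either direction.
differential = (fold_change >= 2.0) | (fold_change <= 0.5)
print(differential)
```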
ChIP-qPCR
ChIP pull-downs for H3K36me3 (Active Motif 61021) were performed using an in-house protocol as previously described [13] and detailed in Supplemental Information.
ChIP-seq data analysis
ChIP-seq data processing was conducted as previously described [30] and detailed in Supplemental Information.
450K array data analysis
DNA samples were processed on the HumanMethylation450 BeadChip array (Illumina) and analyzed as previously described [31,49] and detailed in the Supplemental Information.
Gene ontology and pathway analysis
Ontology analysis was performed using GO_BP within the DAVID bioinformatics database with Benjamini correction for multiple testing [50] or Ingenuity Pathway Analysis (Qiagen) using standard program parameters.
Significance testing
The Fisher Exact test with a two-tailed p-value calculation was used for testing the significance of data set comparisons as described previously for similar data sets [51]. For added stringency, a modified EASE score was applied to all Fisher Exact tests. Chi-square testing was used to determine significance of clustering and tumor grade.
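The overlap-style comparisons above rely on two-tailed Fisher exact tests; a minimal sketch is shown below. The EASE-style adjustment (removing one count from the overlap cell before testing) is our reading of the "modified EASE score", and all counts are hypothetical.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 contingency table:
#                  in region   not in region
# hypermethylated      a             b
# background           c             d
a, b, c, d = 120, 880, 400, 8600

_, p_standard = fisher_exact([[a, b], [c, d]], alternative="two-sided")

# Conservative EASE-style variant: subtract one event from the overlap cell
# before testing, so small overlaps are penalized.
_, p_ease = fisher_exact([[max(a - 1, 0), b], [c, d]], alternative="two-sided")

print(p_standard, p_ease)
```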
TCGA sample IDs
Gene expression, exome sequencing, 450K array, and tumor grade data were generated by the Cancer Genome Atlas and downloaded from http://cancergenome.nih.gov/. Patient sample identification numbers used for KIRC, KIRP, and LuCa 450K array analysis are provided in Tables S2, S3, and S4, respectively, in Supplemental Information. Data was downloaded from TCGA on 4/23/2015.
Availability of supporting data
450K array data for 786-O parental, SETD2 KO1 and KO2, and siKD of SETD2 in NCCIT cells have been deposited in GEO (GSE70645). NCCIT no-target control (NTC) was previously deposited to GEO under accession GSE54840 (sample GSM1527531).
Previously released dataset accession numbers are provided in Table S5 in Supplemental Information.
|
v3-fos-license
|
2018-03-07T19:33:32.612Z
|
2016-07-04T00:00:00.000
|
1908303
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://springerplus.springeropen.com/track/pdf/10.1186/s40064-016-2696-1",
"pdf_hash": "e4a2ee7ad5f5df3ea1823e7ea0c4a72560369d07",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42390",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"sha1": "e4a2ee7ad5f5df3ea1823e7ea0c4a72560369d07",
"year": 2016
}
|
pes2o/s2orc
|
Customizable orthopaedic oncology implants: one institution’s experience with meeting current IRB and FDA requirements
Background: Customizable orthopaedic implants are often needed for patients with primary malignant bone tumors due to unique anatomy or complex mechanical problems. Currently, obtaining customizable orthopaedic implants for orthopaedic oncology patients can be an arduous task involving submitting approval requests to the Institutional Review Board (IRB) and the Food and Drug Administration (FDA). There is great potential for the delay of a patient's surgery and unnecessary paperwork if the submission pathways are misunderstood or a streamlined protocol is not in place.
Purpose: The objective of this study was to review the existing FDA custom implant approval pathways and to determine whether this process was improved with an institutional protocol.
Methods: An institutional protocol for obtaining IRB and FDA approval for customizable orthopaedic implants was established with the IRB at our institution in 2013. This protocol was approved by the IRB, such that new patients only require submission of a modification to the existing protocol with individualized patient information. During the two-year period of 2013–2014, eight patients were retrospectively identified as having required customizable implants for various orthopaedic oncology surgeries. The dates of request for IRB approval, request for FDA approval, and total time to surgery were recorded, along with the specific pathway utilized for FDA approval.
Results: The average patient age was 12 years old (7–21 years old). The average time to IRB approval of a modification to the pre-approved protocol was 14 days (7–21 days). Average time to FDA approval after submission of the IRB approval to the manufacturer was 12.5 days (7–19 days). FDA approval was obtained for all implants as compassionate use requests in accordance with Section 561 of the Federal Food Drug and Cosmetic Act's expanded access provisions.
Conclusions: Establishment of an institutional protocol with pre-approval by the IRB can expedite the otherwise time-consuming and complicated process of obtaining customizable orthopaedic implants for orthopaedic oncology patients.
Level of evidence: Retrospective case series, Level IV. See the Guidelines for authors for a complete description of levels of evidence.
Background
Malignant primary bone tumors are rare entities with an incidence of approximately 0.8/100,000 people per year. In the adult population, primary malignant bone tumors represent 0.2 % of all tumors, whereas in the pediatric population, they account for approximately 5 % of all malignancies (Kindblom 2009; Dorfman and Czerniak 1995). The three most common malignant primary bone tumors are osteosarcoma, chondrosarcoma, and Ewing's sarcoma, which, combined, represent 75 % of all malignant primary bone tumors (Dorfman and Czerniak 1995). In the US, osteosarcoma is most common with an incidence of approximately 400 cases per year, holding the sixth highest prevalence of all cancers in children less than fifteen years of age (Mirabello et al. 2009).
When a malignant primary bone tumor is diagnosed, patients often require extensive surgery and removal of large portions of their skeletal structure along with the tumor. Historically, amputation was once the mainstay of treatment of these tumors. However, more recent advances have made limb salvage surgery feasible (Lewis 1985;Link et al. 1986). These procedures require significant pre-operative planning, part of which includes obtaining an implant appropriate for the patient's size, anatomy, and defect created by the surgery.
Due to the predilection of osteosarcoma for the distal femur, proximal tibia, and proximal humerus, limb length discrepancy can be severe following tumor and concomitant physeal resection (Ottaviani and Jaffe 2009; Tsuchihara et al. 2008; Pritchett 1992).
Over the last 2-3 decades, expandable prostheses have supplanted older devices due to their ability to simultaneously reconstruct the limb and address potential limb length discrepancies that may occur in skeletally immature patients following physeal resection (Eckardt et al. 2000;Finn and Simon 1991). Implantation of an expandable prosthesis is indicated for limb-salvage in skeletally immature patients in which wide resection includes removal of an active physis and the patient is left with a projected limb-length discrepancy of ≥6 cm (Harvey et al. 2010;Holm et al. 1994;Papaioannou et al. 1982;Song et al. 1997;Stanitski 1999). Growth remaining was determined according to the standard methods after bone age was assessed (Anderson et al. 1963;Dimeglio 2001). For use in the humerus, the Repiphysis ® was discussed with the patient and family as a limb-salvage option. As patient size, age, anatomy, and location of tumor vary greatly, approved off-the-shelf implants are not always available to fit the patient's needs. Additionally, despite the progress that has been made with respect to the design of orthopaedic devices, mechanical failure is not uncommon. Failure of one or more components of an implanted system may leave the surgeon with a unique situation that demands either customization or revision components or importation of devices used in other parts of the world, which are inherently not FDA approved. All devices used in the United States require FDA approval prior to implantation, and therefore, this creates a distinct logistical challenge for the surgeon.
Pathway overview
The FDA has multiple pathways (Table 1) in place to facilitate justified and expeditious acquisition of safe and effective implants surgeons or dentists may need. Nevertheless, without proper guidance, it has become somewhat onerous to obtain the implants required, partially because of recent increases in manufacturer scrutiny. Following a Department of Justice investigation in September 2007, four major orthopaedic companies were charged with violating anti-kickback statutes and were forced into short-term intense federal monitoring. This may have inhibited willingness of manufacturers to produce custom devices out of residual concern for FDA inquiry and assessment of corporate compliance. Additionally, some surgeons may lack understanding of the FDA protocols and when each pathway is applicable. Summarized below are the primary pathways relevant to obtaining custom orthopaedic surgical implants.
Compassionate use request
The FDA normally allows for unapproved, investigational devices to be used in clinical trials with specific criteria and specific protocols, under an Investigational Device Exemption (IDE). However, it is also possible to use a device currently under investigation, but outside of the clinical trial, in order to help a patient with a serious or life-threatening condition under 21 CFR 812.35-36 and Section 561 of the FD&C Act (Investigational Device Exemption). Emergency use of unapproved devices is allowed, but the sponsor (responsible party) must notify the FDA within 5 days following the procedure. The "compassionate use" request is a helpful pathway that allows physicians to use unapproved devices from clinical trials on patients that do not meet the study's inclusion criteria, but will benefit from the device. This is also known as the "Expanded Access" provision, which was included in the FDA Modernization Act of 1997. FDA approval is required prior to implementation of the device and can be obtained by having the sponsor submit an IDE supplement including a description of why treatment is needed, why alternatives are unsatisfactory, any deviations from the clinical protocol, and patient protection measures such as IRB approval, institutional clearance, informed consent, authorization from the IDE sponsor, and independent assessment by an uninvolved physician (Investigational Device Exemption 2014).
Custom device exemption
The Custom Device Exemption (CDE) pathway has been around since the 1976 amendments to the Federal Food, Drug, and Cosmetic Act (FD&C) (Mihalko 2015). The pathway was expanded in 2012, under the Food and Drug Administration Safety and Innovation Act (FDASIA), to allow for more flexibility in approving devices, but also to require an annual industry reporting policy (Mihalko 2015). The guidance document describes that the use of custom devices "should represent a narrow category for which, due to the rarity of a patient's medical condition or physician's special need, compliance with premarket review requirements and performance standards under Sections 514 and 515 of the FD&C Act is impractical." The pathway mandates that to be considered a custom device, the device must be created in order to comply with an order of an individual physician, sufficiently unique that clinical investigation or performance standards would not apply, not generally available in the US, designed for unique pathology, manufactured on a case-by-case basis or for a unique subset, and produced in quantities of less than five per year [this is specified in Section 520(b) of the FD&C Act and at 21 CFR 812.3(b)]. The five units per year specification refers to five custom units of a particular type as allowed for production by a manufacturer per year. For example, a manufacturer would be allowed to produce five patient-specific custom implants of a particular device type per year. A possible scenario provided in the guidance document for appropriate use of this pathway describes a patient with skeletal dysplasia requiring a total hip replacement for osteoarthritis. A custom implant is needed due to the patient's "unique pathological anatomy."
Humanitarian use device
A humanitarian use device (HUD) is defined as "a medical device intended to benefit patients in the treatment or diagnosis of a disease or condition that affects or is manifested in fewer than 4000 individuals in the United States per year. " A manufacturer of an HUD can be exempt from scientifically based validation studies of efficacy mandated for standard premarket approval as long as there is enough information to suggest that the benefit is probably greater than the risk and no better alternative exists [FD&C Section 520 (m)]. This is known as the Humanitarian Device Exemption (HDE). Once a manufacturer has obtained an HDE approval for a specific device, IRB approval must be obtained from individual institutions, which may result in institutional device approval or necessitate IRB approval on a case-by-case basis.
Premarket notification [510(k)] and premarket approval application
Another pathway the FDA provides for approval of medical devices is via premarket notification and the premarket approval process. To obtain premarket approval for marketing and sale, there must be valid scientific evidence demonstrating the product is safe and effective. For premarket notification [510(k)], there must be evidence demonstrating a device is as safe as a currently legally marketed device. Submitters must compare their device to one or more similar legally marketed devices and make and support their substantial equivalency claims. A device is considered substantially equivalent if it has the same intended use and technological characteristics as a marketed device, or if it has the same intended use and different technological characteristics that does not raise new questions of safety and effectiveness. Many orthopaedic and implantable cardiac devices are approved via supplemental PMA pathways (Rome et al. 2014;Sheth et al. 2009).
Methods
Our institution is a tertiary referral center for patients with musculoskeletal tumors. There are three fellowship-trained orthopaedic oncologists on staff. An institutional protocol has been developed to organize and expedite the process of approving the customizable orthopaedic implants needed for our patient population. We have determined that a customizable implant is needed approximately 2-3 times per year in our patient population (Beebe et al. 2009, 2010). Our protocol has been designed to satisfy both institutional IRB and FDA requirements for customizable implants, and this protocol has been pre-approved by our institution's IRB to expedite the overall process. Prior to this, we needed to submit a new IRB protocol for each patient, which would often require a significant amount of paperwork. Under our current protocol, when a patient needs a customizable implant, a modification to the approved IRB protocol is submitted to the IRB for review, which includes information regarding the specific rationale behind our request for a customizable implant and a modified consent for surgery and implant usage. Upon IRB approval, consent forms, patient-specific information, and the notice of IRB approval are forwarded to the FDA by our staff or the sponsor. After FDA approval, the approval letter is provided to the IRB and the surgery is completed when the implant arrives from the manufacturer, consents are signed, and the patient is ready from a medical and/or oncologic standpoint. This protocol has been used for 8 retrospectively identified patients between 2013 and 2014 (Table 2).
Results
The average patient age was 12 (range 7-21) years old at the time of surgery. The pathologic diagnosis of the patients was either osteosarcoma or Ewing's sarcoma.
Half of the patients required implants for primary tumor resection and reconstruction surgeries, and half of the customizable implants were for revision surgeries. The most common reason a patient needed a customizable implant was because they needed smaller or modified Repiphysis (Microport Orthopaedics, Arlington, TN) implants due to their age and body size (4/8 patients). For patients who underwent a primary procedure, mean time to IRB modification approval was 15.75 days (9-21 days). For patients who underwent revision, the average time to IRB modification approval was 12 days (7-18 days).
The time required to complete the paperwork for an IRB modification is considerably shorter than the usual time needed for a new submission. Mean time to FDA approval after submission of the IRB approval to the sponsor was 10 days (7-15 days) for primary patients and 15 days (7-19 days) for revision patients. Mean time to surgery after FDA approval was 13.75 days (4-28 days) for primary patients and 26.5 days (12-44 days) for revision patients. The longest time to surgery of 44 days required importation of an implant from a manufacturer in Germany.
Many of the patients underwent pre-operative chemotherapy during the time period in which the implant was being approved. FDA approval was obtained for all implants under the "compassionate use" pathway with citation of Section 561 of the FD&C act's expanded access provisions. None of the implants were currently involved in ongoing clinical trials or had been reviewed by the FDA as part of an IDE application.
Discussion
A large portion of our demand for customizable implants came from pediatric patients, and there was particular need for the Repiphysis Limb Salvage System (MicroPort Orthopaedics, Arlington, TN). This system has been in use since the early 1990s in Europe and has had generally good to excellent results despite a relatively high complication rate (Gitelis et al. 2003; Saghieh et al. 2010). Despite its design to address potential limb length discrepancies in skeletally immature patients, we have encountered problems requiring requests for customized Repiphysis implants that were not previously FDA approved. Two of our younger patients were too small for their lower extremity implant sizes, one patient needed modification of the FDA approved femoral Repiphysis due to inadequate femoral bone stock, and two pediatric patients needed Repiphysis implants designed for humeral tumors, which are not FDA approved, requiring utilization of the compassionate use pathway. This is representative of a longstanding deficiency in the pediatric medical device market, especially for rare pediatric conditions. This has primarily been due to insufficient financial incentives for companies potentially interested in development. A major step forward occurred in 2007 when the Pediatric Medical Device Safety and Improvement Act was passed to improve post-market surveillance and eliminate profit restrictions on HDE approved devices, in an effort to stimulate production and innovation. Despite this, many pediatric devices are approved on the basis of trials conducted in non-pediatric patients (Hwang et al. 2014) and there remains significant underdevelopment in the pediatric medical device market.
The rest of the implants used required customization or modification of existing implants due to unique anatomical issues, bone loss, or a complex mechanical problem with an existing prosthesis. Because all of these situations necessitated modification of existing implants and not creation of de novo implants for unique anatomy, the "compassionate use" pathway of the Investigational Device Exemption regulation was appropriate. A de novo implant for a unique anatomical problem would be best suited by utilizing the Custom Device Exemption pathway.
Limitations of this study include its retrospective nature, as well as the lack of a specific control group with which to compare the time to IRB approval and surgery before versus after the establishment of our institutional protocol.
Obtaining the customizable orthopaedic implants necessary to promptly and properly treat patients can be complicated and time-consuming. However, with a thorough understanding of the different pathways and their indications and a protocol in place with the institution's IRB, the process can be accelerated and less arduous for all parties involved.
|
v3-fos-license
|
2021-12-03T16:19:12.578Z
|
2021-11-30T00:00:00.000
|
244834881
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1155/2021/8291773",
"pdf_hash": "fb4a8e730b9ca96956d43d3cf6e82b865475570d",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42391",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"sha1": "23635e301965751aec28938cbfe35d6ce0e79778",
"year": 2021
}
|
pes2o/s2orc
|
Indoor Positioning System Based on Standardizing Waveform Tendency
Estimating the indoor position of users in commercial buildings remains a significant challenge to date. Although the WiFi-based indoor localization has been widely explored in many works by employing received signal strength (RSS) patterns as the features, they usually lead to inaccurate results as the RSS could be easily affected by the indoor environmental dynamics. Besides, existing methods are computationally intensive, which have a high time consumption that makes them unsuitable for real-life applications. In order to deal with those issues, we propose to use standardizing waveform tendency (SWT) of RSS for indoor positioning. We show that the proposed SWT is robust to the noise generated by the dynamic environment. We further develop a novel smartphone indoor positioning system by integrating SWT and kernel extreme learning machine (KELM) algorithm. Extensive real-world positioning experiments are conducted to demonstrate the superiority of our proposed model in terms of both positioning accuracy and robustness to environmental changes when comparing with state-of-the-art baselines.
Introduction
Over the past two decades, with the increasing popularity of smart devices (e.g., smartphones and tablet computers), the demand of Location-Based Services (LBSs) also comes with fast-pace increasing, for instance, driving to a destination, tracking, and recording user's movements [1,2]. These services are commonly performed outdoors through Global Positioning System (GPS) and its derived applications (APPs). Nevertheless, GPS technology is unavailable inside buildings as satellites' signals are blocked by objects in indoor circumstances [3,4].
In cities, there are increasingly numerous shopping malls, which have miscellaneous business shops on each floor as well as large parking lots. GPS cannot offer the positioning services with satisfactory accuracy indoors. Consequently, numerous indoor positioning technologies have emerged to meet the growing demand for indoor LBS in GPS-denied surroundings, such as indoor positioning technology based on Bluetooth [5], ZigBee [6], Radio Frequency Identification (RFID) [7], Ultrawideband (UWB) [8], and IEEE 802.11 (WiFi) [9]. Unlike other wireless technologies, WiFi does not need extra installation and deployment efforts as the existing WiFi infrastructure is off-the-shelf in all kinds of indoor public places thanks to the development of network technology. For the above reason, WiFi-based indoor localization has received extensive attention, and many research institutions have focused on developing WiFi-based LBS technology [10].
After nearly a decade of exploration and research, different WiFi-based positioning methods have been developed. There are two main types of indoor localization methods: range-free and range-based. The location methods based on ranging include Time of Arrival (TOA) [11], Time Difference of Arrival (TDOA) [12], Direction of Arrival (DOA) [13], and received signal strength (RSS) [14,15]. However, these ranging-based methods are not appropriate for non-line-of-sight (NLOS) indoor environment [16,17], and the system based on communication hopping is generally complicated. Fingerprint technology becomes the most prevailing indoor positioning method since it can provide satisfying positioning accuracy.
There are a variety of algorithms employed by fingerprint positioning technology [18]. The most widely adopted algorithms are classification models such as k-nearest neighbor (KNN) [19]; weighted k-nearest neighbor algorithm (WkNN) [20]; probability algorithms, e.g., Bayesian (BYS) estimation [21]; regression algorithms, e.g., Support Vector Regression (SVR) [22]; and neural networks such as Back Propagation (BP) [23] and Convolution Neural Network (CNN) [24]. However, some algorithms (e.g., neural networks) have high computational costs as they are trained using a large amount of data and requiring advanced hardware. Thus, they usually cannot be deployed in commercial computers. Amongst all those approaches, the RSS has been widely adopted. However, it is noteworthy that the RSS is commonly noisy because the RSS could be easily affected by the dynamic environment (e.g., random flow of people and the movement of furniture) and this may result in severe degradation of localization accuracy.
In this paper, we propose a novel indoor positioning system (IPS) to address the aforementioned issues caused by the dynamic environment's effects. Compared with existing works, the proposed IPS improves the traditional fingerprint-based positioning system by applying the standardizing waveform tendency (SWT) of RSS and kernel extreme learning machine (KELM). Experimental results showed that the proposed algorithm can deliver reliable and precise localizations compared with baselines and has high robustness and efficiency to deal with the noise of the indoor environmental dynamic.
The contributions of the paper are summarized as follows:

(1) To deal with the robustness issues caused by environmental dynamics, we propose to employ the standardizing waveform tendency as the fingerprint characteristic. The proposed SWT is more effective in extracting patterns of the wireless network environment and has better tolerance to equipment heterogeneity and the indoor dynamic environment. It is worth noting that SWT can be integrated into existing WiFi positioning schemes.

(2) We propose a kernel extreme learning machine based localization algorithm for an indoor smartphone positioning system. The KELM-based localization algorithm has fast learning speed and better generalization ability.

(3) We conduct numerous simulations and real experiments. The localization error of our algorithm is 1.9 m, which reduces the localization error by 1 m. In contrast, the baseline algorithms typically have an error of more than 3 meters.

The remainder of this paper is organized as follows. Section 2 gives an introduction of the new fingerprint feature, i.e., SWT. Section 3 provides our proposed SWT-KELM algorithm. In Section 4, the experimental validation of our IPS is presented. We draw the conclusion in Section 5.
Standardized Waveform Tendency Fingerprints
This section elaborates on the standardized waveform tendency (SWT) of RSS employed as a fingerprint feature of our IPS to handle the effects of dynamic environments and equipment heterogeneity. To investigate the distribution of RSS, we collected RSS measurements in a 32 × 16 m industrial robotics laboratory. There are many large-scale instruments, tables, and chairs crowded in the experimental area, where most of the LOS paths are blocked. The layout of the industrial robotics laboratory for the experiment is demonstrated in Figure 1. Eight commercial WiFi routers (TP-Link WDR6500) are adopted as access points (APs) in the proposed localization system. The locations of these 8 APs are presented in Figure 1. All APs are placed 1.2 meters above the ground. We leverage a smartphone (OPPO R7sm) to collect the RSS data in our experiment. Figure 2 shows the RSS data collected from a certain AP over a continuous period at a reference point. It can be found from the figure that, at the same point, the received signal strength values from the same AP fluctuate over time, and large attenuations occur at some moments. Figure 3 illustrates the raw RSS waveform for 500 measurements from eight APs at the same reference point. It should be noted that although the shape of the curves is chaotic, they show a certain similar trend. This phenomenon can be explained theoretically: without noise and outliers, the RSS measurements should follow the same distribution. Motivated by this observation, instead of using the RSS fingerprint directly, a new fingerprint feature could be developed from the shape of the RSS curves. So we propose a new fingerprint, the standardized waveform tendency, which standardizes the RSS waveform at one location and gives the RSS a consistent trend. The proposed SWT is an idealized fingerprint feature, improving the accuracy and robustness of fingerprint positioning.
Besides, from the figure, we could find that there are multiple outliers introduced by environmental dynamics and hardware restrictions. To remove the outliers, for the n RSS values R_i (i = 1, ⋯, n) collected at a sample point from the same AP, we need to find the value R̄ (the mean of these n measurements) that minimizes the sum of squared differences between R̄ and the R_i, i.e.,

$$\bar{R} = \arg\min_{R} \sum_{i=1}^{n} (R - R_i)^2, \tag{1}$$

whose solution is the arithmetic mean of the n measurements,

$$\bar{R} = \frac{1}{n}\sum_{i=1}^{n} R_i. \tag{2}$$

According to Gaussian error theory, when the measured value obeys a normal distribution, the probability that a residual falls within the three-sigma interval [−3σ, 3σ] exceeds 99.7%, and the probability of falling outside this interval is less than 0.3%. Therefore, a measurement whose deviation falls outside this interval can be considered an anomaly. This is the Three-Sigma Limits method.
From (2), we can see that the arithmetic average of the R_i is R̄, so the deviation of each measurement is

$$e_{R_i} = R_i - \bar{R}, \quad i = 1, \cdots, n, \tag{3}$$

and the standard deviation of the measurements is

$$\sigma = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n} e_{R_i}^{2}}. \tag{4}$$

According to the 3σ criterion, where the residual deviation is greater than three times the standard deviation, the corresponding measurements are considered outliers and are replaced by R̄:

$$|e_{R_b}| > 3\sigma \;\Rightarrow\; R_b \leftarrow \bar{R}, \tag{5}$$

where e_{R_b} is the deviation of the outlier in the observations (1 ≤ b ≤ n).
The removal of outliers retains the useful information in the RSS measurements and helps achieve satisfactory localization performance. Then, we obtain a new RSS dataset R̃. The processed RSS waveform is shown in Figure 4.
Considering that the processed RSS alone would lead to limited robustness of real-life location prediction, we intend to add some instability characteristics into the offline training measurements. To appropriately expand the predictable range of the new RSS waveform, after several experiments, we add noise N (N ∈ [−1, 1], following a uniform distribution) to the new RSS after removing outliers, namely,

$$X = \tilde{R} + N, \tag{6}$$

where X is the SWT of RSS. SWT can decrease the interference of outliers and abnormal pulses, leading to higher accuracy and robustness of the positioning system. Figure 5 illustrates the SWT waveform of RSS measurements.
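A minimal Python sketch of the SWT construction (3σ outlier replacement followed by uniform noise), assuming one AP at one reference point; the array sizes, random seed, and example values are illustrative and not taken from the paper.

```python
import numpy as np

def swt(rss, rng=np.random.default_rng(0)):
    """Standardizing waveform tendency for one AP at one reference point.

    rss: 1-D array of n raw RSS readings (dBm) from a single AP.
    Returns the outlier-cleaned readings plus uniform noise in [-1, 1].
    """
    rss = np.asarray(rss, dtype=float)
    mean = rss.mean()                        # arithmetic mean, eq. (2)
    dev = rss - mean                         # deviations, eq. (3)
    sigma = dev.std(ddof=1)                  # sample standard deviation, eq. (4)
    cleaned = np.where(np.abs(dev) > 3 * sigma, mean, rss)   # 3-sigma replacement, eq. (5)
    noise = rng.uniform(-1.0, 1.0, size=cleaned.shape)       # uniform noise N
    return cleaned + noise                   # SWT fingerprint, eq. (6)

# Example: 500 readings from one AP with two injected outliers.
raw = np.full(500, -62.0) + np.random.default_rng(1).normal(0, 1.5, 500)
raw[10], raw[200] = -95.0, -20.0
fingerprint = swt(raw)
```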
Proposed SWT-KELM Algorithm
We shall illustrate the framework of the proposed SWT-KELM in this section.
Preliminaries on KELM. Extreme learning machine (ELM) is designed to train a Single-hidden Layer Feedforward Network (SLFN). Recently, due to its good learning ability [25,26], the SLFN has been widely applied in many fields. However, traditional feedforward neural networks generally employ the gradient descent algorithm for training, which has the following defects: (1) since multiple iterations are necessary for training the weights and thresholds, it usually takes a long time to train the network; (2) the gradient descent algorithm easily falls into a local minimum and rarely reaches the global minimum; and (3) the setting of the learning rate has an extremely large effect on the performance of the network. Different from traditional learning algorithms, the weights between the input layer and the hidden layer in ELM are randomly generated, as is the bias of the hidden layer. The optimal solution could be acquired by setting the number of neurons in the hidden layer [27,28]. Thus, the ELM algorithm could obtain higher training speed and better generalization ability than traditional learning algorithms.
Given Z arbitrary distinct training samples (X_j, P_j), j = 1, 2, 3, ⋯, Z, where X_j = [x_{j,1}, x_{j,2}, ⋯, x_{j,m}]^T ∈ R^m and P_j = [p_{j,1}, p_{j,2}, ⋯, p_{j,m}]^T ∈ R^m, the output P_j of a single hidden layer network with M neural units can be described as follows:

$$P_j = \sum_{i=1}^{M} \beta_i \, g(W_i \cdot X_j + b_i), \quad j = 1, 2, \cdots, Z, \tag{7}$$

where β_i is the weight matrix of the output layer and g(·) indicates the activation function, W_i = [ω_{i,1}, ω_{i,2}, ⋯, ω_{i,m}]^T denotes the weighting matrix between the input layer and hidden layer, b_i is the bias of the ith hidden layer neuron, and · indicates the inner product operation. By simplifying (7), we obtain

$$H\beta = P, \tag{8}$$

where

$$H = \begin{bmatrix} g(W_1 \cdot X_1 + b_1) & \cdots & g(W_M \cdot X_1 + b_M) \\ \vdots & \ddots & \vdots \\ g(W_1 \cdot X_Z + b_1) & \cdots & g(W_M \cdot X_Z + b_M) \end{bmatrix}_{Z \times M}, \tag{9}$$

$$\beta = \left[\beta_1, \cdots, \beta_M\right]^T, \qquad P = \left[P_1, \cdots, P_Z\right]^T. \tag{10}$$

To train the network to obtain a minimum error of the estimated output, β, W, and b need to satisfy the least-squares equation

$$\min_{\beta, W, b} \left\| H\beta - P \right\|. \tag{11}$$

Because the bias b and the input weight W are randomly generated, the least-squares solution of (8) for β can be obtained as

$$\hat{\beta} = H^{\dagger} P, \tag{12}$$

where H^† denotes the Moore-Penrose generalized inverse of H.
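A minimal numerical sketch of this random-hidden-layer least-squares training, assuming a sigmoid activation and synthetic fingerprint data; none of the dimensions or values below are taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: Z samples of m-dimensional inputs, 2-D targets (x, y coordinates).
Z, m, M = 200, 8, 50                       # samples, input dimension (APs), hidden neurons
X = rng.uniform(-90, -30, size=(Z, m))     # fingerprint-like inputs (dBm scale)
P = rng.uniform(0, 30, size=(Z, 2))        # positions

# Randomly generated input weights W and biases b (never trained), as in eq. (7).
W = rng.normal(size=(m, M))
b = rng.normal(size=(1, M))

def g(a):
    return 1.0 / (1.0 + np.exp(-a))        # sigmoid activation

H = g(X @ W + b)                           # hidden-layer output matrix, eq. (9)
beta = np.linalg.pinv(H) @ P               # Moore-Penrose solution, eq. (12)

# Prediction for a new fingerprint:
x_new = rng.uniform(-90, -30, size=(1, m))
p_hat = g(x_new @ W + b) @ beta
```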
Equation (12) could be rewritten with optimization theory as

$$\min_{\beta}\; \frac{1}{2}\|\beta\|^{2} + \frac{C}{2}\sum_{j=1}^{Z}\|\xi_j\|^{2} \quad \text{s.t.}\quad h(X_j)\beta = P_j^{T} - \xi_j^{T}, \; j = 1, \cdots, Z, \tag{13}$$

where C denotes the regularization coefficient, h(X_j) is the hidden-layer output for sample X_j, and ξ_j represents the prediction error of the ground truth relative to the estimated output. Equation (13) could be solved by the KKT optimal conditions, giving

$$\beta = H^{T}\left(\frac{I}{C} + HH^{T}\right)^{-1} P. \tag{14}$$

Ω_ELM is defined as the kernel matrix by applying the Mercer condition:

$$\Omega_{ELM} = HH^{T}, \qquad \Omega_{ELM}(i, j) = h(X_i)\cdot h(X_j) = K(X_i, X_j), \tag{15}$$

where K(x_i, x_j) denotes the kernel function, which represents the element in Ω_ELM at the jth column and ith row. The output of the hidden layer could then be expressed as [26]

$$f(X) = \begin{bmatrix} K(X, X_1) \\ \vdots \\ K(X, X_Z) \end{bmatrix}^{T} \left(\frac{I}{C} + \Omega_{ELM}\right)^{-1} P. \tag{16}$$

In short, the training process of KELM could be mainly summarized in three steps: (1) randomly set the bias b of the hidden layer and the weight matrix W between the input layer and the hidden layer; (2) select the kernel function K(·,·) and the regularization coefficient C, and compute the kernel matrix Ω_ELM according to (15); (3) compute the network output for new inputs according to (16). A minimal code sketch of this kernel-based training is given after this section.

The proposed SWT-KELM positioning system then works in two stages:

(1) Offline training stage: the framework of the offline creation procedure is demonstrated in Algorithm 1.

(2) Online positioning stage: when a user launches a positioning request on the smartphone, the smartphone collects RSS in real time and sends the RSS measurements to the server. Then, the RSS measurements are fed into the SWT-KELM network for position prediction. In the end, the estimated position information is sent to the smartphone and read by the user.
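The kernel-based closed form in (14)-(16) can be sketched as follows with an RBF kernel; the kernel width, regularization coefficient, and toy data are assumptions for illustration, not the parameters used in the experiments.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.05):
    """K(a, b) = exp(-gamma * ||a - b||^2) for all pairs of rows in A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

class KELM:
    def __init__(self, C=100.0, gamma=0.05):
        self.C, self.gamma = C, gamma

    def fit(self, X, P):
        self.X_train = X
        omega = rbf_kernel(X, X, self.gamma)                         # kernel matrix, eq. (15)
        n = X.shape[0]
        self.alpha = np.linalg.solve(np.eye(n) / self.C + omega, P)  # (I/C + Omega)^(-1) P
        return self

    def predict(self, X):
        return rbf_kernel(X, self.X_train, self.gamma) @ self.alpha  # eq. (16)

# Toy usage with random fingerprints and 2-D positions.
rng = np.random.default_rng(0)
X_train, P_train = rng.uniform(-90, -30, (100, 8)), rng.uniform(0, 30, (100, 2))
model = KELM().fit(X_train, P_train)
print(model.predict(X_train[:3]))
```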
Experimental Validation
In this section, numerous experiments were carried out with the purpose of assessing the performance of the proposed SWT-KELM positioning algorithm. We first depict the deployment of the experiment and the architecture of the proposed IPS, then analyze the results of the experiment and evaluate the performance of our IPS.
Architecture of IPS Based on Smartphone.
The architecture of the proposed smartphone-based IPS is presented in Figure 6. The proposed IPS mainly includes three main components: the existing commercial WiFi routers, mobile phones, and a server. The smartphone used in the experiment is OPPO R7sm. An Android APP for data collection was developed to synchronously collect RSS data and then send RSS data to the server. Meanwhile, a web-based monitor system on the server was developed to allow the server to communicate directly with the mobile phone for receiving collected RSS measurements. The RSS from each AP (TP-Link WDR6500) is collected at each reference point using the smartphone that has the installed APP we developed. Then, the smartphone sends all the collected RSS values to the server (DELL PowerEdge T630). When all data is collected and sent, the server processes the data to form a database. The next step is to train the SWT-KELM model with the built database. Finally, in the online positioning phase, when users request real-time positioning, RSS measurements are collected and sent by smartphone in real time, and then, the well-trained SWT-KELM model is utilized to estimate the current location on the server.
Data Collection.
Our test scenario is a laboratory for industrial robotics which covers a 32 m × 16 m area. The layout of the industrial robotics laboratory for the experiment is demonstrated in Figure 1. There are 23 undergraduate students working and studying in this laboratory regularly. In order to include more features of dynamic environments, our experiments are conducted over five weeks, which greatly increases the robustness of the data for real-time positioning. There are 100 reference points and 20 testing points collected in our experiments, as shown in Figure 1. Reference points are represented by red dots and testing points are demonstrated by black triangles. The distance between two adjacent sample points is set to 1.2 m. The collection interval for the experiments is set to 200 ms. According to previous investigations, using more APs, training points, and packages leads to better performance. However, data acquisition with a larger dataset will lead to high labor costs and computation costs. In order to make a trade-off between cost and accuracy, 500 sets of RSS vector data are collected at each training and testing point. During the offline stage, 100 offline calibration points were selected; at each offline point, 500 packet RSS receptions from 8 routers are collected. Thus, 100 × 500 RSS records were used to establish the SWT location fingerprint database. The SWT-KELM model is trained on the server with the SWT fingerprints and their physical coordinates as inputs and outputs, respectively. During the online localization stage, the RSS measurements were collected in different periods of a week to reflect the environmental dynamics caused by human movements and physical layout variations. The position of the user is estimated by adopting the well-trained SWT-KELM model with the raw online RSS.
Parameter Setting
4.3.1. The Type of Activation Function. Based on the above analysis, it could be found that kernel function type has a major impact on the prediction accuracy of KELM. There are two commonly used kernel functions in the KELM algorithm: RBF-kernel and lin-kernel. We evaluate the localization result of different kernel functions. As depicted in Figure 7, the RBF-kernel has better positioning accuracy than lin-kernel in our settings. Thus, RBF-kernel is employed as the kernel function for our proposed SWT-KELM positioning algorithm.
The Number of Hidden Nodes.
Another essential parameter for the SWT-KELM is the number of neurons. After the kernel function is determined, we employed the fivefold cross-validation approach with a range from 100 to 1500 and a step size of 10 to set the optimal number of neurons. As demonstrated in Figure 8, the capability of SWT-KELM is optimal when the number of neurons is 400.
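Hyperparameter selection by fivefold cross-validation can be sketched as below; the fold construction, the RMSE scoring, and the train_elm/predict_elm wrappers named in the final comment are hypothetical placeholders, not interfaces from the paper.

```python
import numpy as np

def kfold_indices(n, k=5, seed=0):
    """Split n sample indices into k roughly equal folds."""
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)

def cross_val_rmse(train_fn, predict_fn, X, P, k=5):
    """Mean positioning RMSE over k folds for a given train/predict pair."""
    folds, errs = kfold_indices(len(X), k), []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = train_fn(X[train], P[train])
        diff = predict_fn(model, X[test]) - P[test]
        errs.append(np.sqrt((diff ** 2).sum(axis=1).mean()))
    return float(np.mean(errs))

# Example: score each candidate hidden-node count with hypothetical wrappers
# train_elm(X, P, n_nodes) / predict_elm(model, X) and keep the best value.
candidates = range(100, 1501, 10)
# best = min(candidates, key=lambda n: cross_val_rmse(
#     lambda X, P: train_elm(X, P, n), predict_elm, X_all, P_all))
```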
Experimental Results and Evaluation.
In this paper, we utilize Root Mean Square Error (RMSE) and standard deviation (STD) as evaluation indicators to appraise the experimental results:

$$\text{RMSE} = \sqrt{\frac{1}{s}\sum_{i=1}^{s}\left[(\hat{x}_i - x_i)^2 + (\hat{y}_i - y_i)^2\right]}, \qquad \text{STD} = \sqrt{\frac{1}{s}\sum_{i=1}^{s}\left(d_i - \bar{d}\right)^2},$$

where (x̂_i, ŷ_i) and (x_i, y_i) are the estimated and true coordinates of the ith test sample, d_i is the corresponding positioning error, d̄ is the mean positioning error, and s denotes the total number of test samples. In our experiment, s is 10,000.
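A small sketch of how these two indicators can be computed from predicted and true coordinates; the arrays below are hypothetical.

```python
import numpy as np

# Hypothetical predicted and ground-truth 2-D positions for a few test samples.
pred = np.array([[1.0, 2.0], [10.5, 4.0], [20.0, 9.5]])
true = np.array([[1.5, 2.5], [11.0, 3.0], [19.0, 10.0]])

err = np.linalg.norm(pred - true, axis=1)   # per-sample positioning error (m)
rmse = np.sqrt(np.mean(err ** 2))           # root mean square error
std = err.std()                             # standard deviation of the errors
print(rmse, std)
```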
Comparison of Localization Accuracy with Different Localization Algorithms

Four classical localization methods, BYS, KNN, ELM, and OS-ELM, are chosen to further compare their performance when SWT and RSS are employed as fingerprint characteristics, respectively. The experimental results of the different algorithms are shown in Table 1 and Figure 9. Table 1 depicts that the proposed SWT-KELM algorithm exceeds the other four algorithms in terms of RMSE and STD. It can be easily observed that the positioning performance of each algorithm with RSS fingerprint characteristics is not as good as that with SWT, indicating that our standardized waveform tendency method is better than the original RSS fingerprint method. In addition, it can be seen that the ELM-based algorithms are better than the other algorithms.
As demonstrated in Table 1, the proposed algorithm incurs longer training time due to the combination of standardized RSS and kernel functions. What needs to be specifically mentioned is that our proposed algorithm only occupies a small amount of time during the online positioning stage, which can verify that our IPS could satisfy the needs of actual applications. Figure 10 displays a comparison of cumulative percentiles of positioning error, which demonstrates that our proposed SWT-KELM algorithm could obtain higher accuracy and lower STD than the other algorithms. We also compare our proposed IPS with state-of-the-art WiFi-based positioning systems such as STI-WELM [29].

Figure 6: System architecture of the proposed IPS.

(1) Impact of different numbers of APs. Furthermore, we also evaluated the positioning error of the different algorithms with different numbers of APs. As demonstrated in Figure 11, the positioning error of all five algorithms increases with the reduction in the number of APs. It is easy to see that SWT-KELM is superior to the other approaches in various situations. An increase in the number of APs is equal to an increase in the number of features. The experimental results agree with the theory that an increase in the number of features is conducive to the prediction accuracy of machine learning algorithms. From Figure 11, we also notice that when the number of APs changes, the positioning error of our proposed SWT-KELM is relatively stable. It is worth noting that the positioning error of SWT-KELM is the smallest when there are only three APs available within indoor environments.

(2) Performance evaluation of SWT-KELM under the influence of different numbers of training points. We compare the performance of SWT-KELM with other localization methods when the number of training points is altered.
We consider three different settings of training points for this experiment. The locations of training points and testing points are the same as those in Data Collection. The overall performance in terms of RMSE between these approaches under different numbers of training points is demonstrated in Table 2. It can be easily found from Table 2 that the average localization error of all algorithms becomes better when more training points have been trained offline. What is more, as shown in Table 2, SWT-KELM outperforms other methods in every situation.
Performance evaluation of SWT-KELM under the influence of different locations of training points. In order to investigate the influence of the locations of training points, we compare the performance of SWT-KELM with different locations of training points and testing points. Different from the locations shown in Figure 1, we randomly select training points and testing points from this dataset. The number of training points and testing points is 100 and 20, respectively. The RMSE is measured over 2,000 repeated realizations, and this experiment is conducted with the same algorithm parameters. Figure 12 shows the localization errors of the different algorithms when the locations of training and testing points are randomly selected. As illustrated in Figure 12, the localization accuracy of SWT-KELM is higher than that of the other approaches.
In summary, SWT-based localization algorithms can provide higher accuracy than traditional RSS-based fingerprint approaches. The proposed SWT fingerprints have an obvious effect in reducing localization error caused by environmental dynamics. Furthermore, SWT-KELM, which integrates the advantages of both SWT and KELM, can effectively reduce localization error compared with existing methods in a representative indoor environment.
Conclusion
In this paper, we propose an SWT-KELM-based indoor localization system using smartphones. Different from existing fingerprint-based methods, the proposed algorithm standardizes the waveform tendency of RSS to enhance robustness to noise in the IPS measurements. The positioning accuracy of the proposed IPS under different experimental settings is further discussed. Extensive experiments are carried out, and the results show that, compared with state-of-the-art baselines, our proposed SWT-KELM method is more accurate, efficient, and robust.
Data Availability
Data is available on request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
|
v3-fos-license
|
2023-03-09T16:13:38.247Z
|
2023-03-07T00:00:00.000
|
257415783
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.7717/peerj.14866",
"pdf_hash": "bf1ad97649fab274e0a3bbfb191d0530591c6c4e",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42394",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"sha1": "b6c7132f1a2688f332879d82ddb4f463cb29a11d",
"year": 2023
}
|
pes2o/s2orc
|
Quantitative trait loci associated with straighthead-resistance used for marker assisted selection in rice (Oryza sativa L.) RIL populations
Straighthead is a physiological disorder of rice (Oryza sativa L.) that causes dramatic yield loss in susceptible cultivars. This disorder is found worldwide and is reported to occur increasingly in the southern United States. Breeding for genetic resistance has been considered one of the most efficient methods for straighthead prevention because the traditional prevention method wastes water and labor. In this study, we analyzed the genetic effects of five straighthead quantitative trait loci (QTLs), namely, AP3858-1 (qSH-8), RM225 (qSH-6), RM2 (qSH-7), RM206 (qSH-11), and RM282 (qSH-3), on recombinant inbred lines (RILs) developed from the Jing185/Cocodrie and Zhe733/R312 populations, using our five previously identified markers linked to these QTLs. RILs with resistant alleles at the four loci AP3858-1, RM225, RM2, and RM206 exhibited the highest straighthead resistance, suggesting that these four markers can be used to efficiently select straighthead-resistant RILs. Furthermore, by using AP3858-1, we successfully obtained five straighthead-resistant RILs with more than 50% genetic similarity to Cocodrie. These markers and RILs can be used for future straighthead-resistance breeding through marker-assisted selection.
INTRODUCTION
Straighthead is a physiological disorder of rice that is characterized by sterile florets and distorted spikelets (Yan et al., 2005). It can leave rice kernels empty and panicles erect or unable to head out. As a result, straighthead often causes dramatic yield loss in susceptible cultivars (Dilday et al., 2000). Straighthead was first reported in the US (Wells & Gilmour, 1977) and has since been found in Japan (Takeoka, Tsutsui & Matsuo, 1990), Australia (Dunn et al., 2006), Portugal (Cunha & Baptista, 1958), Thailand (Weerapat, 1979), and Argentina. It has become a major threat to rice production in the southern US and worldwide.
According to previous studies, straighthead can be caused by numerous factors, such as sandy to silt loam-textured soils (Ehasanullah & Meetu, 2018), low free iron and low pH in soil (Hua et al., 2011; Huang et al., 1997), the presence of As, Mn, Ca, and S, and soil organic matter (Hua et al., 2011; Hulbert & Bennetzen, 1991). In the southern U.S., arsenic-based herbicides such as monosodium methanearsonate (MSMA) have been widely applied in cotton-growing areas, so arsenic (As) residues often remain in paddies. As toxicity in rice induces a series of symptoms, such as decreases in plant height and tiller number (Kang et al., 1996), reduction in shoot and root growth (Dasgupta et al., 2004; Rahman et al., 2012), inhibition of seed germination (Shri et al., 2009; Rahman et al., 2012), decline in chlorophyll content and photosynthesis, and sometimes plant death (Rahman et al., 2007). Notably, As can cause typical straighthead symptoms in susceptible rice cultivars in MSMA-applied soil (Rahman et al., 2008; Lomax et al., 2012). Thus, MSMA application is a common method of inducing and evaluating rice straighthead (Wilson Jr et al., 2001).
For straighthead prevention, one method is a water-management practice called ''draining and drying'' (D&D). In this method, farmers drain their rice field about two weeks after establishing a permanent flood and then delay reflooding until the rice leaves exhibit drought stress symptoms (Rasamivelona, Kenneth & Robert, 1995; Slaton et al., 2000). In Arkansas, one-third of the rice fields apply the D&D method, which results in approximately 150 million m³ of wasted irrigation water every year (Wilson Jr & Runsick, 2008). Clearly, the method consumes natural resources and manpower and can also lead to drought-related yield loss.
Resistance breeding is considered the most efficient and environmentally friendly strategy for straighthead prevention. A number of resistant germplasms have been identified, and the genetic basis of straighthead has been examined (Yan et al., 2002; Pan et al., 2012). Marker-assisted selection (MAS) has been used in resistance breeding for many years and has been demonstrated to be a feasible strategy in multiple crops (Yan et al., 2005). In our previous study (Pan et al., 2012), we constructed two recombinant inbred line (RIL) F9 populations using two resistant parents (Zhe733 and Jing185) and the susceptible parents Cocodrie and R312. Five quantitative trait loci (QTLs), namely, qSH-3, qSH-6, qSH-7, qSH-8, and qSH-11, were identified to be associated with straighthead via linkage mapping using the two RIL populations. Four QTLs were determined for the Zhe733/R312 population, and two QTLs (qSH-3 and qSH-8) were identified for the Cocodrie/Jing185 population. Of these QTLs, qSH-8, which is 290 kb long and located on chromosome 8, was identified in both populations. Moreover, the presence of qSH-8 was confirmed in the F2 and F2:3 populations of Zhe733/R312 (Li et al., 2016b). Therefore, qSH-8 was proven to be a major QTL for straighthead resistance. Furthermore, five markers, namely, RM282, RM225, RM2, AP3858-1, and RM206 (Table S1), were associated with the five aforementioned QTLs, respectively.
Arkansas accounts for a large part of rice production in the U.S. However, as previously mentioned, many cultivars grown in this region are highly susceptible to straighthead. For instance, Cocodrie, a major cultivar grown in Arkansas, lost up to 94% of its yield when straighthead occurred (Linscombe et al., 2000; Wilson Jr et al., 2001). Thus, genetically improving straighthead resistance is necessary to ensure high rice yields. In the present study, our objective was to identify RILs in the Cocodrie/Jing185 population that carry straighthead-resistant QTLs and have agronomic traits and genetic backgrounds similar to Cocodrie, for use in further resistance breeding.
Phenotyping
Both the Zhe733/R312 and Cocodrie/Jing185 populations were planted in MSMA-treated soil at the Dale Bumpers National Rice Research Center near Stuttgart, Arkansas, for two years (2010 and 2011). Using a randomized complete block design, the RILs of the two F9 populations were planted in single-row field plots (0.62 m²) with three replications, as previously described (Pan et al., 2012). MSMA was applied to the soil surface at 6.7 kg ha⁻¹ and incorporated prior to planting, as previously described (Yan et al., 2005). The four parents (Zhe733, R312, Cocodrie, and Jing185) were repeatedly planted in each field tier of 99 rows as controls. Field management was performed as previously described (Yan et al., 2008).
Evaluation of straighthead rating was based on floret sterility and panicle development using a scale of 1 to 9 at the maturity stage (Yan et al., 2005). A score of 1 represented normal plants with panicles fully emerged and more than 80% of grains developed, whereas 9 represented sterile plants with no panicle emergence and a complete absence of developed grains. Based on our previous research, RILs with a score of 4.0 or below were resistant and had seed sets of 41%-60% or higher, whereas RILs with a score of 6.0 or above were susceptible and had seed sets of 11%-20% or lower (Li et al., 2016b).
The Cocodrie/Jing185 population was then planted in clean soil without MSMA at Dale Bumpers National Rice Research Center near Stuttgart, Arkansas for two years (2010 and 2011). To ensure a reliable evaluation, we performed water management to prevent straighthead. We conducted a randomized complete block design for the field experiments. RILs were planted in single-row field plots (0.62 m 2 ) with three replications each year. The parents were repeatedly planted in a field tier of 99 rows as controls.
Evaluations of the heading date, height, and tillers were conducted in the field. The heading date for each plot was recorded when 50% of the panicles had emerged from the rice culms, as determined using visual estimation. The height and tillers of each plot were assessed at the mature stage using three central individuals, and the plant height was measured from the ground to the tip of the rice panicle (Counce, Keisling & Mitchell, 2000). The three central individuals of each plot were then harvested and air-dried in a greenhouse for biomass evaluation.
Genotyping and genetic analysis
DNA was extracted from each RIL of the two populations and their parents following the CTAB method described by Hulbert & Bennetzen (1991). The straighthead-linked markers, namely, RM282, RM225, RM2, AP3858-1, and RM206, were used to screen the RILs of the two populations.
DNA amplification was performed as previously described (Pan et al., 2012). For genotyping, alleles corresponding to the resistant or susceptible parent were noted as ''a'' or ''b,'' respectively. RILs carrying both alleles were noted as ''h,'' and missing data were noted as ''.''. According to our previous report, ''a'' was the resistant allele and ''b'' the susceptible allele at each QTL locus of the Zhe733/R312 population. In the Cocodrie/Jing185 population, ''a'' was the resistant allele and ''b'' the susceptible allele at the qSH-8 locus, whereas ''a'' was the susceptible allele and ''b'' the resistant allele at the qSH-3 locus. RILs with straighthead ratings ≤ 4.0 were selected for further allelic analysis using a number of markers. These markers, including RM225, RM2, RM206, RM282, and AP3858-1, were associated with straighthead resistance (Pan et al., 2012) and can be useful in MAS.
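For illustration only, a minimal sketch of how genotypic similarity to a recurrent parent could be computed from such marker calls is shown below. The coding ('a', 'b', 'h', '.') follows the notation above, but the marker data and the treatment of heterozygous and missing calls are assumptions for the example, not the authors' procedure:

```python
# Hypothetical genotype calls for a handful of markers; 'b' is taken here as
# the Cocodrie-type allele, 'h' as heterozygous, '.' as missing.
cocodrie_like_allele = "b"

def similarity_to_cocodrie(calls):
    """Fraction of scored markers carrying the Cocodrie-type allele;
    heterozygous calls count as half, missing calls are excluded."""
    scored = [c for c in calls if c != "."]
    if not scored:
        return float("nan")
    score = sum(1.0 if c == cocodrie_like_allele else 0.5 if c == "h" else 0.0
                for c in scored)
    return score / len(scored)

ril_calls = ["b", "b", "a", "h", ".", "b", "a", "b"]
print(round(similarity_to_cocodrie(ril_calls), 3))
```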
Identification of RILs and statistical analysis
In the Cocodrie/Jing185 population, RILs with over 50% Cocodrie genetic background were selected for further analysis. The agronomic traits of these selected RILs were analyzed using analysis of variance (ANOVA). Duncan's multiple range test was performed between the selected RILs and Cocodrie based on the agronomic traits. RILs with different allele combinations were compared with RILs without any resistant alleles (RWARA) using the F-test and t-test. All these statistical procedures were conducted using SAS software v9.1 (SAS Institute Inc., Cary, NC, USA).
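The paper reports that these procedures were run in SAS v9.1; purely as an illustration, an equivalent kind of comparison could be sketched in Python as follows (the ratings and group labels below are made up):

```python
import pandas as pd
from scipy import stats

# Hypothetical straighthead ratings (1-9 scale) for RILs grouped by allele
# combination; "RWARA" = RILs without any resistant alleles.
rils = pd.DataFrame({
    "allele_combo": ["RWARA"] * 3 + ["AP3858-1"] * 3 + ["AP3858-1+RM225+RM2+RM206"] * 3,
    "rating":       [8.6, 7.9, 8.2, 4.5, 4.2, 3.8, 2.4, 2.9, 2.1],
})

# One-way ANOVA (F-test) across all allele-combination groups
groups = [g["rating"].to_numpy() for _, g in rils.groupby("allele_combo")]
print("one-way ANOVA:", stats.f_oneway(*groups))

# Pairwise t-tests of each combination against the RWARA baseline
baseline = rils.loc[rils["allele_combo"] == "RWARA", "rating"]
for combo, g in rils.groupby("allele_combo"):
    if combo != "RWARA":
        print(combo, stats.ttest_ind(g["rating"], baseline))
```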
Two SSR markers linked to straighthead-related QTLs, RM282 (qSH-3, a susceptibility QTL) and AP3858-1 (qSH-8, a resistance QTL), were identified in the Cocodrie/Jing185 population in a previous study (Pan et al., 2012). Four RILs were selected for comparison based on their straighthead ratings. The two parents (the susceptible parent ''Cocodrie'' with a straighthead rating of 9.3 and the resistant parent ''Jing185'' with a rating of 2.2) were set as controls. The results (Fig. 2B) show that RIL CJ-405, which has no resistant alleles at either locus, showed a very high straighthead rating of 9.0. CJ-522, with one resistant allele at RM282, showed a straighthead rating of 7.2. CJ-407, which has resistant alleles only at AP3858-1, showed a straighthead rating of 2.7. Furthermore, CJ-427, which has resistant alleles at both loci, showed a straighthead rating of 1.8. Clearly, qSH-8 made the highest contribution to resistance. Therefore, the RILs CJ-407 and CJ-427, which carry the major resistance QTL, can be used as elite lines for future straighthead-resistance breeding programs.
Agronomic analysis of both RIL populations and straighthead-resistant RILs
When we performed water management, we did not observe straighthead symptoms in either parent or in the 91 RILs of the Cocodrie/Jing185 population, indicating that the water management effectively prevented straighthead in these plots.
A total of 27 straighthead-resistant RILs with at least one resistant allele at AP3858-1 were selected for analysis. Afterward, 166 polymorphic markers were used to compare the genetic backgrounds of the selected RILs with that of their susceptible parent Cocodrie. The results show that five RILs, namely, CJ-404, CJ-407, CJ-479, CJ-480, and CJ-506, shared more than 50% of their genotypic background with Cocodrie (Table 4), with the highest genetic similarity being 66.0%. These RILs and the two parents were then subjected to phenotypic similarity analyses using Duncan's multiple range test (Tables 5 and 6). Significant differences were observed between the heading dates of Cocodrie and all RILs (Table 6). CJ-479 had the longest heading date among the RILs, whereas CJ-480 had the shortest (Table 5). Significant differences in plant height were also observed between all RILs and Cocodrie, except for CJ-480 (Table 6). CJ-479 had the tallest plants, whereas CJ-506 had the shortest (Table 5). However, no significant differences were observed in the tillers and biomass of the RILs with a Cocodrie background (Table 6). In conclusion, all five RILs with greater than 50% genotypic similarity to Cocodrie showed high yields similar to Cocodrie's. These RILs are potential germplasms for straighthead-resistance breeding.
DISCUSSION
With the discovery and application of molecular markers in the late 1970s, MAS has provided a time-saving and purpose-directed strategy for plant breeding that is superior to the conventional strategy. Previous studies have reported MAS applications in different species and traits (Chen et al., 2008; Huang et al., 1997; Li et al., 2018; Zhao et al., 2012). According to our previous report (Pan et al., 2012), the straighthead-resistance QTL qSH-8 accounted for approximately 67% of the phenotypic variation in the Cocodrie/Jing185 population, much higher than that of any other QTL. In the present study, AP3858-1, tightly linked to the major straighthead-resistance QTL qSH-8, was used to screen 91 RILs from the Cocodrie/Jing185 population. The results show that 22 RILs with the resistant allele at qSH-8 (AP3858-1) showed a mean straighthead rating of 4.51 (moderately resistant). This result suggests that AP3858-1 is a reliable marker for straighthead-resistance selection. The three other QTLs in the Zhe733/R312 population, namely, qSH-6, qSH-7, and qSH-11, accounted for 13%, 12%, and 8% of the phenotypic variation, respectively. Although these three QTLs accounted for much less variation than qSH-8, they can still be useful when applied in other genetic backgrounds and can also help us understand the genetic structure of the trait of interest. For instance, of the 49 QTLs for 14 rice traits reported by Wang et al. (2011), eight were related to spikelet number per panicle and to 1,000-grain yield, accounting for approximately 8% and 10% of the phenotypic variation, respectively. These QTLs were introduced into chromosome segment substitution lines, which exhibited increased panicle and spikelet sizes compared with their parent 93-11 (Zong et al., 2012). Based on our study, RILs that pyramid all three QTLs showed increased levels of straighthead resistance compared with the susceptible parent R312. This result suggests that the three QTLs can be used in MAS for resistance breeding. In our study, the QTLs were related to MSMA-induced straighthead. In previous studies on As-plant interactions, a number of QTLs were identified that correlate with As tolerance (Ehasanullah & Meetu, 2018; Syed et al., 2016; Xu et al., 2017) and accumulation (Song et al., 2014; Wang et al., 2016; Yamaji & Ma, 2011). Interestingly, some of these QTLs share regions with our straighthead-resistance QTLs in rice. For instance, Syed et al. (2016) reported three QTLs, namely, qAsTSL8, qAsTRL8, and qAsTRSB8, which were associated with shoot length, root length, and root-shoot biomass under As stress, respectively. Wang et al. (2016) reported a gene, OsPT8, related to AsV transport in root cells and root-elongation inhibition. Kuramata et al. (2013) reported qDMAs6.2, which was associated with As accumulation in rice grains. Thus, researchers have already connected straighthead to As accumulation. Yan et al.
(2008) reported that the As concentration in the straighthead-resistant cultivar Zhe733 was much lower than that in the susceptible cultivar Cocodrie when the two were planted under the same soil conditions. Hua et al. (2011) also found that the As concentration in Cocodrie was nearly three times higher than that in Zhe733 when the two were grown in MSMA-treated soil. Therefore, the straighthead-resistance QTLs may also confer tolerance to As stress. These QTLs will help in understanding the mechanisms behind As transport and accumulation in plants.
Although breeding for straighthead resistance has been conducted since the 1950s, little progress was made until 2002 (Yan et al., 2002). One of the most important factors was the lack of resistant germplasms in the US. The southern United States produces over 80% of the nation's rice, and 90% of the cultivars grown there are tropical japonica (Mackill & Mckenzie, 2002); most of these cultivars are susceptible to straighthead. In previous studies, 42 resistant accessions were identified from a survey of 1,002 germplasms collected worldwide. None of these accessions were japonica (Agrama & Yan, 2010), whereas most of the resistant accessions were classified into the indica subspecies. Straighthead resistance therefore likely comes from indica and could be used to improve the susceptible cultivars grown in the southern U.S. In fact, the two resistant parents in the present study are both indica accessions. However, incompatibilities between the two subspecies were observed. Straighthead evaluation is based on rice sterility; therefore, the incompatibility made it challenging to obtain well-developed seeds and may have also caused bias when the straighthead resistance of the offspring was evaluated. In our previous research (Pan et al., 2012), for instance, 13 RILs with resistant alleles showed high straighthead ratings in some cases because of the incompatibility between the two subspecies. In the present study, we identified five F9 RILs from the cross between japonica Cocodrie and indica Jing185. These RILs carry the major straighthead-resistance QTL qSH-8 and are similar to Cocodrie both genotypically and phenotypically. The results suggest that these five F9 RILs, which combine a japonica genetic background with straighthead resistance, are potential lines for developing japonica cultivars in straighthead-resistance breeding.
CONCLUSIONS
This study suggests that qSH-8 is a major QTL for straighthead resistance and that AP3858-1, which is linked to qSH-8, is an ideal tool for marker-assisted breeding for straighthead resistance. Five RILs from the Cocodrie/Jing185 F9 population contained the resistant alleles of qSH-8. In addition, these RILs had more than 50% genotypic background similarity to Cocodrie. Compared with Cocodrie, these lines exhibited significant differences in heading date and plant height but no significant differences in tillers and biomass. Most importantly, these RILs exhibited high yields similar to Cocodrie's. The genotypically and phenotypically diverse RILs are potential germplasms that can be used in straighthead-resistance breeding.
|
v3-fos-license
|
2019-04-26T13:09:05.177Z
|
2019-04-09T00:00:00.000
|
131773668
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://eprints.whiterose.ac.uk/196491/1/2019_repeval.pdf",
"pdf_hash": "96a94482a9a328104c534bec8b9f89b580eb2175",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42395",
"s2fieldsofstudy": [
"Computer Science"
],
"sha1": "d1f25d49bbbb2244e3639cb870623b1b831a76e8",
"year": 2019
}
|
pes2o/s2orc
|
Characterizing the Impact of Geometric Properties of Word Embeddings on Task Performance
Analysis of word embedding properties to inform their use in downstream NLP tasks has largely been studied by assessing nearest neighbors. However, geometric properties of the continuous feature space contribute directly to the use of embedding features in downstream models, and are largely unexplored. We consider four properties of word embedding geometry, namely: position relative to the origin, distribution of features in the vector space, global pairwise distances, and local pairwise distances. We define a sequence of transformations to generate new embeddings that expose subsets of these properties to downstream models and evaluate change in task performance to understand the contribution of each property to NLP models. We transform publicly available pretrained embeddings from three popular toolkits (word2vec, GloVe, and FastText) and evaluate on a variety of intrinsic tasks, which model linguistic information in the vector space, and extrinsic tasks, which use vectors as input to machine learning models. We find that intrinsic evaluations are highly sensitive to absolute position, while extrinsic tasks rely primarily on local similarity. Our findings suggest that future embedding models and post-processing techniques should focus primarily on similarity to nearby points in vector space.
Introduction
Learned vector representations of words, known as word embeddings, have become ubiquitous throughout natural language processing (NLP) applications. As a result, analysis of embedding spaces to understand their utility as input features has emerged as an important avenue of inquiry, in order to facilitate proper use of embeddings in downstream NLP tasks. Many analyses have focused on nearest neighborhoods, as a viable proxy for semantic information (Rogers et al., * These authors contributed equally to this work. 2018; Pierrejean and Tanguy, 2018). However, neighborhood-based analysis is limited by the unreliability of nearest neighborhoods (Wendlandt et al., 2018). Further, it is intended to evaluate the semantic content of embedding spaces, as opposed to characteristics of the feature space itself.
Geometric analysis offers another recent angle from which to understand the properties of word embeddings, both in terms of their distribution (Mimno and Thompson, 2017) and correlation with downstream performance (Chandrahas et al., 2018). Through such geometric investigations, neighborhood-based semantic characterizations are augmented with information about the continuous feature space of an embedding. Geometric features offer a more direct connection to the assumptions made by neural models about continuity in input spaces (Szegedy et al., 2014), as well as the use of recent contextualized representation methods using continuous language models (Peters et al., 2018;Devlin et al., 2018).
In this work, we aim to bridge the gap between neighborhood-based semantic analysis and geometric performance analysis. We consider four components of the geometry of word embeddings, and transform pretrained embeddings to expose only subsets of these components to downstream models. We transform three popular sets of embeddings, trained using word2vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014), and FastText (Bojanowski et al., 2017), and use the resulting embeddings in a battery of standard evaluations to measure changes in task performance.
We find that intrinsic evaluations, which model linguistic information directly in the vector space, are highly sensitive to absolute position in pretrained embeddings; while extrinsic tasks, in which word embeddings are passed as input features to a trained model, are more robust and rely primarily on information about local similarity between word vectors. Our findings, including evidence that global organization of word vectors is often a major source of noise, suggest that further development of embedding learning and tuning methods should focus explicitly on local similarity, and help to explain the success of several recent methods.
Related Work
Word embedding models and outputs have been analyzed from several angles. In terms of performance, evaluating the "quality" of word embedding models has long been a thorny problem. While intrinsic evaluations such as word similarity and analogy completion are intuitive and easy to compute, they are limited by both confounding geometric factors (Linzen, 2016) and task-specific factors (Faruqui et al., 2016;Rogers et al., 2017). Chiu et al. (2016) show that these tasks, while correlated with some semantic content, do not always predict downstream performance. Thus, it is necessary to use a more comprehensive set of intrinsic and extrinsic evaluations for embeddings. Nearest neighbors in sets of embeddings are commonly used as a proxy for qualitative semantic information. However, their instability across embedding samples (Wendlandt et al., 2018) is a limiting factor, and they do not necessarily correlate with linguistic analyses (Hellrich and Hahn, 2016). Modeling neighborhoods as a graph structure offers an alternative analysis method (Cuba Gyllensten and Sahlgren, 2015), as does 2-D or 3-D visualization (Heimerl and Gleicher, 2018). However, both of these methods provide qualitative insights only. By systematically analyzing geometric information with a wide variety of eval-uations, we provide a quantitative counterpart to these understandings of embedding spaces.
Methods
In order to investigate how different geometric properties of word embeddings contribute to model performance on intrinsic and extrinsic evaluations, we consider the following attributes of word embedding geometry:
• position relative to the origin;
• distribution of feature values in R^d;
• global pairwise distances, i.e., distances between any pair of vectors;
• local pairwise distances, i.e., distances between nearby pairs of vectors.
Using each of our sets of pretrained word embeddings, we apply a variety of transformations to induce new embeddings that only expose subsets of these attributes to downstream models. These are: affine transformation, which obfuscates the original position of the origin; cosine distance encoding, which obfuscates the original distribution of feature values in R^d; nearest neighbor encoding, which obfuscates global pairwise distances; and random encoding. This sequence is illustrated in Figure 1, and the individual transformations are discussed in the following subsections.
General notation for defining our transformations is as follows. Let W be our vocabulary of words taken from some source corpus. We associate with each word w ∈ W a vector v ∈ R^d resulting from training via one of our embedding generation algorithms, where d is an arbitrary dimensionality for the embedding space. We define V to be the set of all pretrained word vectors v for a given corpus, embedding algorithm, and parameters. The matrix of embeddings M_V associated with this set then has shape |V| × d. For simplicity, we restrict our analysis to transformed embeddings of the same dimensionality d as the original vectors.
Affine transformations
Affine transformations have been previously utilized for post-processing of word embeddings. For example, Artetxe et al. (2016) learn a matrix transform to align multilingual embedding spaces, and Faruqui et al. (2015) use a linear sparsification to better capture lexical semantics. In addition, the simplicity of affine functions in machine learning contexts (Hofmann et al., 2008) makes them a good starting point for our analysis.
Given a set of embeddings in R^d, referred to as an embedding space, affine transformations change positions of points relative to the origin.
While prior work has typically focused on linear transformations, which fix the origin, we consider the broader class of affine transformations, which do not. Thus, affine transformations such as translation cannot in general be represented as a square matrix for finite-dimensional spaces.
We use the following affine transformations:
• translations;
• reflections over a hyperplane;
• rotations about a subspace;
• homotheties.
We give brief definitions of each transformation.
Definition 1. For every a ∈ R^d, the translation by a is the map T_a : R^d → R^d given by T_a(v) = v + a.

Definition 2. For every nonzero a ∈ R^d, the map Refl_a : R^d → R^d given by Refl_a(v) = v − 2(⟨v, a⟩ / ⟨a, a⟩) a is the reflection over the hyperplane through the origin orthogonal to a.
Definition 3. A rotation through the span of orthonormal vectors u, x by angle θ is the map Rot_{u,x} : R^d → R^d given by Rot_{u,x}(v) = Mv, where M = I + (cos θ − 1)(uu^T + xx^T) + sin θ (xu^T − ux^T) and I ∈ Mat_{d,d}(R) is the identity matrix.
Definition 4. For every a ∈ R^d and λ ∈ R \ {0}, we call the map H_{a,λ} : R^d → R^d given by H_{a,λ}(v) = a + λ(v − a) a homothety of center a and ratio λ. A homothety centered at the origin is called a dilation.
Parameters used in our analysis for each of these transformations are provided in Appendix A.
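A minimal NumPy sketch of the four affine maps applied to a matrix of row-vector embeddings is given below; the parameter values here are placeholders (the actual parameter library is listed in Appendix A), and the rotation assumes u and x are orthonormal:

```python
import numpy as np

def translate(M, a):
    """Shift every embedding by vector a."""
    return M + a

def reflect(M, a):
    """Reflect over the hyperplane through the origin orthogonal to a."""
    a = a / np.linalg.norm(a)
    return M - 2.0 * (M @ a)[:, None] * a

def rotate(M, u, x, theta):
    """Rotate by angle theta in the plane spanned by orthonormal u, x."""
    d = M.shape[1]
    R = (np.eye(d)
         + (np.cos(theta) - 1.0) * (np.outer(u, u) + np.outer(x, x))
         + np.sin(theta) * (np.outer(x, u) - np.outer(u, x)))
    return M @ R.T

def homothety(M, a, lam):
    """Scale by ratio lam about center a (a dilation when a = 0)."""
    return a + lam * (M - a)

# Toy example: 5 word vectors in R^4
M = np.random.default_rng(0).normal(size=(5, 4))
u, x = np.eye(4)[0], np.eye(4)[1]
out = rotate(reflect(translate(M, np.ones(4)), np.ones(4)), u, x, np.pi / 4)
```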
Cosine distance encoding (CDE)
Our cosine distance encoding transformation obfuscates the distribution of features in R^d by representing a set of word vectors as a pairwise distance matrix. Such a transformation might be used to avoid the non-interpretability of embedding features (Fyshe et al., 2015) and compare embeddings based on relative organization alone.

The cosine distance between two vectors v, ṽ ∈ R^d is defined as d_cos(v, ṽ) = 1 − ⟨v, ṽ⟩ / (‖v‖‖ṽ‖), where the second term is the cosine similarity.
As all three sets of embeddings evaluated in this study have vocabulary sizes on the order of 10^6, use of the full distance matrix is impractical. We instead use a subset consisting of the distance from each point to the embeddings of the 10K most frequent words from each embedding set, yielding a |V| × 10,000 distance matrix. This is not dissimilar to the global frequency-based negative sampling approach of word2vec (Mikolov et al., 2013). We then use an autoencoder to map this back to R^d for comparability.
An autoencoder over R^{|V|} is then defined as h = ϕ(W_e f_CDE(v) + b_e) and v̂ = ϕ(W_d h + b_d), where W_e and W_d are learned weight matrices, b_e and b_d are bias vectors, and ϕ is a nonlinear activation function. Vector h ∈ R^d is then used as the compressed representation of v.
In our experiments, we use ReLU as our activation function ϕ, and train the autoencoder for 50 epochs to minimize the L2 distance between v and v̂. We recognize that low-rank compression using an autoencoder is likely to be noisy, thus potentially inducing additional loss in evaluations. However, precedent for capturing geometric structure with autoencoders (Li et al., 2017b) suggests that this is a viable model for our analysis.
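A rough sketch of the CDE step — cosine distances from every word vector to a set of frequent anchor words — is shown below; the autoencoder compression back to R^d is only indicated in a comment, and all sizes are toy values:

```python
import numpy as np

def cde(M, anchor_idx):
    """Cosine-distance encoding: distance from every row of M to a set of
    anchor rows (e.g., the 10K most frequent words)."""
    Mn = M / np.linalg.norm(M, axis=1, keepdims=True)
    A = Mn[anchor_idx]                 # (k, d) anchor rows, already unit-norm
    cos_sim = Mn @ A.T                 # (|V|, k) cosine similarities
    return 1.0 - cos_sim               # cosine distances

# Toy example: 100 "words", 10 anchors; a real run would use |V| ~ 1e6, k = 10,000
M = np.random.default_rng(0).normal(size=(100, 50))
D = cde(M, anchor_idx=np.arange(10))
# D would then be compressed back to R^d with a one-hidden-layer ReLU autoencoder.
```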
Nearest neighbor encoding (NNE)
Our nearest neighbor encoding transformation discards the majority of the global pairwise distance information modeled in CDE, and retains only information about nearest neighborhoods.
The output of f_NNE(v) is a sparse vector.
This transformation relates to the common use of nearest neighborhoods as a proxy for semantic information (Wendlandt et al., 2018; Pierrejean and Tanguy, 2018). We take the previously proposed approach of combining the output of f_NNE(v) for each v ∈ V to form a sparse adjacency matrix, which describes a directed nearest neighbor graph (Cuba Gyllensten and Sahlgren, 2015; Newman-Griffis and Fosler-Lussier, 2017), using three versions of f_NNE defined below.
Thresholded: The set of non-zero indices in f_NNE(v) corresponds to word vectors ṽ such that the cosine similarity of v and ṽ is greater than or equal to an arbitrary threshold t. In order to ensure that every word has non-zero out-degree in the graph, we also include the k nearest neighbors by cosine similarity for every word vector. Non-zero values in f_NNE(v) are set to the cosine similarity of v and the relevant neighbor vector.

Weighted: The set of non-zero indices in f_NNE(v) corresponds to only the set of k nearest neighbors to v by cosine similarity. Cosine similarity values are used for edge weights.

Unweighted: As in the previous case, only k nearest neighbors are included in the adjacency matrix. All edges are weighted equally, regardless of cosine similarity.
We report results using k = 5 and t = 0.05; other settings are discussed in Appendix B.
Finally, much like the CDE method, we use a second mapping function ψ : R^{|V|} → R^d to transform the nearest neighbor graph back to d-dimensional vectors for evaluation. Following Newman-Griffis and Fosler-Lussier (2017), we use node2vec (Grover and Leskovec, 2016) with default parameters to learn this mapping. Like the autoencoder, this is a noisy map, but the intent of node2vec to capture patterns in local graph structure makes it a good fit for our analysis.
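A sketch of the weighted k-nearest-neighbor variant (k = 5) is given below; the thresholded and unweighted variants differ only in which entries are kept and how edges are weighted, and the dense similarity matrix here would be computed in chunks at real vocabulary sizes:

```python
import numpy as np
from scipy import sparse

def nne_weighted(M, k=5):
    """Sparse adjacency matrix: each word points to its k nearest neighbors
    by cosine similarity, with the similarity as the edge weight."""
    Mn = M / np.linalg.norm(M, axis=1, keepdims=True)
    sims = Mn @ Mn.T                      # dense pairwise cosine similarities
    np.fill_diagonal(sims, -np.inf)       # exclude self-links
    rows, cols, vals = [], [], []
    for i, row in enumerate(sims):
        nbrs = np.argpartition(row, -k)[-k:]   # indices of the k most similar words
        rows.extend([i] * k)
        cols.extend(nbrs.tolist())
        vals.extend(row[nbrs].tolist())
    n = M.shape[0]
    return sparse.csr_matrix((vals, (rows, cols)), shape=(n, n))

A = nne_weighted(np.random.default_rng(0).normal(size=(200, 50)), k=5)
# A is the directed nearest-neighbor graph that node2vec then maps back to R^d.
```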
Random encoding
Finally, as a baseline, we use a random encoding, in which each word is assigned a randomly generated vector. While intrinsic evaluations rely only on input embeddings, and thus lose all source information in this case, extrinsic tasks learn a model to transform input features, making even randomly-initialized vectors a common baseline (Lample et al., 2016; Kim, 2014). For fair comparison, we generate one set of random baselines for each embedding set and re-use these across all tasks.
Other transformations
Many other transformations of a word embedding space could be included in our analysis, such as arbitrary vector-valued polynomial functions, rational vector-valued functions, or common decomposition methods such as principal components analysis (PCA) or singular value decomposition (SVD). Additionally, though they cannot be effectively applied to the unordered set of word vectors in a raw embedding space, transformations for sequential data such as discrete Fourier transforms or discrete wavelet transforms could be used for word sequences in specific text corpora.
For this study, we limit our scope to the transformations listed above. These transformations align with prior work on analyzing and post-processing embeddings for specific tasks, and are highly interpretable with respect to the original embedding space. However, other complex transformations represent an intriguing area of future work.
Evaluation
In order to measure the contributions of each geometric aspect described in Section 3 to the utility of word embeddings as input features, we evaluate embeddings transformed using our sequence of operations on a battery of standard intrinsic evaluations, which model linguistic information directly in the vector space, and extrinsic evaluations, which use the embeddings as input to learned models for downstream applications. Our intrinsic evaluations include standard word similarity and analogy completion benchmarks. We follow Rogers et al. (2018) in evaluating on a set of five extrinsic tasks:
• Relation classification: SemEval-2010 Task 8 (Hendrickx et al., 2010), using a CNN with word and distance embeddings (Zeng et al., 2014).
• Sentence-level sentiment polarity classification: MR movie reviews (Pang and Lee, 2005), with a simplified CNN model from (Kim, 2014).
• Subjectivity/objectivity classification: Rotten Tomato snippets (Pang and Lee, 2004), using a logistic regression over summed word embeddings (Li et al., 2017a).
• Natural language inference: SNLI (Bowman et al., 2015), using separate LSTMs for premise and hypothesis, combined with a feed-forward classifier.

Figure 2 presents the results of each intrinsic and extrinsic evaluation on the transformed versions of our three sets of word embeddings. (Due to their large vocabulary size, we were unable to run thresholded-NNE experiments with word2vec embeddings.) The largest drops in performance across all three sets for intrinsic tasks occur when explicit embedding features are removed with the CDE transformation. While some cases of NNE-transformed embeddings recover a measure of this performance, they remain far under affine-transformed embeddings. Extrinsic tasks are similarly affected by the CDE transformation; however, NNE-transformed embeddings recover the majority of performance.
Analysis and Discussion
Comparing within the set of affine transformations, the innocuous effect of rotations, dilations, and reflections on both intrinsic and extrinsic tasks suggests that the models used are robust to simple linear transformations. Extrinsic evaluations are also relatively insensitive to translations, which can be modeled with bias terms, though the lack of learned models and reliance on cosine similarity for the intrinsic tasks makes them more sensitive to shifts relative to the origin. Interestingly, homothety, which effectively combines a translation and a dilation, leads to a noticeable drop in performance across all tasks. Intuitively, this result makes sense: by both shifting points relative to the origin and changing their distribution in the space, angular similarity values used for intrinsic tasks can be changed significantly, and the zero mean feature distribution preferred by neural models (Clevert et al., 2016) becomes harder to achieve. This suggests that methods for tuning embeddings should attempt to preserve the origin whenever possible.
The large drops in performance observed when using the CDE transformation are likely related to the instability of nearest neighborhoods and the importance of locality in embedding learning (Wendlandt et al., 2018), although the effects of the autoencoder component also bear further investigation. By effectively increasing the size of the neighborhood considered, CDE adds additional sources of semantic noise. The similar drops from thresholded-NNE transformations, by the same token, are likely related to observations of the relationship between the frequency ranks of a word and its nearest neighbors (Faruqui et al., 2016). With thresholded-NNE, we find that the words with highest out-degree in the nearest neighbor graph are rare words (e.g., "Chanterelle" and "Courtier" in FastText, "Tiegel" and "demangler" in GloVe), which link to other rare words. Thus, node2vec's random walk method is more likely to traverse these dense subgraphs of rare words, adding noise to the output embeddings.
Finally, we note that Melamud et al. (2016) showed significant variability in downstream task performance when using different embedding dimensionalities. While we fixed vector dimensionality for the purposes of this study, varying d in future work represents a valuable follow-up.
Our findings suggest that methods for training and tuning embeddings, especially for downstream tasks, should explicitly focus on local geometric structure in the vector space. One concrete example of this comes from Chen et al. (2018), who demonstrate empirical gains when changing the negative sampling approach of word2vec to choose negative samples that are currently near to the target word in vector space, instead of the original frequency-based sampling (which ignores geometric structure). Similarly, successful methods for tuning word embeddings for specific tasks have often focused on enforcing a specific neighborhood structure (Faruqui et al., 2015). We demonstrate that by doing so, they align qualitative semantic judgments with the primary geometric information that downstream models learn from.
Conclusion
Analysis of word embeddings has largely focused on qualitative characteristics such as nearest neighborhoods or relative distribution. In this work, we take a quantitative approach, analyzing geometric attributes of embeddings in R^d in order to understand the impact of geometric properties on downstream task performance. We characterized word embedding geometry in terms of absolute position, vector features, global pairwise distances, and local pairwise distances, and generated new embedding matrices by removing these attributes from pretrained embeddings. By evaluating the performance of these transformed embeddings on a variety of intrinsic and extrinsic tasks, we find that while intrinsic evaluations are sensitive to absolute position, downstream models rely primarily on information about local similarity.
As embeddings are used for increasingly specialized applications, and as recent contextualized embedding methods such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018) allow for dynamic generation of embeddings from specific contexts, our findings suggest that work on tuning and improving these embeddings should focus explicitly on local geometric structure in sampling and evaluation methods. The source code for our transformations and complete tables of our results are available online at https://github.com/OSU-slatelab/geometric-embedding-properties.
Appendix A Parameters
We give the following library of vectors in R^d used as parameter values:
Appendix B NNE settings
We experimented with k ∈ {5, 10, 15} for our weighted and unweighted NNE transformations. For thresholded NNE, in order to best evaluate the impact of thresholding over uniform k, we used the minimum k = 5 and experimented with t ∈ {0.01, 0.05, 0.075}; higher values of t increased graph size sufficiently to be impractical. We report using k = 5 for weighted and unweighted settings in our main results for fairer comparison with the thresholded setting. The effect of thresholding on nearest neighbor graphs was a strongly right-tailed increase in out degree for a small portion of nodes. Our reported value of t = 0.05 increased the out degree of 20,229 nodes for FastText (out of 1M total nodes), with the maximum increase being 819 ("Chanterelle"), and 1,354 nodes increasing out degree by only 1. For GloVe, 7,533 nodes increased in out degree (out of 2M total), with maximum increase 240 ("Tiegel"), and 372 nodes increasing out degree by only 1.
|
v3-fos-license
|
2023-10-22T05:08:48.828Z
|
2023-10-20T00:00:00.000
|
264377501
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.science.org/doi/pdf/10.1126/sciadv.adi2704?download=true",
"pdf_hash": "adde8cdac5ece44a2fc3102db5829ae0387babad",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42397",
"s2fieldsofstudy": [
"Computer Science",
"Psychology"
],
"sha1": "8e1bb3b4f602c919cb8ea82a9ac6f88dedcec15a",
"year": 2023
}
|
pes2o/s2orc
|
Computational mechanisms underlying latent value updating of unchosen actions
Current studies suggest that individuals estimate the value of their choices based on observed feedback. Here, we ask whether individuals also update the value of their unchosen actions, even when the associated feedback remains unknown. One hundred seventy-eight individuals completed a multi-armed bandit task, making choices to gain rewards. We found robust evidence suggesting latent value updating of unchosen actions based on the chosen action’s outcome. Computational modeling results suggested that this effect is mainly explained by a value updating mechanism whereby individuals integrate the outcome history for choosing an option with that of rejecting the alternative. Properties of the deliberation (i.e., duration/difficulty) did not moderate the latent value updating of unchosen actions, suggesting that memory traces generated during deliberation might take a smaller role in this specific phenomenon than previously thought. We discuss the mechanisms facilitating credit assignment to unchosen actions and their implications for human decision-making.
INTRODUCTION
Humans are known to make choices according to the expected value of available actions. A wealth of research has suggested that humans learn action values through an iterative process of trial and error, whereby the value of each action is updated according to observable and temporally adjacent outcomes (1). However, for many decisions, the outcome of actions we deliberated upon but did not commit to remains hidden. For example, we might deliberate on which form of transportation is best to commute to our first day at work, by bus or by train. If we choose to take the bus, the train experience remains unavailable to us, thus limiting our ability to update its value. Reinforcement learning models have extensively examined how humans update action values according to observed outcomes. However, a fundamental question remains regarding whether and how humans may falsely update the value of unchosen actions for which no feedback was observed.
Previous studies suggested that the co-occurrence of the chosen and unchosen actions during deliberation forms a unique memory association due to the shared context (2-4). Accordingly, an unchosen action may linger in one's mind even after a decision has been made, due to memory traces generated during the deliberation (5-7). Memory research has demonstrated the co-occurrence of value updating in associated cognitive representations (8-11). Studies suggest that when memory binds two representations together (e.g., in classical conditioning), a value update of one representation induces an update of the associated second representation (12-15). Therefore, shared episodic memory for the chosen and unchosen actions could lead to corresponding value updates during feedback presentation.
Current research has further argued that instead of a direct value update, unchosen actions may be updated inversely to the chosen action's feedback [i.e., if a chosen action is rewarded, the value of the unchosen action is reduced; (16, 17)]. According to this idea, the context of the deliberation creates a negative association in one's mind between the chosen and unchosen options. The notion here is that a reactivation of the chosen item during value updating should also reactivate the deliberation context where the individual "teased apart" and contrasted the value of the two options. Therefore, reactivation of the deliberation context should lead to an inverse, rather than matching, value update for unchosen actions according to the observed outcome. Another reason to assume that the value of the unchosen action should be inversely updated is that when making a choice, the act of committing to an option is always confounded with a decision to reject the alternative. Therefore, any value assigned to a chosen action might also reinforce the decision to avoid the alternative. For example, if taking the bus rather than the train turns out to lead to the desired outcome, the individual might learn not to take the train the next day, even when it is offered against other forms of transportation.
In a recent pioneering study, Biderman and Shohamy (17) examined value updating for unchosen options in a single-shot decision paradigm and found inverse value updating of unchosen options according to the chosen option's outcome. However, while this research provided a compelling first line of evidence, several key questions are left unanswered. First, it is still unknown whether inverse value updating for unchosen options will be observed in a sequential decision task where options are reoffered for selection multiple times and individuals learn the long-run expected value of each alternative. Second, Biderman and Shohamy suggested that the unique context of the deliberation, where the individual needs to tease apart the values of the alternatives, leads to a negative and inverted memory association between the value representations of the offered options. This contrastive binding of options in memory is then assumed to lead to the unchosen action's value being inversely updated following the feedback for the chosen action. Following this theoretical notion, we examined whether estimates that are well known as indicators of the deliberation process [i.e., reaction time (RT) and choice difficulty; (18)] moderate the extent of value updating for unchosen actions. Third, the exact latent computational mechanism according to which unchosen values are updated remains unknown.
In the current study, we used a reinforcement learning paradigm together with computational modeling to explore the mechanisms underlying the value update of unchosen actions. Specifically, participants completed a multi-armed bandit task during which they were asked to make card selections to gain monetary rewards. In each trial, participants were offered two cards (randomly selected by the computer from a deck of four cards) and then were presented with the monetary outcome associated with the chosen card. The paradigm allowed us to tease apart the value of the unchosen from the chosen card by selectively analyzing trials in which a previously unchosen card was offered with a different third card that was not presented in the previous deliberation trial (see Fig. 1). The temporal adjacency of deliberation and outcome embedded in our design allowed us to test the extent to which the inverse value update of unchosen actions was dependent on the properties of the deliberation process. The use of adjacent and repeated choices and outcomes allowed us to further investigate, using computational modeling, the prediction-error mechanism underlying the value update of unchosen actions (19-22).
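The exact task parameters (drift magnitude, bounds, and block structure) are given in the paper's methods; purely as an illustration of the structure just described, a small Python sketch of such a task might look like this (all numeric values are placeholders, not the study's settings):

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_cards = 200, 4
p = np.full(n_cards, 0.5)                # starting reward probabilities
trials = []
for t in range(n_trials):
    offered = rng.choice(n_cards, size=2, replace=False)   # two of four cards
    choice = rng.choice(offered)         # stand-in for the participant's choice
    reward = rng.random() < p[choice]    # binary outcome for the chosen card only
    trials.append((t, tuple(offered), int(choice), int(reward)))
    # independent Gaussian random walk, clipped to illustrative bounds
    p = np.clip(p + rng.normal(0, 0.03, n_cards), 0.25, 0.75)
```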
First, our results demonstrated latent credit assignment to unchosen actions in the context of a consecutive choice-outcome trial design. Contrary to our intuition, we found that properties of the deliberation (i.e., duration and difficulty) did not moderate the extent of value updating for unchosen actions. Rather, reinforcement learning computational modeling suggested that, for every action, individuals consider the outcome history for choosing an option together with the outcome history of rejecting the alternative. This mechanism predicted the effect of value update for unchosen actions on both the group and individual levels. We discuss these findings in light of current theoretical considerations.
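The computational models themselves are specified later in the paper; as a schematic only, one simple way to express an inverse update of the unchosen option — the chosen card moves toward the observed outcome while the rejected card moves toward its inverse — is sketched below. Parameter names, learning rates, and the softmax choice rule are illustrative assumptions, not the authors' fitted model:

```python
import numpy as np

def softmax(q, beta):
    e = np.exp(beta * (q - q.max()))
    return e / e.sum()

def update(Q, chosen, unchosen, reward, alpha_c=0.3, alpha_u=0.15):
    """Chosen option moves toward the outcome; the unchosen option is
    latently updated toward the inverse of that outcome (1 - reward),
    capturing credit assignment for having rejected it."""
    Q[chosen]   += alpha_c * (reward - Q[chosen])
    Q[unchosen] += alpha_u * ((1.0 - reward) - Q[unchosen])
    return Q

# One simulated trial
Q = np.full(4, 0.5)
offered = np.array([0, 2])
probs = softmax(Q[offered], beta=3.0)
choice = np.random.default_rng(2).choice(offered, p=probs)
unchosen = offered[offered != choice][0]
Q = update(Q, choice, unchosen, reward=1.0)
```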
RESULTS
We studied the choice behavior of 178 participants who completed a multi-armed bandit reinforcement learning task online. In this task, participants were asked to choose cards to gain monetary rewards. In each trial, participants were randomly offered two of four possible cards by the computer. After choosing a card, participants could be rewarded or not according to a drifting probability. Half the task included winning blocks (rewarded versus unrewarded outcomes were +£1/0, respectively), and half included loss blocks (rewarded versus unrewarded outcomes were £0/−1, respectively). We were interested in examining whether participants assigned credit (i.e., value) to cards that were considered, but unchosen, compared to cards that were not considered during deliberation. We start by reporting and replicating model-independent sequential trial analyses, allowing us to demonstrate inverse credit assignment to unchosen actions. Next, we examine to what extent properties of the deliberation moderated the latent value update of unchosen actions. Last, we use computational reinforcement learning modeling to examine possible internal updating mechanisms for unchosen actions.

Fig. 1. (A) In each trial, two of the four cards were randomly offered by the computer for participants' selection. We examined trials where the unchosen card in trial n was reoffered at trial n + 1 with a card that was not offered on trial n. This allowed us to examine whether the outcome associated with the chosen card in trial n influenced the probability that the participant will select the previously unchosen card at trial n + 1. For example, as illustrated in this panel, we ask whether the reward delivered at trial n (as a result of choosing the dark card) influenced the probability of selecting the unchosen card (orange) when offered with a third card (blue). (B) Card selection led to a binary outcome determined by slowly drifting probabilities. We used randomly drifting reward probabilities to ensure continued learning. The reward probabilities of each card were independent (mean shared variance = 5.3%). (C) Probability of choosing a previously unchosen action as a function of the outcome in the previous trial. Results indicated that the probability of choosing a previously unchosen card was reduced after rewarded trials compared to unrewarded trials. This was true for both win blocks (where outcomes included winning/not winning a play pound coin) and loss blocks (where outcomes included not winning/losing a play pound coin). (D) The posterior distributions for the influence of previous outcome (top) and the interaction with condition (bottom) on choosing the previously unchosen card in a logistic regression (the blue dotted line indicates the null point, and the gray horizontal line indicates HDI95%). Overall, results indicate an inverted influence of the previous outcome on the chances of selecting an unchosen action, regardless of win/loss conditions.
Accuracy rates
As a first step, to ensure that participants were able to adequately perform the task, we examined overall accuracy rates. Accurate choices were defined as choices where the individual chose the card with the higher true latent expected value. We found an above-chance accuracy rate for both loss blocks [56.8% accuracy, 95% equal-tailed density interval (HDI95%) between 55.5 and 58.0] and win blocks (58.1% accuracy, HDI95% between 57.0 and 59.2). Accuracy rates improved with trial progression (see fig. S1 for learning curves) and increased for easier trials where the difference between the true latent expected values of the two cards was higher (see fig. S2). Therefore, overall, participants learned to act according to the true expected values of the cards and were able to choose the better card above chance.
Influence of reward on unchosen actions
To address the main aim of the study, we performed a consecutive trial analysis allowing us to examine whether participants assigned credit to an unchosen card based on the outcome of the chosen card. Specifically, we examined only trials where the unchosen card from trial n was reoffered on trial n + 1. We further filtered and took only trials where the previously chosen card was not reoffered (see Fig. 1A). We performed a hierarchical Bayesian logistic regression where the previous outcome for the chosen card in trial n (rewarded versus unrewarded) predicted the tendency to choose, in trial n + 1, a card that was unchosen in trial n. We found strong evidence for the influence of the previous outcome on participants' choices, such that participants were less likely to choose a previously unchosen card in trial n + 1 if the chosen card in trial n was rewarded (43%) versus unrewarded (49%; Fig. 1, B and C; posterior median = −0.25, HDI95% = −0.33 to −0.17; probability of direction = ~100%; see the Supplementary Materials for prior robustness checks). To confirm the robustness of this empirical finding, we performed three additional complementary analyses.

Sensitivity to loss versus win conditions. The effect of previous outcome was similar for both win (7.3%) and loss blocks (6.8%; see Fig. 1C). Specifically, we repeated the same regression analysis with the addition of block type (win versus loss) and block type × previous outcome as additional predictors. We found no evidence for an interaction effect with block type, such that participants were similarly likely to be affected by the previous outcome regardless of block type (interaction posterior median = −0.0005, HDI95% between −0.08 and 0.07; probability of direction = 50.6%).
Replication with emphasized instructions.We repeated the same analysis with a second existing dataset from our laboratory where participants completed a similar four-armed bandit task online (N = 49; see the Supplementary Materials for further information) (23).The instruction phase in this second dataset emphasized and quizzed participants to ensure that they understood that monetary outcomes reflect the value of the chosen card alone and that whether a certain card is more/less valuable in a given trial had no relation to whether other cards are more or less valuable.This instruction was objectively correct, since in our design the true expected values of the arms were independent.Despite that, we found the same effect suggesting that participants tended to choose the previously unchosen option less often in trial n + 1 if trial n was rewarded (45%) versus unrewarded (51%; posterior median = −0.19,HDI 95% between −0.32 and −0.06; probability of direction = 99.8%).
Extending the effect to two trials back.We repeated the same analysis, only with respect to three consecutive trials (influence of outcome in trial n, on probability to select the unchosen option at trial n + 2), and found a similar yet somewhat smaller effect showing that participants were less likely to pick the previously unchosen card in trial n + 2 if trial n was rewarded (44%) versus unrewarded (46%; posterior median = −0.10,HDI 95% between −0.17 and −0.02; probability of direction = 99%).
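The consecutive-trial selection and the hierarchical logistic regression used in the analyses above can be sketched as follows. This is a minimal illustration with hypothetical column names (subject, block, trial, offer1, offer2, choice, reward), not the exact analysis code; priors and additional predictors (e.g., block type) are omitted.

```r
library(dplyr)
library(brms)

# Build lagged variables and keep trials where the previously unchosen card is
# reoffered in trial n + 1 while the previously chosen card is not.
seq_df <- df %>%
  group_by(subject, block) %>%
  arrange(trial, .by_group = TRUE) %>%
  mutate(
    unchosen      = ifelse(choice == offer1, offer2, offer1),
    prev_unchosen = lag(unchosen),
    prev_chosen   = lag(choice),
    prev_reward   = lag(reward)
  ) %>%
  ungroup() %>%
  filter(offer1 == prev_unchosen | offer2 == prev_unchosen,
         offer1 != prev_chosen, offer2 != prev_chosen) %>%
  mutate(chose_prev_unchosen = as.integer(choice == prev_unchosen))

# Hierarchical Bayesian logistic regression: previous outcome predicts choosing
# the previously unchosen card, with random slopes per participant.
fit <- brm(chose_prev_unchosen ~ prev_reward + (prev_reward | subject),
           family = bernoulli(), data = seq_df)
summary(fit)
```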
Moderation of deliberation on value updating of unchosen actions
Previous studies suggested that the context of the deliberation binds in memory the two alternatives (17,24).Therefore, we aimed to extend our main finding (Fig. 1C) and examine whether the magnitude of latent value update for unchosen actions is moderated by the deliberation process.We operationalized the deliberation process by two indicators, RT and choice difficulty, both commonly used in decision-making literature.Specifically, studies have suggested that RTs encapsulate decision time (25,26).Accordingly, a prolonged deliberation process was shown to be indicated by slower RTs (27,28).Furthermore, it is well established that more difficult choices (where the accurate or best alternative is not easily distinct) lead to a prolonged deliberation process (18,(29)(30)(31)(32). Studies have also shown that longer decision times are associated with stronger memory associations (33).Following that motivation, we performed the following two analyses.
(i) Interaction with decision time.We hypothesized that a longer decision time will lead to increased memory associations between the two cards and will increase the inverse value updating of unchosen cards based on the outcome of the chosen card.Therefore, we examined whether RT in the previous trial moderated latent value update for unchosen actions.We performed a logistic regression with previous outcome, previous RT, and their paired interaction as predictors for choosing in trial n + 1 the card that was previously unchosen at trial n (see Fig. 1).As in the previous analysis, this was done only for trials where the chosen card was not reoffered.We found no evidence for a moderating effect of previous RT such that participants were similarly likely to be affected by the previous outcome on trial n in choosing a previously unchosen card in trial n + 1 regardless of their RT (Fig. 2B; interaction posterior median = 0.008, HDI 95% = −0.09 to 0.11; probability of direction = 55%).
(ii) Interaction with decision difficulty.We hypothesized that a more difficult deliberation process would lead to a stronger inverted memory association between the chosen and unchosen options (16,17).To estimate deliberation difficulty, we calculated the absolute difference in true reward expected values between the two offered cards.We reasoned that a smaller difference in true expected value between the two offered cards should reflect a more difficult deliberation process.Therefore, we examined whether decision difficulty, defined as the absolute difference between the expected value of the two offers in the previous trial, moderated value learning for unchosen actions.We performed a logistic regression with previous outcome, previous difficulty, and their paired interaction.We found no evidence for a moderating effect of previous difficulty (i.e., the absolute difference in expected values on the previous trial) on the tendency to assign value to unchosen actions.Specifically, participants were similarly likely to be affected by the previous outcome on trial n in choosing a previously unchosen card in trial n + 1 regardless of the true expected value difference (Fig. 2A; interaction posterior median = −0.03,HDI 95% = −0.21 to 0.15; probability of direction = 65%).We repeated the same analysis, only with difficulty of the current trial as additional predictor to ensure that the update of unchosen actions is robust and independent of the true running average of the four arms.We found the effect of the previous reward on choosing the previously unchosen option to be independent of the difference in expected values of the currently suggested options (interaction posterior median = 0.0003, HDI 95% = −0.3 to 0.3; probability of direction = 50%; see fig.S3).Overall, we found that the value assignment to unchosen actions was independent of the previous and current trial difficulty.Thus, we found no evidence in favor of a moderating role of deliberation duration or difficulty on the inverse value update of unchosen action.
Together, model-independent analyses showed clear evidence both in our main dataset and in a further replication in favor of an inverse and latent update of unchosen actions based on the outcome associated with the chosen card.We found no evidence that deliberation difficulty and duration (assumed to induce memory associations between chosen and unchosen cards) moderated the inverse value update of unchosen actions.Our findings, suggesting a lack of association between latent update of unchosen actions and deliberation properties, may only be applicable to the specific experimental setup used in this study.In particular, our observations pertain to sequential decision-making tasks where feedback is provided instantaneously.We now turn to computational modeling using Q-learning algorithms to further examine value update mechanisms that might underlie such an effect.
Computational modeling
To explore the updating process of the unchosen action, we fit several reinforcement learning models to participants' choice behavior.In all models, we updated the chosen action according to a well-established prediction-error mechanism where the difference between the expected and the observed outcome is used as a value teaching signal (1,34,35).However, the models differed in their theoretical assumptions regarding the inversion of information used to update the value of unchosen actions.
1) Double updating with two prediction errors: Here we assumed that the participant holds the belief that the arms are anticorrelated and generates internally an outcome for the unchosen action in an inverse direction to the external outcome.Therefore, in the current model, we included a separate prediction error for the unchosen action that is calculated as the difference between an inverted "hallucinatory" outcome and the expected value of the unchosen action [Eq.5; (36)].
2) Double updating with one prediction error: Here, we also assumed that participants might hold a belief regarding anticorrelations between the options.However, we relaxed the assumption that participants generate an internal hallucinatory outcome for the unchosen action.Instead, we assumed that the participant experiences only a single prediction error based on the difference between the external outcome and the expected value of the chosen action.Participants were further assumed to update the unchosen action based on an inverted portion of the prediction error of the chosen action (Eq.7).
3) Select-reject mechanism: Here, we relaxed the assumption that the participant performs any inversion at all (as suggested by the two previous models).Instead, we assumed that the individual holds and updates in mind two separate values for selecting and rejecting each card.In the select-reject model, both select and reject values are updated according to separate prediction-error signals, and choices are determined by a weighted integration of the value of selecting a card with the value of rejecting the alternative (Eq.12).
The three suggested models had three parameters each and were compared to a classical baseline model in which only the value of the chosen card, but not the value of the unchosen card, was updated. Before testing and comparing the models with empirical data, a few tests were performed to ensure that the model space is well defined and can provide meaningful results. First, we found excellent parameter recovery properties for all models (see Materials and Methods and Fig. 3). Second, we performed a model recovery analysis, testing our ability to identify the true data-generating model from observed data, in silico. We found excellent recoverability of all models (see Materials and Methods and Table 1). Last, we added a fourth baseline model for comparison purposes, in which the unchosen actions were not updated. We then simulated data from all models based on estimated parameters and found that the findings reported in Fig. 1C cannot be reproduced by the baseline model (see Fig. 4 and Materials and Methods). However, as expected, we found that all other models of interest were able to produce the inverse credit assignment signature. The models and parameters were highly recoverable, suggesting the adequacy of this computational design in investigating the mechanism underlying latent updating for unchosen actions. Overall, these measures ensure that our model space is well defined and can be recovered with confidence. We will now describe each model in detail and report our empirical model comparison results.
Baseline model. For each trial, we calculated a prediction-error teaching signal and updated action values (represented by Q values) using a temporal difference learning algorithm. The baseline model was a classical Rescorla-Wagner model, where only the chosen option was updated according to a reward prediction error (Eqs. 1 and 2), where α chosen is a learning rate (free parameter) and δ chosen indicates the reward prediction error for the chosen option. Q values were used to predict each choice according to a softmax policy (Eq. 3), where β is an inverse noise parameter (free parameter). Overall, the current model had two population-level free parameters (α chosen, β), along with their associated random effect parameters. To ensure that this model is unable to produce the observed logistic regression effect reported above, we repeated the same logistic regression analysis with simulated data. We found that this model was unable to produce the regression signature found in empirical data, which validated its use as a baseline model (see the Supplementary Materials and Fig. 4).

Model 1-Double updating with two prediction errors. Here, in addition to updating the chosen option, individuals were also assumed to experience a latent prediction error for the unchosen option (i.e., δ unchosen). We updated the unchosen option according to its own prediction error (Eq. 5), where α unchosen is a learning rate (free parameter) and δ unchosen represents the reward prediction error for the unchosen option. Note that δ unchosen is calculated according to the difference between the inverted observed reward and the predicted value of the unchosen option. Chosen options were updated as in the baseline model (Eqs. 1 and 2), with each action predicted according to a softmax policy (Eq. 3). Overall, this model had the same two parameters as the baseline model, with an additional learning rate (i.e., α unchosen) and its associated random effect scale parameter.

Model 2-Double updating with one prediction error. Here, we assume that the individual experienced one prediction error based on the outcome and the predicted value of the chosen option (i.e., Eq. 1). However, unlike the baseline model, the current model assumes that a portion of this prediction error is inverted and assigned to the unchosen option. Therefore, here, we calculate a prediction error using Eq. 1 and then update the chosen and unchosen options accordingly (Eq. 7), where α chosen and α unchosen are the learning rates for chosen and unchosen options, respectively (e.g., when α unchosen = 0, the model becomes exactly like the baseline model). Overall, this model had the same two parameters as the baseline model, with the addition of a learning rate parameter (i.e., α unchosen) and its associated random effect scale parameter.

Table 1. Model recovery results. Values represent Δelpd, referring to the difference between the elpd of the data-generating model (columns) and the elpd of the alternative (rows). elpd estimates were obtained using leave-one-block-out cross-validation for all models, including the data-generating models. Negative values reflect a worse fit of the alternative model compared to the data-generating model estimates. Values in brackets represent the SE of the elpd difference distribution. A Δelpd difference that is more than twice the SE should be seen as substantial (40,83).

Model 3-Select-reject value learning. Here, we assumed that the agent held in mind different values for when a card was selected or rejected. Therefore, the current model had a Q select value and a Q reject value for each possible action (a total of eight Q values for four possible cards in each block of the current task). An important aspect of this model is that it permits us to alleviate the assumption that participants engage in an additional process of mentally reversing the outcome before updating the value of the unselected option. The value of the chosen card was updated according to a reward prediction error on its select value, and the value of the unchosen card was updated according to a corresponding prediction error on its reject value. Note that, here, the unchosen card is not updated according to an inverse reward signal, but rather the same reward value as the chosen action. Furthermore, the chosen and unchosen options were updated according to the same learning rate. Choices were predicted using an integrated Q value (i.e., Q net) and a softmax decision policy (Eq. 12). The ω parameter indicated each individual's tendency to weigh the value of selecting a certain card with the value of rejecting the alternative. In the extreme case where ω = 1, the model converges to the baseline model, and when ω = 0, the agent will base choices purely on the outcome history of rejecting offered cards. Therefore, in the current model, we assume that the subject deliberates on an integrated option of taking a card while rejecting the other. This allowed us to relax the assumption that individuals internally inverted the reward that was assigned to the unchosen card, as was described in both models 1 and 2. Note that while this model holds some resemblance to model 1 (double updating with two prediction errors), it also has a distinct feature of segregating the outcome history for chosen and unchosen options. For example, consider a case where an individual deliberated cards A versus B and decided to take card A. Under model 1 (double updating with two prediction errors), the value of card A, and hence its prediction error, will be based on all the trials where it was either chosen or unchosen (to the extent that the learning rate permits; Eq. 5). However, in the current select-reject mechanism (model 3), the value and the associated prediction error of A (the chosen card) will be calculated solely based on the outcome history for when A was chosen, while the value of B and its associated prediction error will similarly be based on trials where B was unchosen. Overall, this model had the same two parameters as the baseline model, with the addition of a decision weight (i.e., ω) and its associated random effect scale parameter.
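To make the competing update rules concrete, here is a minimal sketch of single-trial updates for the four models. It is an illustrative reconstruction rather than the paper's exact equations: the form of the inverted outcome in model 1, the inverted prediction-error term in model 2 (Eq. 7), and the integration of select and reject values (Eq. 12) are assumptions, and all function and variable names are hypothetical.

```r
# Illustrative single-trial updates. r is the outcome (1 = rewarded, 0 = unrewarded);
# Qc and Qu are the current values of the chosen and unchosen cards.

# Baseline: only the chosen card is updated.
update_baseline <- function(Qc, Qu, r, a_ch) {
  d_ch <- r - Qc                         # reward prediction error for the chosen card
  c(chosen = Qc + a_ch * d_ch, unchosen = Qu)
}

# Model 1: an internally generated, inverted outcome (assumed here to be 1 - r)
# drives a second prediction error for the unchosen card.
update_two_pe <- function(Qc, Qu, r, a_ch, a_unch) {
  d_ch   <- r - Qc
  d_unch <- (1 - r) - Qu
  c(chosen = Qc + a_ch * d_ch, unchosen = Qu + a_unch * d_unch)
}

# Model 2: a single prediction error for the chosen card; a portion of it,
# with inverted sign, is assigned to the unchosen card.
update_one_pe <- function(Qc, Qu, r, a_ch, a_unch) {
  d_ch <- r - Qc
  c(chosen = Qc + a_ch * d_ch, unchosen = Qu - a_unch * d_ch)
}

# Model 3 (select-reject): separate select and reject values, both updated toward
# the same observed outcome with a shared learning rate.
update_select_reject <- function(Qsel_chosen, Qrej_unchosen, r, a) {
  c(Qsel_chosen   = Qsel_chosen + a * (r - Qsel_chosen),
    Qrej_unchosen = Qrej_unchosen + a * (r - Qrej_unchosen))
}

# One plausible reading of the choice rule: for offered cards A and B,
# Qnet_A = w * Qsel_A + (1 - w) * Qrej_B, passed through a softmax with inverse noise b.
p_choose_A <- function(Qnet_A, Qnet_B, b) {
  exp(b * Qnet_A) / (exp(b * Qnet_A) + exp(b * Qnet_B))
}
```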
Examining each model's ability to generate the main behavioral effects
Following recent guidelines for model selection in computational cognitive modeling (37,38), we wanted to examine whether our models can generate the main regression results found in the empirical data (Figs.1C and 2A).We therefore simulated data for each model based on empirical parameters (see Materials and Methods) and calculated and illustrated the effect of previous outcome on the selection of unchosen actions (see Fig. 4).We found that the findings reported in Figs.1C and 2A were not reproduced by the baseline model (see Materials and Methods and the Supplementary Materials).However, as expected, we found that all other models of interest were able to produce the inverse credit assignment signature (see Fig. 4).Given that we also found good model recoverability for these models, we continued to perform model comparison using leave-one-block-out Bayesian estimation.
Model comparison
All models were fitted to the data using hierarchical Bayesian modeling via "stan" Markov chain Monte Carlo (MCMC) sampling [ (39); see Materials and Methods].For purposes of model comparison, we performed a leave-one-block-out cross-validation analysis.Specifically, in each round of the cross-validation procedure, we omitted one block, calculated hierarchically posterior distributions for each individual across all parameters based on the remaining blocks, and used these posterior distributions to predict trial-bytrial actions for the left-out observations.This allowed us to gain expected log-predictive density (i.e., elpd) distributions for each observed empirical action across all models.We then followed current well-accepted guidelines for Bayesian model comparison and compared elpd for each model, allowing us to quantify the ability of each model to accurately predict data that were not used for training (40).We then calculated the difference elpd distribution between the models (40).Table 2 reports the mean and SD of the elpd difference distribution for each model compared with the winning select-reject model.According to current established guidelines, a mean difference twice larger than the SE should be considered as evidence in favor of the winning model (40,41).As can be seen in Table 2, this criterion was reached for the winning model, suggesting superiority in terms of choice predictions compared with the alternatives (see Table 2).To further demonstrate the association between the winning model and empirical data, we report here two additional analyses.
Replicating individual differences from simulated data that are based on empirical parameters.Next, we examined whether the winning select-reject value learning model also adequately captured individual differences in the influence of the chosen action's feedback on the unchosen action value.Specifically, we examined the association between the ω parameter in the select-reject value learning model (weighting the contribution of reject value learning to decision-making) and individual estimates for the model-independent effect.We found a positive correlation (Pearson r = 0.20, BF 10 = 7.54; probability of direction = 99.65;HDI 95% = 0.06 to 0.34) between the ω parameter and the individual's previous-outcome coefficient (from the above logistic regression), showing that the parameter estimations of the computational and model-independent analysis are in line with each other (see Fig. 5B).
Choice accuracy and tendency to update unchosen actions.Last, we wanted to test accuracy rates in light of our main finding.In the current paradigm, the value of each card drifted independently, and knowledge about the value of one card was not predictive of the other.This means that updating the value of unchosen actions based on the outcome of a different (chosen) card should only reduce the accuracy of value estimation and lead to lower choice accuracy.However, we wanted to explore using empirical data whether inverse updating of unchosen actions did not lead to some unexpected monetary benefits, thus encouraging individuals to use such a strategy.We, therefore, estimated the association between individuals' parameters in the winning model (model 3, select-reject) and individuals' choice accuracy (defined as 1 if the participant selected the arm with the higher true expected value and 0 otherwise).We found a substantial positive correlation between ω and mean choice accuracy (Pearson r = 0.31; BF 10 = 953.83;probability of direction = ~100%; see Fig. 5C).Therefore, if anything, updating unchosen actions was counterproductive and led to lower profits.
Overall, our computational model comparison results support the hypothesis of latent value updating of unchosen actions. We found clear evidence to suggest that the observed latent update of unchosen actions is less probable under a double-updating model (either with one or two prediction errors). The data seem more plausible under a select-reject model, where individuals are assumed to hold and update separate values in mind for rejecting and selecting an action. The primary rationale behind using the select-reject model is that it eliminates the need to assume that humans engage in a mental inversion of the received outcome to update the value of the unselected option. The winning model was able to reproduce the signature regression analysis examining the influence of previous outcome on the probability of selecting a previously unchosen action (see Fig. 4). The winning model was also able to explain individual differences in this effect (see Fig. 5, B and C).
DISCUSSION
Prior studies have demonstrated counterfactual value-based learning in human choice behavior (19-22, 42, 43).However, these studies were conducted almost exclusively in the context of full feedback such that outcomes of both chosen and unchosen actions were observable.In the present study, we demonstrated latent value updating of unchosen actions in a multi-armed reinforcement learning task where no counterfactual feedback was available.This sequential design allowed us to explore the reward predictionerror mechanisms underlying latent value updating of unchosen actions and examine the extent to which properties of the deliberation process moderate such latent updating.
In the current study, we found that individuals assign value to unchosen actions based on the inverted observable value assigned to the chosen action; reward delivery in response to a chosen action reduced the likelihood of selecting a previously unchosen action, and loss increased its selection probability.Individuals were able to estimate the value of all four arms based on their choices, but unexpectedly, they still exhibited a tendency to update unchosen arms based on the outcomes of chosen arms.Since there was no dependency between the true expected values of the arms, the participants were not required, and had no benefit, in performing this counterfactual update.To the best of our knowledge, the only previous study that demonstrated an inverse value update for unchosen actions was a study by Biderman and Shohamy, which was recently further replicated (17,24).However, in their study, the deliberation and outcome phases were delivered in separate blocks.We extend their finding by showing similar inverse updating for unchosen actions in a standard reinforcement learning multi-armed bandit task in which outcomes immediately followed deliberation and choice.This pattern of findings demonstrated across different choice-outcome temporal variations attests to the robustness of the effect.
Our design allowed us to examine the influence of the deliberation process on the value updating of unchosen actions.A main theoretical explanation for the finding that individuals update unchosen actions according to the outcome of chosen actions is shared memory associations (12)(13)(14)(15).Specifically, a prior study has argued that the deliberation process "ties" the chosen and unchosen actions in one's mind such that a value update of one action leads to an update of the other (17).The unique context of the deliberation process, whereby individuals need to tease apart the value of two options, is assumed to further lead to an inverted, rather than direct, value update.The underlying theoretical assumption for this notion is that when the association between chosen and unchosen actions is reactivated during the value update process for the chosen action, the deliberation goal of teasing apart options based on their value is also in play, thus leading to an inverted value update of the unchosen action (44,45).Our findings showed no evidence in favor of a moderation effect of deliberation duration and difficulty on latent value updating of unchosen actions.Specifically, we found that neither RTs nor the difference in true expected value between the two options moderated the magnitude of the update for the unchosen action.Thus, it seems that although previous studies suggested that memory associations created during deliberation may lead to the effect at hand, this does not seem to be the case in this study.However, we cannot rule out the possibility that the sequential nature of our task, as well as the immediacy of the feedback, limits these findings to this specific paradigm.It is possible that since feedback in the current paradigm is abundant, participants tended to create fewer memory associations during deliberation, which led to our inability to find the aforementioned interaction.Furthermore, our analyses were merely correlative.Future studies may manipulate the deliberation process to search for further support for the claim that it has no impact on the degree of value update for unchosen options, for instance, by manipulating the speed-accuracy instructions (46) or by comparing a condition with value-based choices to arbitrary picking (31,(47)(48)(49).Nevertheless, such manipulations should be carefully conducted as it is nontrivial to manipulate the deliberation process without interfering with its natural dynamics (50).
If properties of the deliberation process do not moderate inverse latent value updating of unchosen actions, what alternative mechanism may explain the latent and inverse value update?Computational modeling allowed us to explore three potential theoretical mechanisms, each including a different assumption regarding the inversion of information in mind during the update of unchosen actions; first, we assumed that the cognitive system might produce an internal outcome for the unchosen option (i.e., double updating with two prediction errors model).Here, the system is thought to generate a "hallucinated" outcome for the unchosen action that is perfectly anticorrelated with the chosen action's outcome (36).This internal outcome is then assumed to lead to a second prediction error according to the difference between the inverted outcome and the unchosen option's expected value.This idea is in line with previous neuroimaging studies indicating that separate neural populations might generate two independent prediction errors for a chosen and unchosen action (at least when full feedback is included) (51,52).Thus, we tested a double-updating model with two prediction errors assuming that participants produce an internal outcome for the unchosen option based on the evident outcome of the chosen option.
Second, we assumed that the cognitive system might process only a single outcome, without producing an internal outcome for the unchosen arm (i.e., double updating with one prediction error model).This suggests that only one "surprise" value (or prediction error) might be formed according to the external outcome and the anticipated value of the chosen action.This single prediction error is then assumed to be inverted internally to accommodate a belief individuals might hold regarding a negative correlation between choice values (53).This model allowed us to relax the assumption that the system generates internally a hypothetical outcome.Instead, under the current model, the system predicts a contrasting update for the unchosen option based on the surprise signal for the chosen action.
Last, we assumed that the cognitive system might not perform any inversion of information at all.Instead, it could maintain separate value representations for trials where an option was either selected or rejected (i.e., select-reject model).Like the two previous models, a select-reject cognitive mechanism can also lead to the result observed in Fig. 1C, as the value of rejecting an option increases when a rejection is rewarded (see Fig. 4).Our modeling results indicated evidence in favor of the latter mechanism (i.e., select-reject value learning model), whereby two independent prediction errors are calculated by the system for the arm that was selected and the one that was rejected.
According to the select-reject value learning model suggested by our results, an option's value is determined as a combination of the previous outcomes in which it was chosen, and the alternative was rejected.An important theoretical benefit of the select-reject value learning model is that it allows us to relax the assumption that the system performed an inversion of values as suggested in the two double-updating models (17).It also does not require us to assume that the memory contrast between chosen and unchosen actions generated during deliberation is the mechanism driving the double and inverted update.Rather, we suggest that the two values, one for taking the chosen action and the other for rejecting the unchosen action, might linger in one's mind until an outcome is observed, thus leading to a direct update of both values.We would like to assert that this theoretical suggestion does not imply that a negative memory association between chosen and unchosen actions does not take place (8,11,13,15), but rather we present a plausible alternative explanation that does not rely on memory associations that may arise during the deliberation process.However, one of the challenges of the current cognitive mechanism is that it also implies additional value representations to be stored in memory.
To test this explanation, future studies can use dedicated sophisticated paradigms where only the value of rejecting alternatives can drive choices.
A relevant value update mechanism is choice-confirmation bias where individuals are suggested to show a larger value update for both chosen and unchosen actions following choice-confirming outcomes (43,(54)(55)(56)(57). Previous studies demonstrated that participants update the value of unchosen options when provided with counterfactual outcome feedback (also known as "full feedback" paradigms).Furthermore, several studies have indicated that chosen options receive a larger value update for positive versus negative outcomes.Palminteri and Lebreton (56) integrated these two findings and suggested a choice-confirmation bias mechanism.Here, participants are suggested to show a larger value update for both chosen options followed by positive obtained outcomes and unchosen options followed by negative foregone outcomes (in full feedback paradigms where the participants observe the outcome of both chosen and unchosen options).This mechanism effectively leads to an update of values in a confirmatory manner.In the current study, participants were provided with feedback for the chosen action alone without providing them with the foregone outcome.Extension of the computational modeling to include a choice-confirmatory bias showed, as expected, higher learning rates for confirmatory versus disconfirmatory outcomes, yet did not change our overall conclusions (see the Supplementary Materials).Future studies should more directly explore the role of a choiceconfirmation bias mechanism in the context of latent updating of unchosen actions using dedicated designs.
Another relevant theory for our findings is "divisive normalization," suggesting a neural mechanism by which only relative, but not absolute, values of options are maintained (51,(58)(59)(60).Following this line of thought, value update for the chosen action should affect (i.e., "normalize") the values of other alternatives that were not selected.However, the divisive normalization theory makes no specific distinction between two types of unchosen optionsoptions that were offered but not selected and options that were unavailable in a specific trial.Such a distinction seems necessary to explain our findings.According to the divisive normalization theory, both unchosen-offered and unavailable options should be subjected to a "normalizing" value change based on the value update of the chosen option.Here, we showed the influence of monetary outcome on the preference of offered-unchosen options when contrasted with the previously unavailable options, which is not accounted for by divisive normalization.Thus, divisive normalization in its current form could not account for our findings.Recently, Fouragnan and colleagues (61) conducted ground-breaking research in macaques showing that the value of a temporarily unavailable option was distinctly maintained by hippocampal activity, while activity in the medial orbitofrontal cortex/ventromedial prefrontal cortex was important for comparing the values of available options.Speculatively, this may suggest that the value of the offered but unchosen alternative has increased activity compared to unoffered alternatives during value updating.While our work shows a distinct and latent value update of unchosen-offered options, further work is needed to determine whether unavailable options were also updated to a lesser extent or completely shielded from value update.
Our results should be discussed in relation to the "choiceinduced preference change" effect.This phenomenon has been studied extensively in the field of psychology, and it suggests that the act of making a choice itself can influence our preferences and attitudes toward options.Specifically, studies have shown that regardless of the outcome, when deliberating upon two options, the mere act of choice leads to a "spreading of alternatives."Therefore, under choice-induced preference theory, the act of making a choice itself leads to increased value for the chosen option, while the value of an unchosen option decreases (62)(63)(64)(65)(66)(67).Choice-induced preference effects are independent of observed outcomes and can occur in the absence of an outcome.However, the current study was specifically designed to estimate outcome-dependent credit assignment to unchosen actions.Hence, we demonstrated both reduced and increased value of actions that were unchosen based on the value of the delivered outcome.Moreover, while choice-induced change is tightly related to the deliberation process, the current effect is not moderated by deliberation properties.Thus, our results cannot be explained by choice-induced preference alone or seen as providing evidence in favor or against such a theoretical notion.Note that we did find deliberation effects that were independent of reward (see Fig. 2).Specifically, we observed that more difficult choices and longer decisions were associated with a reduced inclination to choose the previously unselected card (see the Supplementary Materials), which can be seen as evidence for choice-induced preference change of unchosen actions.Further studies should directly explore whether latent credit assignment to unchosen actions and choice-induced preference bear any mechanistic associations.
Our findings are also consistent with a limited luck theory, which suggests that humans may perceive luck as a limited resource (53,68).This theory proposes that individuals may assume that obtaining a reward from one option implies that the other option is less lucky and therefore less likely to generate a reward.This tendency may reflect a basic property of ecological environments, where resources are limited and often correlated (69).This has been suggested to lead to "zero-sum thinking" according to which a gain for one option entails a loss for the other (70).In our computerized experiment, rewards were abundant and uncorrelated by design, so our conclusions may be limited to this particular design (71).To address this limitation, we emphasized the lack of correlation between action values in our replication study.However, it could be insufficient to override preexisting assumptions that humans have about the way in which rewards are distributed.Previous studies have shown individuals to perceive a negative correlation between the value of two offered options even when they could physically see that the pool of associated outcomes for the two options was separated and independent (53).Further research should examine whether the updating of the unchosen action's value is modulated by the true value dependencies of the environment and by individual experiences.
Another important and relevant mechanism is reference point dependency (72).According to this notion, individuals update action values not only based on an observed outcome of the selected action but also based on a mental value reference point that is updated across all actions.Specifically, Palminteri and Lebreton have shown that an objective outcome is mentally weighted by a mental reference, thus leading to a phenomenon where an action with a higher true expected value could be chosen less compared with an action with lower true expected values due to differences in the underlying reference point (i.e., also termed state value in the reinforcement learning literature).However, we argue that the effect reported in the current study could not be predicted by a reference point dependence mechanism.To illustrate this, consider that the individual was offered cards A and B and that A was chosen and followed by some observed outcome.Now, let us further consider that in the next trial, B (the unchosen arm) is reoffered with a different card (e.g., C or D).We found in the current study strong and replicated evidence suggesting that if A was rewarded, B is now less likely to be selected in a preceding "B vs. C" trial.In the same sense, if A was unrewarded, B is now more likely to be selected in the next B versus C trial.The reference point dependence mechanism cannot explain such an effect since the observed outcome following the selection of A would be only assigned to update the reference value of A versus B. Without making further A versus B choices, an update of a reference point has no effect on the actions' subjective value and should not influence the choices in the next B versus C trial.However, our evidence clearly shows such an association.This should not be taken as evidence against or in favor of the notion that individuals might weigh the observed outcome against a reference point when making value-based decisions.It is simply that the phenomenon of inverse credit assignment for unchosen actions is not predicted by such mechanisms, which makes it less relevant to the current study.
Our findings are relevant in the context of an anticipated regret minimization theory (73), which proposes that individuals make decisions not only to maximize their rewards but also to minimize their future regret (5).Moreover, when the outcome of foregone alternatives is not observable, counterfactual thinking theory (6) suggests that people tend to engage in "what if …" thinking patterns, envisioning the potential outcomes of their unselected actions.Further studies are needed to explore whether the select-reject mechanism proposed in the current study and specifically the speculated maintenance of reject values correspond to regret and counterfactual "what if …" processes in humans.
Our findings are also in line with previous neural studies that used full feedback (i.e., the outcomes of both the chosen and unchosen options were shown to the subjects) and suggested the existence of two distinct prediction-error signals, one for the actual reward and one for the foregone outcome (19)(20)(21)(22).Specifically, these studies recorded dopamine striatal levels (known to encode reward prediction error) with high temporal resolution in humans performing a gambling task (19).Dopamine fluctuations in the striatum were found to encode a combination of reward prediction errors, including counterfactual prediction errors defined by the unobtained outcome of unchosen options.A recent study demonstrated that when individuals were able to reject a certain option but were subsequently shown the outcome of the foregone alternative, striatal activity represented a prediction error with a reversed sign (74).Further neuroimaging studies are needed to confirm the existence of a counterfactual prediction error when the outcome of the rejected action remains hidden from the observer.Another unique perspective on the described value updating phenomenon is provided by the reinforcement learning theory of dopamine function, specifically the opponent actor learning model (75).This influential computational model is inspired by findings about the distinct role of the dopaminergic D1 and D2 neural pathways in approach and avoidance learning, respectively (76)(77)(78).Further studies should explore whether our findings correspond with the same pathways.Namely, our model argues that each choice is a weighted sum of selecting an action and rejecting an alternative.Further studies should explore whether selecting/rejecting an action involves similar neural mechanisms to the ones described in approach-avoid literature (79).
To conclude, we extended previous findings suggesting latent and inversed value updating for unchosen actions by demonstrating it in a sequential decision-making task.Furthermore, we found evidence contradicting the hypothesis that this type of updating is driven by the act of deliberation, which was suggested to negatively bind the chosen and unchosen actions in memory.Alternatively, we suggest a select-reject value learning model to best explain this phenomenon, which is aligned with emerging neurological descriptions of the dual reward prediction-error system.
MATERIALS AND METHODS

Participants
One hundred seventy-eight prolific workers (age mean, 26.1; range, 18 to 51; 101 males, 76 females, and 1 other) completed an online experiment in return for monetary compensation.All participants reported normal or corrected vision and no current or past psychiatric or neurological diagnosis.The study protocol was approved by the Research Ethics Council of Tel-Aviv University, and all participants signed informed consent before participating in the study.
Reinforcement learning task
Participants completed an online multi-armed bandit reinforcement learning task where they were asked to choose cards to gain monetary rewards.The task included four cards, and in each trial, the computer randomly selected and offered two for participants to choose from.Each card led to a reward according to an expected value that drifted across the trials [generated using a random walk with a noise of N(0,0.03)].The task included two conditions (win versus loss block) manipulated between four interleaved blocks (whether the first block was win or loss was counterbalanced between participants).In a "win" block, the only possible outcomes were winning 1 or 0 play dollars, and in the "loss" condition, the only possible outcomes were losing 0 or 1 play dollar (henceforward addressed as rewarded and unrewarded, respectively, for convenience).These two conditions allowed us to test whether inverse value updating for unchosen actions is more pronounced under win versus loss blocks, thus achieving evidence related to former findings, which found outcome-context-specific effects (53).At the start of the session, participants were presented with task instructions, completed a short practice, and completed a multichoice quiz, which they had to complete with 100% accuracy.Participants were told that they needed to do their best to earn as much money as possible.To make sure that the reported effects are not due to some misunderstanding leading participants to wrongly assume that the cards' expected values are dependent, we report a replication study where both instructions and the quiz emphasized that finding a reward under one card had no meaning to how good/bad other cards were (see the Supplementary Materials).Participants completed four blocks, with 50 trials each, and at the end of the experiment were paid a fixed amount (£2.5) plus a bonus (of £1 or £1.5) based on their performance.Further information and trial sequence is described in Fig. 1 and the Supplementary Materials.
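A rough sketch of how such drifting reward probabilities can be generated is shown below. The starting values, bounds, and clipping rule are assumptions (only the random-walk noise of N(0, 0.03) is specified above).

```r
# Slowly drifting reward probabilities for 4 cards over 50 trials,
# generated as a Gaussian random walk with SD = 0.03 and clipped to [0, 1].
set.seed(1)
n_trials <- 50
n_cards  <- 4

p <- matrix(NA_real_, nrow = n_trials, ncol = n_cards)
p[1, ] <- runif(n_cards, 0.25, 0.75)                 # assumed starting range
for (t in 2:n_trials) {
  p[t, ] <- pmin(pmax(p[t - 1, ] + rnorm(n_cards, 0, 0.03), 0), 1)
}

# On a given trial, two cards are offered at random and the chosen card pays out
# with its current probability.
offered <- sample(n_cards, 2)
chosen  <- offered[1]                                # e.g., the participant picks the first offer
reward  <- rbinom(1, 1, p[1, chosen])
```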
Data treatment
The first trial of each block, as well as trials with implausibly quick RTs (<200 ms) or exceptionally slow RTs (>4000 ms), were omitted (1.79% of all trials). Participants with more than 10% excluded trials (21 participants) or a higher than 5% no-response rate (4 participants), in total 25 participants (12.3% of subjects; age mean, 22.8; range, 18 to 36; 22 males, 3 females), were excluded altogether. To conduct our main behavioral analysis, we selected a subset of trials in which the previously unchosen card was reoffered and the previously chosen card was not. This resulted in an average of 63.6 trials per participant (SD = 6.7), with the number of trials ranging from 46 to 81 across subjects.
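A minimal sketch of these exclusion rules is given below, assuming a trial-level data frame with hypothetical columns (subject, block, trial, rt); treating the first trial of each block as excluded is an assumption based on the description above.

```r
library(dplyr)

# Flag trial-level exclusions: first trial of each block, implausibly fast (<200 ms),
# or exceptionally slow (>4000 ms) responses.
flagged <- df %>%
  group_by(subject, block) %>%
  mutate(exclude = trial == min(trial) | rt < 200 | rt > 4000) %>%
  ungroup()

# Participant-level exclusions: more than 10% excluded trials or more than 5% missing responses.
keep_ids <- flagged %>%
  group_by(subject) %>%
  summarise(prop_excluded = mean(exclude, na.rm = TRUE),
            prop_missing  = mean(is.na(rt)),
            .groups = "drop") %>%
  filter(prop_excluded <= 0.10, prop_missing <= 0.05) %>%
  pull(subject)

clean <- flagged %>% filter(subject %in% keep_ids, !exclude)
```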
Bayesian parameter estimation
Bayesian logistic regression and reinforcement learning computational modeling analyses were performed using the "brms," "rstan," and "loo" packages in R (39, 40, 80). All models included population-level (fixed effects) and participant-level (random effects) parameters for all estimated models and were sampled with weakly informative priors. All chains were visually examined using trace plots, pairs plots, and R-hat estimates and were found to show good convergence. We report the median, HDI 95%, and probability of direction for parameters' posterior distributions (for logistic regression, estimates are on the log-odds scale; for prior robustness checks, see the Supplementary Materials) (81). For computational models, parameter estimation was conducted using the rstan package in R, and model comparisons were made using the leave-one-block-out approach and the loo package in R (40, 82). For each model, we held out one of the four blocks and then calculated the elpd for each trial in the held-out block. This was repeated for all blocks, resulting in a matrix of 4000 MCMC samples × 34,096 observations. We then used the elpd R function to obtain a pointwise estimation for each observation and the loo_compare function (from the loo package) to perform pairwise model comparisons between each model and the model with the largest elpd. An elpd difference of two times the SE was considered substantial (40).
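Given pointwise log-likelihoods for the held-out blocks, the comparison step can be sketched as below, following the description above. The log-likelihood matrices are hypothetical placeholders (rows are posterior draws, columns are held-out trials concatenated across the four folds).

```r
library(loo)

# Hypothetical log-likelihood matrices for the held-out observations of each model.
# ll_baseline, ll_two_pe, ll_one_pe, and ll_select_reject are assumed to exist.
elpd_baseline      <- elpd(ll_baseline)
elpd_two_pe        <- elpd(ll_two_pe)
elpd_one_pe        <- elpd(ll_one_pe)
elpd_select_reject <- elpd(ll_select_reject)

# Pairwise comparison against the model with the largest elpd; a difference larger
# than roughly twice its SE is treated as substantial.
loo_compare(elpd_baseline, elpd_two_pe, elpd_one_pe, elpd_select_reject)
```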
Simulation of data based on each models' empirical population parameters
We simulated 175 agents for each of the four computational models to examine the model's ability to produce the main behavioral effects observed in the empirical data (see Fig. 4).To gain an empirical estimate of population parameters, we first fitted a hierarchical Bayesian reinforcement learning model to the empirically observed behavior.This allowed us to gain population-level parameter posterior estimates for each parameter of each model.We sampled parameters for each agent from the empirical posterior distribution and simulated artificial choice data using the same task design as was used to collect empirical human data (e.g., number of arms, true expected values, and number of blocks and trials).This allowed us to obtain artificial data for each model based on empirical population parameters, which we then used to estimate the existence of the regression signature we found in empirical data (see Fig. 4 and the Supplementary Materials).
Parameter recovery
To establish the suitability of each of our three models, we tested our ability to recover simulated parameters (38). For that purpose, we first simulated data from each of the three generative models. Specifically, 175 agents were simulated hierarchically, separately for each of the three different models (i.e., double updating with two prediction errors, double updating with one prediction error, and select-reject models). For our simulation, we set population-level parameter values as follows: (i) model 1-double updating with two prediction errors: α ch = 0.3, α unch = 0.1, β = 4; (ii) model 2-double updating with one prediction error: α ch = 0.3, α unch = 0.1, β = 4; and (iii) model 3-select-reject: α = 0.3, ω = 0.7, β = 4. Individual-level parameters were then sampled from a normal distribution with a mean defined by the population-level parameter (fixed effect) and an SD of 1 for α and ω, or 1.5 for β. The α and ω parameters were scaled to be between 0 and 1 using a logit transformation. To accommodate extreme cases where α ch < α unch, we truncated Q values in the rare event that they exceeded the range of [0,1]. For every agent, four blocks containing 50 trials each were simulated using the same task used to collect empirical data. We used weakly informative priors [population location parameters ~N(0,2), population-scale parameters ~Cauchy(0,2), and individual-level parameters ~N(0,1)] and sampled from the posterior distribution using the rstan package in R (39). Specifically, we used 20 chains with 1000 warm-up and 50 sampling iterations each. We examined chain convergence using trace plots and R-hat estimates (83). We then examined the match between true and recovered parameters, both at the population and individual levels. Figure 3 illustrates the results for each of the three parameters for every model. Specifically, we found excellent recoverability for population-level (fixed effects) parameters, with all true parameters being well within the HDI 95% of the updated posteriors. We further found good parameter recovery at the individual level, as can be seen by high correlations between the individual true and recovered parameters (all Pearson estimates are above 0.75; all P values < 0.01; see Fig. 3). Thus, we were able to recover with high accuracy and precision all simulated parameter values, both at the population and at the individual levels (see Fig. 3).
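A rough sketch of how the simulated individual-level parameters can be drawn is given below (select-reject model as the example). Whether the stated population means lie on the latent or the transformed scale is an assumption here, and the variable names are hypothetical.

```r
set.seed(2)
n_agents <- 175

# Individual-level draws around the population values, with the learning rate and
# decision weight mapped to [0, 1] through a logit/inverse-logit transformation.
alpha <- plogis(rnorm(n_agents, mean = qlogis(0.3), sd = 1))    # learning rate
omega <- plogis(rnorm(n_agents, mean = qlogis(0.7), sd = 1))    # decision weight
beta  <- rnorm(n_agents, mean = 4, sd = 1.5)                    # inverse noise

# After simulating choices with these parameters and refitting the model, recovery is
# assessed by correlating true and recovered individual parameters, e.g.:
# cor(alpha, recovered_alpha)
```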
Fig. 1 .
Fig. 1.Inverse value updating of unchosen actions.(A) Illustration of a trial sequence.Participants completed a four-armed bandit task.In each trial, two cards (of four)were randomly offered by the computer for participants' selection.We examined trials where the unchosen card in trial n was reoffered at trial n + 1 with a card that was not offered on trial n.This allowed us to examine whether the outcome associated with the chosen card in trial n influenced the probability that the participant will select the previously unchosen card at trial n + 1.For example, as illustrated in this panel, we ask whether the reward delivered at trial n (as a result of choosing the dark card) influenced the probability of selecting the unchosen card (orange) when offered with a third card (blue).(B) Card selection led to a binary outcome determined by slowly drifting probabilities.We used randomly drifting reward probabilities to ensure continued learning.The reward probabilities of each card were independent (mean shared variance = 5.3%).(C) Probability of choosing a previously unchosen action as a function of outcome in the previous trial.Results indicated that the probability of choosing a previously unchosen card was reduced after rewarded trials compared to unrewarded trials.This was true for both win blocks (where outcomes included winning/not winning a play pound coin) and loss blocks (where outcomes included not winning/losing a play pound coin).(D) The posterior distributions for the influence of previous outcome (top) and the interaction with condition (bottom) on choosing the previously unchosen card in a logistic regression (the blue dotted line indicates the null point, and the gray horizontal line indicates HDI 95% ).Overall results indicate an inverted influence of the previous outcome on the chances of selecting an unchosen action, regardless of win/loss conditions.
Fig. 2 .
Fig. 2. Moderation of deliberation duration and difficulty on inverse value updating of unchosen actions.(A) Hierarchical Bayesian logistic regression showed no moderating effect for the absolute difference in expected values between the two offered cards on the tendency to assign value to an unchosen action.(B) Higher RTs were assumed to be indicative of increased deliberation but had no moderation effect on the tendency to assign value for unchosen actions.(C) Posterior distribution showing no evidence supporting the moderation of value updating for unchosen actions by deliberation difficulty (blue line indicating the null point; probability of direction = 65%; gray line indicating HDI 95% ).(D) Posterior distribution depicting a lack of evidence for the interaction between RT and previous outcome on the tendency to choose a previously unselected card (blue line indicating the null point; probability of direction = 55%; gray line indicates HDI 95% ).
Fig. 3 .
Fig. 3. Parameter recovery.The top rows in each panel present population parameter recovery, including the posterior parameter distribution (pink) and the blue dashed line indicating the value of the true latent population parameter.The bottom rows refer to individual parameter recovery, showing a strong correlation between simulated individual parameters and recovered ones.(A) The three parameters of the "double updating with two prediction errors" model, (B) the "double updating with one prediction error" model, and (C) the "select-reject" model.Overall, we found good parameter recovery for all parameters and models.
Fig. 4 .
Fig. 4. Simulated effects for computational models.(A)The main regression signatures found in the empirical data.For each model (B to E), we simulated data from 175 agents using empirical parameters (sampled from the population marginal posterior distributions).We then used the simulated data to examine the ability of each model to reproduce the main regression signatures we found in the empirical data.(Left and middle columns) The effect of previous outcome on the probability of choosing a previously unchosen action and the corresponding posterior distribution (estimates are presented on the log-odds scale).(Right column) The moderation of choice difficulty in the previous trial (indicated by absolute difference between the expected values of the two offers) on the effect of previous outcome on the probability of choosing a previously unchosen action.Overall, we found that the baseline model was unable to reproduce the effect of latent updating for unchosen actions.All other models were able to produce this effect in the same direction as the empirical data, with the select-reject model showing the closest effect to the empirical one.PE, prediction error.
Fig. 5 .
Fig. 5. Results of the select-reject value learning model.(A) Population-level posterior distribution of the ω parameter in a hierarchical model.For ω = 1, individualswill consider the value of a card based on the reward history when that card was selected.For ω = 0, individuals will consider the value of a card only based on the reward history when the alternative was rejected.The posterior distribution suggests that participants weigh in their decision both the history of when a card was chosen and when the alternative was rejected, with greater emphasis on the former compared with the latter.The posterior high-density interval (gray horizontal line) is clearly below one, suggesting that individuals considered the value of actions not only based on trials where a card was chosen but also to a lesser degree based on trials where the alternative was rejected (i.e., 0.5 < ω < 1).(B) Association between median posterior ω parameter estimates for each individual and the model-independent effect estimated for each individual using empirical data (i.e., β previous outcome ).The positive association demonstrates the model's ability to capture individual differences.(C) Association between individuals' ω parameter estimates and their mean choice accuracy.The correlation shows that higher ω values correlated with better performance accuracy in the current bandit task.
Table 2. Model comparison results. elpd refers to expected log probability density estimates calculated using leave-one-block-out cross-validation (larger values indicate better fit; see Materials and Methods). elpd SE and elpd difference SE are given in brackets. An elpd difference larger than four and at least twice the SE can be considered as a heuristic to decide which is the winning model (40).
|
v3-fos-license
|
2021-01-06T06:18:53.762Z
|
2020-12-31T00:00:00.000
|
230660712
|
{
"extfieldsofstudy": [
"Materials Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2079-4991/11/1/76/pdf",
"pdf_hash": "17afa3f3446393486cd2649e0aa27ae8aa334e62",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42399",
"s2fieldsofstudy": [
"Physics",
"Materials Science"
],
"sha1": "8ba5bacaab1d12f14b146bf8788acae7312bf543",
"year": 2020
}
|
pes2o/s2orc
|
Titanium Nitride Nanodonuts Synthesized from Natural Ilmenite Ore as a Novel and Efficient Thermoplasmonic Material
Nanostructures of titanium nitride (TiN) have recently been considered as a new class of plasmonic materials that have been utilized in many solar energy applications. This work presents the synthesis of a novel nanostructure of TiN that has a nanodonut shape from natural ilmenite ore using a low-cost and bulk method. The TiN nanodonuts exhibit strong and spectrally broad localized surface plasmon resonance absorption in the visible region centered at 560 nm, which is well suited for thermoplasmonic applications as a nanoscale heat source. The heat generation is investigated by water evaporation experiments under simulated solar light, demonstrating excellent solar light harvesting performance of the nanodonut structure.
Introduction
The emerging field of thermoplasmonics uses metal nanoparticles (NPs) and metal metamaterial structures as nanoscale heat sources when excited at their localized plasmon resonance wavelength through incident light absorption [1,2]. This has been employed for various applications, such as cancer therapy, photothermal imaging, photothermal and hot-electron enhanced chemistry, and applications based on solar light harvesting [3]. Solar light is a very important source of environmentally clean and sustainable energy. Thermoplasmonic systems are particularly interesting for solar light harvesting applications, such as thermophotovoltaics and solar water evaporation (SWE). Since ancient times, SWE has been a fundamental technology for potable water production [4,5]. This technology has gained even more attention nowadays due to its great potential for addressing global challenges, such as clean water shortage (e.g., for people in remote areas during the flooding season, or fishermen on an unexpectedly long trip at sea), desalination, and wastewater treatment [6][7][8][9]. Generally, in SWE, sunlight is absorbed by a photothermal material (i.e., absorber) and converted into heat to vaporize water [10]. Due to their broad absorption range, carbon nanomaterials, such as amorphous carbon, graphene, and carbon nanotubes, are high-efficiency solar light absorbers. Although their low emissivity is a limiting factor for achieving high-efficiency photothermal conversion, various carbon-based materials and structures have demonstrated good SWE performance [9,[11][12][13][14].
Pioneering research utilizing thermoplasmonics for SWE employed solar harvesting with metal NPs dispersed in a liquid [15][16][17]. Typically, noble metal NPs, such as Au, Ag, Pt, and Pd, have been employed because of their widespread use in plasmonics and strong light absorption at the localized surface plasmon resonance [10,15,17]. However, their high cost and narrow absorption range are hindrances for practical SWE deployment. Recently, titanium nitride (TiN) has been demonstrated as a highly stable plasmonic material that is much cheaper than noble metals [18][19][20][21][22][23][24][25]. TiN NPs have been reported to be promising for solar harvesting applications, in which efficient nanoscale heat generators with a wide spectral absorption range are highly desirable [21,26,27]. Employing these advantages, TiN has been demonstrated as an excellent photothermal material for SWE [21][22][23][28][29][30]. Furthermore, since evaporation occurs at the liquid-air interface, and heat is generated in the bulk liquid, the volumetric heating method usually achieves low energy conversion efficiency due to the heat loss [5,28]. Therefore, recent SWE studies have employed floating structures, in which the photothermal material is immobilized on a substrate that floats in water. Using this approach, significant improvements of the SWE efficiency have been achieved [9,11,14,[30][31][32].
In this work, we present a low-cost method for fabricating a novel nanostructure of TiN, i.e., nanodonut. The TiN nanodonuts exhibit strong and spectrally broad localized surface plasmon resonance absorption in the visible region that provides excellent photothermal conversion performance. We demonstrate the effectiveness of the TiN nanodonuts as broad-band thermoplasmonic heat generators for SWE under simulated solar light using the floating substrate approach by depositing the TiN nanodonuts on a polymer membrane.
Synthesis of TiO 2 Nanoparticles from Ilmenite Ore
TiO 2 NPs were synthesized by a three-step process described as follows: Step 1: Ilmenite ore was firstly crushed and ground into fine powder with particle sizes in the range of 50-75 µm. Then, 10 g of the powder was transferred into a 250 mL plastic beaker containing 70 mL of HF 20% solution. The suspension was continuously stirred for 5 h at room temperature. The obtained slurry suspension (i.e., filtrate) was separated from the deposited solid residual.
Step 2: 30 mL KCl 4 M solution was slowly added to the filtrate, resulting in a white K 2 TiF 6 precipitate. In the next step, the precipitate was separated and dissolved in water by heating up the suspension to 80 °C until a saturated solution was achieved, which was then filtered and rapidly cooled down to room temperature to form again K 2 TiF 6 precipitate. This step was used to eliminate the impurities and purify the K 2 TiF 6 precipitate. The precipitate was dried in air at 105 °C for 2 h.
Step 3: 5 g of K 2 TiF 6 precipitate was dissolved in 500 mL of distilled water by heating up to 80 °C. Then, NH 3 solution (4 M, prepared from ammonium hydroxide 28% solution) was slowly added until pH = 9. This hydrolysis reaction produced Ti(OH) 4 , which was then annealed at 550 °C for 3 h to obtain TiO 2 .
Synthesis of TiN by Nitridation of TiO 2 in NH 3
Nitridation of TiO 2 to obtain TiN has been reported by several research groups [33][34][35][36][37]. In our approach, for each experiment, 1 g of TiO 2 NPs was loaded into a ceramic boat and placed at the center of a quartz-tube furnace (PTF 12/50/610, Lenton, UK). One end of the tube was connected to the gas inlet (N 2 , NH 3 ). The other end was connected to a mechanical vacuum pump. Initially, the quartz tube was evacuated to reach a vacuum of 10 −2 mbar and then pre-heated to 250 °C. The tube was purged several times by N 2 (99.99%) to remove contaminants. Thereafter, the temperature in the furnace was increased to either 700 or 900 °C, both at a ramping rate of 3 °C min −1 . After the temperature was stabilized, NH 3 gas was introduced into the furnace at a flow rate of 1000 sccm for 1 h. Finally, the furnace was cooled down to 100 °C in NH 3 ambient, and further to room temperature in N 2 before unloading the sample.
Material Characterizations
The morphology of the materials was studied by Field-Emission Scanning Electron Microscopy (FE-SEM) and High-Resolution Transmission Electron Microscopy (HR-TEM) using Hitachi S4800 (Ibaraki, Japan) and JEOL ARM-200F (Tokyo, Japan) systems, respectively. The crystalline structure of the materials was investigated by X-Ray Diffraction (XRD) using a Bruker diffractometer (D8 Advance Eco, Bruker, Billerica, MA, USA) equipped with a Cu Kα X-ray radiation source. The optical absorption spectra were acquired by using a JASCO V-750 UV-VIS spectrophotometer (Easton, MD, USA). Chemical compositions of the materials and the binding energy of the elements were determined by X-Ray Photoelectron Spectroscopy (XPS) using a XR4 Thermo Scientific Spectrometer (Waltham, MA, USA) equipped with an Mg-Kα X-ray radiation source.
Solar Water Evaporation Experiments
For each experiment, 20 mg of the powder (TiO 2 , TiON, TiN or nanocarbon) was dispersed in ethanol and sonicated for 10 min. Using the drop-coating method, the powder was deposited on a polymer membrane (Novatexx 2471, Freudenberg, 5 cm in diameter). The membrane was immersed in water contained in a 100 mL glass beaker and was kept afloat at a distance of~5 mm below the water surface, which was the equilibrium position of the membrane when it floated. It is worth noting that due to the non-uniform mass distribution, the membrane might be slightly tilted. To address this issue, a thin fabric string was used to keep the entire membrane in the horizontal position. The evaporation was investigated by monitoring the weight change of the system (glass beaker, water and the membrane) under simulated solar light generated by a Xenon arc lamp (60 W, Guangzhou Lightech Auto Lighting Co., Ltd. Guangdong, China) with an illuminance of 550 W m −2 , which is equivalent to an illuminance of 0.55 sun of natural solar light. The temperatures of the environment, the surface of the membranes and the liquid were measured using a BETEX 1230 Infrared Thermometer (Bega Special Tools, Vaassen, the Netherlands).
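For orientation, the arithmetic behind the evaporation figures reported later can be sketched in a few lines. The mass-loss rate and beaker diameter below are the values quoted in the Results, and the 1000 W m−2 reference for one sun is an assumption of this sketch rather than a number stated in the paper.

```python
import math

# Sketch of the conversions used when reporting the SWE results.
ILLUMINANCE_W_M2 = 550.0          # Xenon lamp illuminance (this section)
ONE_SUN_W_M2 = 1000.0             # assumed 1-sun reference value
BEAKER_DIAMETER_M = 0.05          # ~5 cm glass beaker
MASS_LOSS_G_PER_MIN = 0.045       # weight loss measured for the TiN membrane (Results)

sun_equivalent = ILLUMINANCE_W_M2 / ONE_SUN_W_M2             # -> 0.55 sun
beaker_area_m2 = math.pi * (BEAKER_DIAMETER_M / 2.0) ** 2     # evaporation area
rate_kg_per_h = MASS_LOSS_G_PER_MIN * 60.0 / 1000.0           # g/min -> kg/h
areal_rate = rate_kg_per_h / beaker_area_m2                   # -> ~1.38 kg h^-1 m^-2

print(f"{sun_equivalent:.2f} sun, {areal_rate:.2f} kg h^-1 m^-2")
```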
Results and Discussion
The hydrofluoric acid leaching of ilmenite ore produces TiO 2 NPs with sizes in the range of 70-160 nm, as shown in the SEM micrograph in Figure 1a. The presence of TiO 2 material is confirmed by XRD and XPS analyses shown in Figures 1d and 1e, respectively. The XRD pattern from the obtained powder (Figure 1d, bottom pattern) is consistent with that of the polycrystalline TiO 2 containing both anatase and rutile phases [38]. The XPS spectrum of the Ti 2p core-level (Figure 1e, bottom spectrum) shows two peaks at binding energies of 464.0 and 458.2 eV. These peaks are the characteristic doublet state of Ti 2p (i.e., Ti 2p 1/2 and Ti 2p 3/2 , respectively) in TiO 2 [39]. Following annealing in NH 3 at 700 °C for 1 h, a slight coalescence of the NPs is observed (Figure 1b). The annealing strongly affects the crystalline structure and chemical composition, as shown in the spectra in Figure 1d-f. In the XRD pattern (Figure 1d), the R(110) and A(200) peaks observed for TiO 2 vanish and a new peak at 43.3° appears. This peak represents the (200) plane of TiN cubic structure [40], and is further confirmed by the HR-TEM image shown in Figure 2a. The co-existence of both TiO 2 and TiN results in the N-Ti-O bonds, causing the broadening of the Ti 2p peaks to the lower binding energy side, as shown in Figure 1e (middle spectrum) [39]. The presence of those bonds is also demonstrated by the broad and asymmetric N 1s peak shown in Figure 1f (middle spectrum) [39]. The XRD and XPS analyses indicate that the nitridation of the TiO 2 at 700 °C was incomplete, resulting in TiO 2 -TiN composite (hereafter denoted as TiON). After nitriding at 900 °C in NH 3 for 1 h, the NPs exhibit distinct changes in morphology: the NPs transform into an entirely different structure with the shapes of nanodonuts having outer diameters in the range of 80-120 nm, and inner diameters ranging from 30 to 60 nm (Figure 1c and Figure S1). In the XRD pattern shown in Figure 1d, the diffraction peaks of TiO 2 entirely disappear, and only TiN peaks are observed [27]. This is further supported by the results obtained by HR-TEM shown in Figure 2b. Furthermore, the two peaks at 461.0 and 455.3 eV in the Ti 2p XPS spectrum (Figure 1e) and the peak at 396.5 eV in the N 1s spectrum (Figure 1f) are consistent with the binding energies of Ti-N bonds in stoichiometric TiN [39]. Therefore, we conclude that by annealing in NH 3 at 900 °C, the TiO 2 was completely nitridized and transformed into TiN.
The nitridation of TiO 2 in NH 3 ambient was previously explained by Gou et al. [42]. According to the study, at 900 °C, the nitridation takes place via the formation of TiN 1−x O x and releases H 2 O vapor and N 2 gas. With increasing the reaction time, the oxygen atoms of TiN 1−x O x are gradually substituted by the nitrogen atoms. Eventually, TiO 2 is nitridized to TiN [42]. Importantly, the authors observed the formation of mesopores with diameters in the range of 20-40 nm after the nitridation. This is consistent with formation of the cavities, which results in the nanodonuts; this can be attributed to the release of H 2 O vapor and N 2 gas. In addition, it has been reported that the incorporation of nitrogen atoms during the nitridation process can cause an expansion and contraction of the particles [43,44], which can be another factor that promotes the formation of the nanodonut structure. Nevertheless, this assumption requires further studies for clarification.
The UV-VIS diffuse reflectance spectra of the materials are shown in Figure 3a. The TiO 2 NPs have an absorption edge at 410 nm, which corresponds to a bandgap of 3.0 eV (using the Tauc method). The absorption of the TiON NPs exhibits a significant red shift that results in a bandgap of 2.1 eV. The TiN nanodonuts manifest a broad plasmon resonance spectrum in the visible region with a peak centered at 560 nm. This is in contrast to resonance peaks commonly observed for TiN, which are in the far-red and near infrared ranges (see, for instance, Traver et al. [40]). These examples demonstrate a nonmonotonic relationship between the particle size and the plasmon resonance of TiN NPs. We speculate that the peak resonance at 560 nm, in this case, may arise from the structure of the nanodonut NPs, which requires further exploration. Importantly, the broad plasmon resonance spectrum of the TiN nanodonuts corresponds well with the solar spectral range where sunlight provides the highest flux (Figure S2, Supporting Information). This is highly desirable for the solar light harvesting applications. Figure 3b shows the temperatures measured at the surfaces of the membranes (blank, TiO 2 , TiON and TiN, respectively) under continuous-wave (cw) illumination of simulated solar light generated by a Xenon arc lamp with illuminance of 550 W m −2 . The measurements were performed in air at a relative humidity of 72% and an ambient temperature of 31 °C. The results demonstrate that, after 9 min of cw illumination, the temperature of the blank membrane increases from 31 to 45 °C and stabilizes thereafter. Higher temperatures are acquired for the TiO 2 and TiON membranes (i.e., 48 and 53 °C, respectively), which can be explained by the improved light absorption (Figure 3a). For the TiN membrane, the temperature reaches 60 °C, indicating its higher photothermal conversion efficiency.
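As a quick cross-check of the TiO 2 bandgap quoted earlier in this paragraph (the authors' Tauc analysis is more involved), the photon energy corresponding to the absorption edge can be computed directly; the snippet below only illustrates this conversion.

```python
# Photon energy at the absorption edge; only a first-order cross-check
# of the quoted bandgap, not the full Tauc analysis used in the paper.
HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def edge_to_ev(wavelength_nm: float) -> float:
    """Photon energy (eV) corresponding to a wavelength (nm)."""
    return HC_EV_NM / wavelength_nm

print(f"TiO2 absorption edge 410 nm -> {edge_to_ev(410):.2f} eV")  # ~3.0 eV
```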
The use of TiN nanodonuts as nanoscale heat generators was tested by studying their SWE performance under cw illumination of simulated solar light with illuminance of 550 W m −2 . For this purpose, the membranes were immersed in water and kept at a position of about 5 mm below the water surface. Water evaporation was investigated by monitoring the weight change under continuous cw simulated solar illumination. The results are shown in Figure 3c, demonstrating a linear decrease in weight after 10 min of cw illumination. From these plots, the evaporation rates are calculated, which are presented in Table S1, Supporting Information. For the TiN membrane, an evaporation rate of 0.045 g min −1 is achieved. Taking into account the diameter of the glass beaker (~5 cm) gives an evaporation rate of 1.38 kg h −1 m −2 . This rate is comparable to evaporation rates obtained for various other materials, which are typically in the range of 1.0-1.9 kg h −1 m −2 , despite our lower illuminance (Table 1). This suggests the high light harvesting efficiency of the TiN nanodonuts. In addition, the TiN nanodonuts outperform carbon and graphene NPs under similar experimental conditions (Table S1 and Figure S3, Supporting Information). Furthermore, it is worth mentioning that the TiN membrane was used for a considerable number of experiments (i.e., above 30) in various experimental conditions (e.g., under simulated solar light, under natural solar light, in fresh water and in salt water with a concentration of 35 g L −1 ) with total illumination time above 30 h. The data reported in Figure 3c were acquired after the membrane had been used for more than 25 h. No considerable change in the evaporation rate (as well as in the formation of air bubbles presented in the next part) was observed. This indicates an excellent stability of the TiN nanodonuts.

Table 1. Evaporation rates reported for other photothermal materials (material; support; illuminance, sun; evaporation rate, kg h −1 m −2 ; reference):
RGO-sodium alginate-CNT aerogel; self-floating; 1.0; 1.622 [13]
2D GO film; cellulose-wrapped polystyrene foam; 1.0; 1.45 [46]
Carbon black coated PMMA nanofiber on PAN nanofiber; self-floating; 1.0; 1.3 [47]
Bi-layered rGO film; polystyrene foam; 1.0; 1.31 [48]
Carbon nanotubes; porous silica; 1.0; 1.32 [49]
Flame-treated wood; self-floating; 1.0; 1.05 [50]
Carbonized mushrooms; polystyrene foam; 1.0; 1.475 [51]

Importantly, we observed that within 30 s of illumination, bubbles were formed at the TiN membrane surface (Figure 4). Under continuous cw illumination, the bubbles expanded their volume and eventually detached from the membrane surface and moved to the water-air interface, where the air contained in the bubbles was released (Videos S1 and S2, Supporting Information). Only sporadic bubbles were observed for the TiON membrane and no bubble was observed for the blank and the TiO 2 membranes (Figure S4, Supporting Information). This can be explained by the higher temperature of the TiN membrane (i.e., 60 °C), as shown in Figure 3b. The bubble formation due to the thermoplasmonic effect has been described in detail by Baffou et al. [2,52]. Two important conclusions emerge from their analysis: (i) the bubbles contain air, and (ii) the NPs generate a high localized temperature in the range of 200-220 °C, which is required to initiate bubble generation [2,3,52]. From their second conclusion, the bubble formation observed in our work suggests that the local temperature obtained for the TiN nanodonuts under cw illumination could be significantly higher than the measured value at the surface of the TiN membrane (i.e., 60 °C).
This seeming discrepancy can be attributed to the fact that the infrared temperature probe has a spot size of about 2 mm and thus provides a spatially averaged value, while the bubble formation occurs locally. We note that bubble formation caused by the thermoplasmonic effect has been observed for Au NPs by many research groups [2,15,17,53]. However, this phenomenon has not been reported for TiN, although high local heat has been suggested for various TiN nanostructures under simulated solar light illumination [19,21,23,[27][28][29].
Conclusions
In summary, we have demonstrated a low-cost and feasible approach for the fabrication of TiN nanodonuts that exhibit strong and broad plasmon resonance absorption in the visible region centered at 560 nm. The SWE performance was studied using a floating structure prepared by drop-coating the TiN nanodonuts on a polymer membrane. Using simulated solar light with an illuminance of 550 W m −2 , our experiments reveal two important observations. First, the TiN nanodonuts provide an evaporation rate of 1.38 kg h −1 m −2 . This value is comparable to previously reported rates obtained for higher illuminance, proving that the TiN nanodonuts are highly efficient light harvesting materials. Second, the formation of the bubbles at the membrane surface is observed, providing firm evidence of high local heat generated by the TiN nanodonuts, which has not been previously reported.
Supplementary Materials: The following are available online at https://www.mdpi.com/2079-4991/11/1/76/s1, Figure S1: Representative TEM images of TiN nanodonuts. The particle and cavity sizes were measured using Gatan Micrograph Suite ® software, Figure S2: UV-Vis absorption spectrum of TiN nanodonuts and solar emission spectrum, Figure S3: SWE performance of the synthesized photothermal materials (TiO 2 , TiON and TiN) in comparison with the graphene nanoplatelets and carbon nanopowders, Figure S4: Photographs of (a) blank polymer membrane, (b) TiO 2 membrane, (c) TiON membrane and (d) TiN membrane taken after 600 s of exposure to simulated solar light generated by the Xenon arc lamp with an illuminance of 550 W m -2 , Table S1: Evaporation rate of water without the membranes, the blank membrane and the membranes deposited with photothermal nanomaterials (
|
v3-fos-license
|
2016-01-15T18:20:01.362Z
|
2010-07-21T00:00:00.000
|
14366550
|
{
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "http://thescipub.com/pdf/10.3844/ajeassp.2011.448.460",
"pdf_hash": "3262cfa7a3144dfd6aaf2ffc715704baced52ef6",
"pdf_src": "IEEE",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42400",
"s2fieldsofstudy": [
"Engineering"
],
"sha1": "6efb514447df3b8559fbcfcb1d4f1326cd1628fa",
"year": 2010
}
|
pes2o/s2orc
|
Single step optimal block matched motion estimation with motion vectors having arbitrary pixel precisions
This paper proposes a non-linear block matched motion model with motion vectors having arbitrary pixel precisions. The optimal motion vector which minimizes the mean square error is solved analytically in a single step. Our proposed algorithm can be regarded as a generalization of conventional half pixel search algorithms and quarter pixel search algorithms because our proposed algorithm could achieve motion vectors with arbitrary pixel precisions. Also, the computational effort of our proposed algorithm is lower than that of conventional quarter pixel search algorithms because our proposed algorithm could achieve motion vectors in a single step.
INTRODUCTION
Motion estimations play an important role in motion tracking applications, such as in a respiratory motion tracking application [1] and in a facial motion tracking application [2]. The most common motion estimation algorithm is the block matched motion estimation algorithm [3]. The current frame is usually partitioned into a number of macro blocks with fixed or variable sizes. Each macro block in the current frame is compared with a number of macro blocks in the reference frame translated within a search window. Block matching errors are calculated based on a predefined cost function. The macro block in the reference frame that gives the minimum block matching error is considered as the best approximation of the macro block in the current frame. Each macro block in the current frame is represented by the best macro block in the reference frame, the motion vector (the vector representing the translation of the macro block in the reference frame) and the residue (the difference between the macro block in the current frame and the best translated macro block in the reference frame).
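A minimal sketch of the block matching framework just described — comparing a macro block of the current frame against candidate blocks in the reference frame within a search window and keeping the one with the lowest cost — is given below; the mean square error cost, the 8 x 8 block and the ±8 pixel window are illustrative choices, not parameters prescribed by this paper.

```python
import numpy as np

def full_search_block_match(cur_block, ref_frame, top, left, search_range=8):
    """Full integer pixel search: return the motion vector (dy, dx) minimizing the MSE
    between cur_block (taken from the current frame at (top, left)) and candidate blocks
    in the reference frame translated within +/- search_range pixels."""
    bh, bw = cur_block.shape
    h, w = ref_frame.shape
    best_mse, best_mv = np.inf, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bh > h or x + bw > w:
                continue  # candidate block falls outside the reference frame
            cand = ref_frame[y:y + bh, x:x + bw].astype(np.float64)
            mse = np.mean((cur_block.astype(np.float64) - cand) ** 2)
            if mse < best_mse:
                best_mse, best_mv = mse, (dy, dx)
    return best_mv, best_mse

# Toy usage: the current frame is the reference frame shifted by (2, -3) pixels.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64))
cur = np.roll(ref, shift=(2, -3), axis=(0, 1))
mv, err = full_search_block_match(cur[16:24, 16:24], ref, 16, 16)
print(mv, err)   # expected motion vector (-2, 3) with near-zero error
```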
The most common block matched motion estimation algorithm is the full integer pixel search algorithm. The full integer pixel search algorithm is a centre-based algorithm in which all integer pixel locations in the search window are examined. However, the motion vectors are not necessarily represented by integer pixel precisions, and a large portion of macro blocks in the current frame are best approximated by the macro blocks in the reference frame translated within a plus or minus one pixel range around integer pixel locations. Hence, block matching errors could be further reduced if motion vectors are represented by non-integer pixel precisions. Conventional non-integer pixel search algorithms start searching pixels at half pixel locations. Half pixels are interpolated by nearby pixels at integer pixel locations. Block matching errors at some or all half pixel locations are evaluated. The half pixel location with the minimum block matching error is chosen. Similarly, quarter pixels are interpolated by nearby pixels at half pixel and integer pixel locations. The quarter pixel location with the minimum block matching error is chosen. Finer pixel locations could be evaluated successively. Since the block matching errors at finer pixel locations are evaluated via interpolations from the coarser pixel locations, if motion vectors with very fine pixel precisions are required, then many pixel locations are required to be evaluated. Hence, computational efforts of these algorithms are very heavy and these algorithms are very inefficient. Also, existing pixel search algorithms could only achieve motion vectors with rational pixel precisions. If the true motion vector is with an irrational pixel precision, then an infinite number of pixel locations have to be evaluated.
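The conventional half pixel refinement described above can be sketched as follows: the eight half pixel candidates around the best integer motion vector are generated by bilinear interpolation of the reference frame, and the candidate with the lowest mean square error is kept. The bilinear scheme and the single refinement step are illustrative simplifications, not the specific interpolation used by any particular standard.

```python
import numpy as np

def bilinear_block(ref, top, left, bh, bw):
    """Sample a bh x bw block from ref at a (possibly fractional) top-left position
    using bilinear interpolation, as conventional half/quarter pixel search does."""
    ys = top + np.arange(bh)[:, None]
    xs = left + np.arange(bw)[None, :]
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    wy, wx = ys - y0, xs - x0
    y1 = np.clip(y0 + 1, 0, ref.shape[0] - 1)
    x1 = np.clip(x0 + 1, 0, ref.shape[1] - 1)
    y0 = np.clip(y0, 0, ref.shape[0] - 1)
    x0 = np.clip(x0, 0, ref.shape[1] - 1)
    return ((1 - wy) * (1 - wx) * ref[y0, x0] + (1 - wy) * wx * ref[y0, x1]
            + wy * (1 - wx) * ref[y1, x0] + wy * wx * ref[y1, x1])

def half_pixel_refine(cur_block, ref, top, left, int_mv):
    """Evaluate the half pixel neighbours around the best integer motion vector
    and keep the candidate with the lowest mean square error."""
    bh, bw = cur_block.shape
    best_mv, best_mse = (float(int_mv[0]), float(int_mv[1])), np.inf
    for dy in (-0.5, 0.0, 0.5):
        for dx in (-0.5, 0.0, 0.5):
            cand = bilinear_block(ref.astype(float), top + int_mv[0] + dy,
                                  left + int_mv[1] + dx, bh, bw)
            mse = np.mean((cur_block.astype(float) - cand) ** 2)
            if mse < best_mse:
                best_mse, best_mv = mse, (int_mv[0] + dy, int_mv[1] + dx)
    return best_mv, best_mse
```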
Interpolations are implemented via some predefined functions, such as a real-valued quadratic function with two variables [4], a paraboloid function [5] and a straight line [6]. As the block matching error is a highly non-linear and non-convex function of the motion vector, it is very difficult to solve the motion vector that globally minimizes the block matching error. Hence, many pixel locations are still required to be evaluated and the pixel location with the lowest block matching error is chosen. Similar to conventional quarter pixel search algorithms, computational efforts of these algorithms are still very heavy and these algorithms are still very inefficient. Also, if the true motion vector is with an irrational pixel precision, then an infinite number of pixel locations still have to be evaluated.
In this paper, we propose a non-linear block matched motion model with motion vectors having arbitrary pixel precisions. The optimal motion vector which minimizes the mean square error is solved analytically in a single step. Our proposed algorithm has the following salient features. 1) The block matching error is evaluated in a single step which globally minimizes the mean square error. As the calculation of the mean square error at a fine pixel location is not derived from the coarser pixel locations, the computational effort of our proposed algorithm is much lower than that of conventional quarter pixel search algorithms. 2) Our proposed algorithm could achieve the true motion vector even though the true motion vector is with an irrational pixel precision. Computer numerical simulations show that the mean square errors of various video sequences based on our proposed algorithm are lower than those based on conventional half pixel search algorithms and quarter pixel search algorithms. The rest of this paper is organized as follows. Section II describes our proposed non-linear block matched motion model. Section III derives analytically the optimal motion vector which minimizes the mean square error. Computer numerical simulations are presented in Section IV. Finally, a conclusion is drawn in Section V.
If Z_{k,l,p} = 0, then we do not consider that the global minimum is on the boundary q_k = 1, ∀ p_k ∈ [0,1]. Similarly, ∀ k ∈ Z+, denote the sets of motion vectors corresponding to the stationary points of MSE_k^{UR}(p_k, q_k), MSE_k^{LL}(p_k, q_k) and MSE_k^{LR}(p_k, q_k) (including the point (0,0)) as F_k^{UR}, F_k^{LL} and F_k^{LR}, respectively. The algorithm for finding the globally optimal motion vector can be summarized as follows:

Algorithm

Step 1: Implement an existing full integer pixel search algorithm so that (p_{0,k}, q_{0,k}) is obtained ∀ k ∈ Z+.

Step 2: ∀ k ∈ Z+, evaluate F_k^{UL}, F_k^{UR}, F_k^{LL} and F_k^{LR}.

Step 3: ∀ k ∈ Z+, evaluate the minimizers of MSE_k^{UL}(p_k, q_k) over F_k^{UL}, of MSE_k^{UR}(p_k, q_k) over F_k^{UR}, of MSE_k^{LL}(p_k, q_k) over F_k^{LL} and of MSE_k^{LR}(p_k, q_k) over F_k^{LR}, and take the candidate (p_k*, q_k*) with the smallest mean square error as the globally optimal motion vector of B_k.

Since the global minimum of the mean square error is not necessarily located at rational pixel locations, while the full integer pixel search, full half pixel search and full quarter pixel search algorithms only evaluate at rational pixel locations, the mean square errors based on these conventional methods are very large and these conventional methods are very ineffective. On the other hand, our proposed method guarantees to find the motion vector that globally minimizes the mean square error no matter whether the motion vector is located at a rational or an irrational pixel location. Hence, our proposed method is more effective than conventional methods. Besides, as integer pixel locations, half pixel locations and quarter pixel locations are particular locations represented by our proposed model, the mean square error based on our proposed method is guaranteed to be lower than that based on these conventional methods.
The computational effort of our proposed algorithm can be analyzed as follows. As the orders of the polynomials in (1), (2) and (3) are 5, 4 and 2, respectively, 0 ≤ M_k^{UL} ≤ 5, ∀ k ∈ Z+. Hence, ∀ k ∈ Z+, if M_k^{UL} ≥ 1, then the maximum number of evaluation points of our proposed method is less than or equal to 21. ∀ k ∈ Z+, if M_k^{UL} = 0, as the maximum number of points in F_k^{UL} is 5, the maximum number of evaluation points of our proposed method is less than or equal to 17. For full quarter pixel search algorithms, there are 25 evaluation points. Hence, the total number of evaluation points of our proposed method is lower than that of full quarter pixel search algorithms. As conventional block matched motion estimation algorithms evaluate block matching errors from coarse pixel locations to fine pixel locations, the computational efforts grow exponentially as the pixel precisions get finer and finer. From this point of view, the conventional methods are very inefficient. On the other hand, our proposed method does not require searching from the coarse pixel locations to the fine pixel locations. Our proposed method is more efficient than the conventional methods, particularly when the required pixel precision is higher than or equal to the quarter pixel precision.
IV. SIMULATION RESULTS
In order to have complete investigations, video sequences with fast motion, medium motion and slow motion are studied. The video sequences Foreman, Coastguard and Container [7] are, respectively, the most common fast motion, medium motion and slow motion video sequences. Hence, motion estimations are performed on these video sequences. Except for the first frame of these video sequences, the mean square errors of all the frames of these video sequences are evaluated. Each current frame takes its immediate predecessor as the reference frame. The sizes of the macro blocks are chosen as 8 x 8 and 16 x 16 and the sizes of the search windows are chosen as 32 and 40, which are the most common block sizes and window sizes used in international standards. The comparisons are made with the full integer pixel search algorithm, the full half pixel search algorithm and the full quarter pixel search algorithm.
The mean square error performances of our proposed method, the full integer pixel search algorithm, the full half pixel search algorithm and the full quarter pixel search algorithm with the size of the macro blocks 8 x 8 and the size of the search windows 32 applied to the video sequences Coastguard, Container and Foreman are shown in Figure 1. It can be seen from Figure 1 that the improvements on the average mean square errors of the full half pixel search algorithm, the full quarter pixel search algorithm and our proposed method over the full integer search algorithm for the video sequence Coastguard are 1.4894 x 10-4, 2.2242 x 10-4 and 2.7294 x 10-4, respectively, which correspond to 17.8531%, 28.8039% and 37.5835%, respectively; those for the video sequence Container are 1.4406 x 10-6, 3.6476 x 10-6 and 2.0374 x 10-5, respectively, which correspond to 1.0115%, 4.4170% and 32.3070%, respectively; and those for the video sequence Foreman are 1.5788 x 10-4, 2.2863 x 10-4 and 2.5897 x 10-4, respectively, which correspond to 24.7674%, 39.1977% and 46.4394%, respectively. Similar results are obtained for different sizes of macro blocks and different sizes of the search windows. Figure 2 shows the improvements on the average mean square errors of various algorithms with the size of the macro blocks 16 x 16 and the size of the search windows 40 applied to the same set of video sequences. The improvements on the average mean square errors of the full half pixel search algorithm, the full quarter pixel search algorithm and our proposed method over the full integer search algorithm for the video sequence Coastguard are 1.7838 x 10-4, 2.5650 x 10-4 and 3.0888 x 10-4, respectively, which correspond to 18.4666%, 27.6579% and 34.6995%, respectively; those for the video sequence Container are 1.8757 x 10-6, 2.5444 x 10-6 and 1.8031 x 10-5, respectively, which correspond to 0.7710%, 1.5106% and 26.9046%, respectively; and those for the video sequence Foreman are 2.1073 x 10-4, 2.9528 x 10-4 and 3.3051 x 10-4, respectively, which correspond to 21.6021%, 34.2148% and 40.4725%, respectively. From the above computer numerical simulations, it can be concluded that the mean square error performances of our proposed method are always better than the full integer pixel search algorithm, the full half pixel search algorithm and the full quarter pixel search algorithm for all of the above three video sequences. In particular, for slow motion video sequences, such as the video sequence Container, our proposed
|
v3-fos-license
|
2019-04-22T13:08:52.854Z
|
2017-01-01T00:00:00.000
|
5036441
|
{
"extfieldsofstudy": [
"Mathematics"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://doi.org/10.4172/2155-6180.1000383",
"pdf_hash": "da427fdded6fed5147f61fc10701028fd80910c2",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42401",
"s2fieldsofstudy": [
"Psychology"
],
"sha1": "deacadb1cd6fee615c1eb4ce3338d8311c714c3e",
"year": 2017
}
|
pes2o/s2orc
|
Latent Growth Curve Modeling of Ordinal Scales: A Comparison of Three Strategies
Ordinal scales can be used in latent growth curve modeling in three ways: mean scores, weighted mean scores, and factors measured by the scale items. Sum and mean scores are commonly used in growth curve modeling in spite of certain discouragement. It was unclear how much bias these practices could produce in terms of the change rates and patterns. This study compared three methods with Monte Carlo simulations under different numbers of response categories of the items, in terms of five key parameters of growth curve modeling. The hypothetical population models were derived from real empirical data to generate datasets of binary, trichotomous, five- and seven-point scales with a sample size of 300. Latent growth curve modeling of means, weighted means, and factors measured by the ordinal scales were respectively fit to these datasets. Results indicated that modeling the factors that are measured with ordinal scales yields the fewest biases. Biases of modeling the means and weighted means of the scales were under one decimal point in the change rates, whereas biases in the variances and covariance of the intercept and slope factors were large. In conclusion, it is inadvisable to use means or weighted means of ordinal scales for latent growth curve modeling. It produces the best results modeling the factors that are measured with the ordinal scales. Citation: Yang C, Olsen JA, Coyne S, Yu J (2017) Latent Growth Curve Modeling of Ordinal Scales: A Comparison of Three Strategies. J Biom Biostat 8: 383. doi: 10.4172/2155-6180.1000383
Introduction
Ordinal scales that consist of ordered sets of categorical response options are widely used in research, but they are not true measures of psychological traits or states, which are supposed to be normally distributed random variables. Researchers have recommended that ordinal scales not be used directly as true measures of latent psychological traits or states, alternatively referred to as latent variables, latent constructs, or factors. "In strictest propriety the ordinary statistics involving means and standard deviation ought not to be used with these scales, for these statistics imply a knowledge of something more than the rank order of the data" [1]. Hayes [2] commented that "the problem of measurement, and especially attaining interval levels scales, is an extremely serious one for social and behavioral sciences. It is unfortunate that in their search for quantitative methods researchers sometimes overlook the question of level of measurement…" Treating ordinal scales as continuous data in statistical modeling would produce biased estimates [3][4][5]. It is common to use the sum or mean scores of scale items for latent growth curve modeling that particularly involves the mean and variance of a change. It is not clear to what extent such practice could bias the change and variance estimates in latent growth curve modeling, as compared to an appropriate procedure.
Ordinal indicators reflect latent variables best through probability models [6]. The original items can be specified to measure latent variables through various measurement models in growth curve modeling. In contrast, the means of sets of ordinal items cannot be directly equated to the latent variables. Figure 1 below illustrates an appropriate way to estimate change in repeatedly measured factors with ordinal indicators [7]. In this "curve of factors" multiple-equation growth curve model, three ordinal items (labeled as Y) can be linked to first-order factors at each of four time-points via probit or logistic factor loadings. Details of the equations are provided by Muthén and Shedden [8].
Some parameters are critical in the latent growth curve modeling. The estimated initial level and change over time are captured by two second-order latent variables, namely, the intercept and slope. The variances of the intercept and slope factors indicate the individual differences in their initial levels and change rates. The covariance of the two factors indicates the extent to which the initial level is associated with the change rate. The fixed loadings for the slope factor serve to scale the time variable, and are alternatively referred to as time scores. A logarithmic curve pattern can be estimated by specifying the slope factor effects on the repeatedly measured factors to be 0, 0.69, 1.10, and 1.39, which are respectively the natural logs of 1, 2, 3, and 4 (a linear pattern). Different patterns can be estimated by changing the time scores. A model with the best pattern can be selected through model comparisons in terms of the smallest Bayesian information criterion or χ 2 of model fit.
Using sums or means of the observed Y variables for each factor in Figure 1 reduces the size of the model such that the four factors are replaced by four observed variables, namely, the sum or mean of the scales. Technically, when scale sums or means are used, multivariate normality is assumed and the variables are treated as continuous measures, often using maximum likelihood estimation. However, the multivariate normality assumption is usually violated, resulting in potentially biased estimation of the structural parameters. When the original ordinal observed variables in Figure 1 are specified as categorical, a probit model is fit to an item-level polychoric correlation matrix instead of a scale-level Pearson covariance matrix and estimated with weighted least squares [4], resulting in more accurate estimates.
Another way to treat ordinal scales in growth curve modeling is to apply weights to the different items and then average the item scores. This is intended to overcome the drawback of the implicit equal-weighting of all the items in mean/sum scores, which ignores the differential sensitivity of individual items in measuring a latent trait. There are many weighting schemes for creating composite scores [9]. For instance, maximal reliability weighting involves a confirmatory factor analysis (CFA) as the first step to identify the factor loading and residual variance of each item. The weight for each item can be generated by dividing the factor loading by the residual variance [10]. For ordinal scales, it could be sufficient to maximize the reliability of composite scores by weighting each item with its factor loading [11]. As composite scores with items weighted by factor loading are still not equivalent to the true estimates of latent variables of probability models, it remains unclear to what extent these weighted composite scores reflect the true parameters of growth curve modeling. Hereafter, we refer to this method as growth modeling of weighted means.
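A minimal sketch of how such composite scores could be formed is shown below; the item responses, factor loadings and residual variances are made-up illustrative numbers (in practice the loadings would come from a first-step CFA), and the residual variances are derived under the assumption of standardized loadings.

```python
import numpy as np

# Illustrative item responses (rows = respondents, columns = ordinal items coded 1-5)
items = np.array([[2, 3, 2, 4],
                  [1, 1, 2, 1],
                  [4, 5, 3, 4]], dtype=float)

# Standardized factor loadings, e.g. taken from a first-step CFA (made-up numbers)
loadings = np.array([0.80, 0.75, 0.60, 0.70])
residual_var = 1.0 - loadings ** 2          # assumes standardized loadings

mean_score = items.mean(axis=1)                                   # plain mean composite
loading_weighted = items @ loadings / loadings.sum()              # weighted by factor loadings
max_rel_weights = loadings / residual_var                          # maximal-reliability weights
max_rel_score = items @ max_rel_weights / max_rel_weights.sum()   # weighted mean composite

print(mean_score, loading_weighted, max_rel_score)
```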
This study aimed to compare the potential biases of using scale-level mean and weighted mean composite scores of ordinal items in growth modeling, relative to curve-of-factors modeling, under different numbers of response choices. We adopted the growth curve modeling of factors with ordinal indicators as the golden standard and posed no specific hypotheses about the biases of other approaches.
Method

Empirical and hypothetical population data
Two empirical ordinal datasets were used as the population data to ensure generalizability of the findings. The first empirical dataset was extracted from an ongoing Flourishing Family Project, which was designed to monitor multifarious aspects of over 600 families of two US western areas. More information about this project can be found at (https://familycenter.byu.edu/Pages/Sponsored-Research/2007/Flourishing.aspx). Data for this study involve adolescents' ratings of their parents' psychological control on a scale of eight questions (Table 1). The participants were asked to choose one of the following options for each question: 1=never, 2=rarely, 3=sometimes, 4=often, and 5=very often.
To simulate binary and trichotomous scales, this dataset of five categories was recoded and collapsed. Specifically, never and rarely were collapsed to have a value of zero and sometimes, often, and very often were combined into a value of one. The five categories were also collapsed such that 0=never and rarely, 1=sometimes, and 2=often and very often.
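The recoding just described amounts to a simple mapping of the five response categories; a sketch with illustrative responses follows.

```python
import numpy as np

# Original five-category responses: 1=never, 2=rarely, 3=sometimes, 4=often, 5=very often
responses = np.array([1, 2, 3, 4, 5, 3, 2])

# Binary recode: {never, rarely} -> 0; {sometimes, often, very often} -> 1
binary = (responses >= 3).astype(int)

# Trichotomous recode: {never, rarely} -> 0; {sometimes} -> 1; {often, very often} -> 2
trichotomous = np.select([responses <= 2, responses == 3, responses >= 4], [0, 1, 2])

print(binary)         # [0 0 1 1 1 1 0]
print(trichotomous)   # [0 0 1 2 2 1 0]
```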
The second dataset was adopted from a longitudinal project on first-generation bilingual Chinese immigrant families with young children. These families were followed four times during a two-year period. Participants were recruited from various organizations across the Maryland-Washington DC region. An ordinal scale of maternal life satisfaction with five questions was used in this study. There are seven rating points for the participants to choose from for each question, including 1=strongly disagree, 2=disagree, 3=slightly disagree, 4=neither agree nor disagree, 5=slightly agree, 6=agree, and 7=strongly agree. Thus, there were four datasets in total, three that measured psychological control respectively using binary, trichotomous, and five-point scales, and one that measured life satisfaction using a seven-point scale.
Procedure
The analysis and simulations were carried out in the following steps. First, the four empirical longitudinal datasets were subject to confirmatory factor analyses (CFA) to examine their measurement properties, including measurement invariance over time. The estimation method was the weighted least squares estimator with χ 2 test and degrees of freedom adjusted for the means and variances (WLSMV). The reliability (ω) for each measurement was calculated using the variance approach of McDonald [12]. We reported in detail only the CFA of the two empirical datasets of five- and seven-point scales for brevity.
Second, a latent growth curve modeling of factors ( Figure 1) was respectively fit to these four datasets of binary, trichotomous, five-, and seven-point scales. The four models with their parameter estimates served as population models to generate random data for simulations.
Third, the random datasets were generated with a sample size of 300, which was presumed to yield moderate sampling variations. This process was tantamount to drawing random samples from the population represented by the population models. A CFA was conducted with each dataset to obtain the standardized factor loadings, which were used to weight the individual items and create weighted means of the scales.
Last, latent growth curve modeling of factors (Figure 1) was fit to all these generated datasets to examine how well the population parameters can be recovered with the "golden standard" method. All new variables of the mean scores and weighted mean scores of binary, trichotomous, five-, and seven-category variables were subject to growth curve modeling for comparisons. As an exception, the weighted sums of binary scales of the simulated datasets were modeled, because the estimates of growth curve modeling were closer to the population parameters than those of weighted means.
The average estimates and standard deviations of the key parameters were compared to the original model parameters to examine potential biases. The modeling of the empirical data and simulations were conducted mainly with the latent variable modeling software Mplus (v8.0).
Measurement of the ordinal scales
The measurement model of psychological control for the first data set fit the data well, with χ 2 (730) =1909.11, p<0.01, CFI=0.94, TLI=0.93, RMSEA=0.05. The contents and factor loadings of the eight items scale are listed in Table 1. Invariance of factor loadings over time was tested by comparing this model with a model constraining the factor loadings to be equal across time. The χ 2 difference test indicated that the majority of the factor loadings were invariant over time (χ 2 diff (27)=38.04, p=0.08), except the last item at the first measurement that is indicated by an asterisk in Table 1 (χ 2 diff (1)=9.29, p<0.01). The high factor loadings and the reliabilities suggest that the psychological control was measured well over time.
The measurement model of the maternal life satisfaction in the second data set also fit the data well, with χ 2 (156)=332.47, CFI=0.99, TLI=0.99, RMSEA=0.07. Factor loadings were found to be largely invariant over time (χ 2 diff (11)=9.45, p=0.05), except the last item at the fourth occasion as indicated by the asterisk (χ 2 diff (1)=14.42, p<0.01). The item content and factor loadings are listed in Table 2. The high factor loadings and the reliabilities suggest that the construct of life satisfaction was also measured well over time. Thus, factor loadings were constrained to be invariant in subsequent latent growth curve modeling. The same tests and constraints were also applied to datasets of binary and trichotomous scales.
Latent growth curve modeling of the empirical data
A latent growth curve model with a logarithmic trajectory was identified to fit the empirical data of psychological control very well (χ 2 (730)=1909.11, p<0.01, CFI=0.94, TLI=0.93, RMSEA=0.05). As a latent construct, the initial value was set to a hypothetical mean of zero. The time scores for the model were specified as 0, 0.69, 1.10, 1.39, and 1.61, which take the natural logs of a linear trend of 1, 2, 3, 4, and 5. The growth rate was found to be α = 0.13.
Estimates of simulated data
Listed in Table 3 below are population parameters, the biases, mean estimates, and standard deviations of the five key parameters of the growth curve modeling under three different treatments of the simulated scales. A bias is defined by the difference between an average estimate of the simulated data and the population parameter. The key estimates of the three treatments of the ordinal scales were compared to the population parameters with one-sample z tests. The results in Table 3 suggest the following findings. First, growth curve modeling of the factors that are measured by the ordinal scales reflected the changes of the hypothetical true population with a maximum of 0.02 differences. Biases in the variances of the intercept and slope factors and the covariance of the intercept and slope factors approximated 0.06 when using binary scales. One-sample z-tests indicated that some of the population parameters can be recovered without any biases, as underlined in the table. In contrast, all the estimates of growth curve modeling of the mean scores and weighted mean scores were significantly different from the population means. Second, using the mean scores of binary scales for growth curve modeling resulted in appreciable underestimation of the slope mean, while using the mean scores of other ordinal scales for growth curve modeling reflected the change well (bias ≤ 0.04). Third, biases in the slope means of the population were similar whether using the sum of binary scales or weighted means of binary, trichotomous, or five-point scales. The appreciable bias in the slope mean occurred when using the weighted means of the seven-point scale (bias=0.05). Biases in the variances and covariance of the intercept and slope factors were no better than the other two approaches. Fourth, the means of the intercept factors depended on the number of response options: the more response options, the higher the intercept means (initial levels). Fifth, average estimates of growth modeling of both means and weighted means of the ordinal scales were all significantly different from the population parameters, as the z tests suggested.
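A bias and its one-sample z statistic, as defined above, can be computed from a set of replicated estimates as sketched below; the simulated slope estimates and the population value are illustrative numbers, not the study's actual simulation output.

```python
import numpy as np

def bias_and_z(estimates, population_value):
    """Bias = mean(simulated estimates) - population parameter;
    the one-sample z statistic tests whether that difference is reliably non-zero."""
    estimates = np.asarray(estimates, dtype=float)
    bias = estimates.mean() - population_value
    se = estimates.std(ddof=1) / np.sqrt(len(estimates))
    return bias, bias / se

# Illustrative slope-mean estimates from replicated simulated datasets (made-up numbers)
rng = np.random.default_rng(1)
slope_estimates = rng.normal(loc=0.11, scale=0.03, size=500)
print(bias_and_z(slope_estimates, population_value=0.13))
```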
Discussion
This simulation study compared latent growth modeling of mean and weighted mean scores of ordinal items to full curve-of-factors modeling of the original ordinal items. The reference values for these comparisons were population model parameters derived from empirical data, so that they are more plausible and generalizable than arbitrary specifications. The change of psychological control showed a logarithmic increase, which is a decelerating upward trend. This pattern seems reasonable: as adolescents try to gain more independence and autonomy, their parents gradually increase psychological control and abandon physical and verbal coercion. As for life satisfaction of the first-generation Chinese immigrants, it may be expected to decrease, as adaptations to a new culture might have been accompanied by financial and job stresses.
The simulations suggested that growth modeling of factors that were measured by the ordinal scales provides good estimates of the hypothetical population parameters. Although some average estimates were significantly different from the population parameters, the magnitude of these differences is minimal, or practically trivial. In contrast, modeling the means or weighted means of ordinal items would bias the variances of the intercept and slope factors, especially the intercept factor. Large biases in variances of the intercept and slope factors could mislead practical efforts in dealing with individual differences. It is comforting that modeling the means or weighted means of ordinal scales resulted in negligible biases in the change rates of the population, except for binary scales, so publications of changes estimated this way could still be credible. As means or weighted means of the ordinal scales are dependent on the number of response categories, it is difficult to compare them with the latent continuous factors measured by the ordinal scales.
Weighted means of the ordinal scales did not perform any better than the means of ordinal scales. One explanation is that weights make a difference in the composite only when the variables are not correlated. As all the scale variables are highly correlated, their contributions to the variances overlap and thus do not appear as expected [13].
This study has some limitations. We have not included other data conditions such as various distributions of the ordinal scales and sample sizes that might contribute to the biases. It was suggested by Coenders et al. [14] that a five-point scale with a middle value of zero and normal distributions could result in negligible biases in the latent variable relations, as in the case of the covariance of the intercept and slope factors in this study. This is because the range of the five-point scale is close to that of a typical latent variable. In addition, it could be expected that smaller samples would result in larger variances of the simulated estimates, whereas skewed distributions may result in greater deviation from the true means. Moreover, growth curve modeling of sums of ordinal scales was not examined with simulations, because the sum differs from the mean just by a constant (division by the number of items), offering little extra information.
Another limitation of this study is that we have omitted a two-step approach (latent scoring and modeling) in the comparison. This method first obtains the estimated factor scores from measurement models and then uses these scores as observed variables in subsequent growth curve modeling [15]. This practice conforms to item response theory modeling that is widely accepted in the education field. The requirement of measurement invariance may be satisfied by testing and constraining discrimination and threshold (difficulty) parameters to be the same over time. In addition to the widely accepted theoretical basis, one advantage of this method is that it may be less computationally time-consuming than direct modeling of the ordinal items, which may be particularly useful when a model with many items is fit to relatively small samples. This approach works well to model relations among latent constructs [16]. However, given that a shortened scale of four or six items could function as well as a long one (Embretson and Hershberger [17]; Kenny [18]) and could be modeled directly, this approach does not appear to be advantageous for growth curve modeling, but might be examined in the future.
Conclusion
It is not advisable to use means or weighted means of ordinal items for latent growth curve modeling. Ordinal scales can best be modeled directly in latent growth curve modeling. Published reports of growth curve modeling with ordinal scales may be evaluated with findings of this study as a reference.
|
v3-fos-license
|
2014-10-01T00:00:00.000Z
|
2009-09-01T00:00:00.000
|
2356131
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.3201/eid1509.090438",
"pdf_hash": "d3dc89c637cab7abefccfcdb7e8ccd68002d761e",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42404",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "22918d46ed773059af850a4fe0ebf4bf05ae74d4",
"year": 2009
}
|
pes2o/s2orc
|
Coxsackievirus A6 and Hand, Foot, and Mouth Disease, Finland
During fall 2008, an outbreak of hand, foot, and mouth disease (HFMD) with onychomadesis (nail shedding) as a common feature occurred in Finland. We identified an unusual enterovirus type, coxsackievirus A6 (CVA6), as the causative agent. CVA6 infections may be emerging as a new and major cause of epidemic HFMD.
Hand, foot, and mouth disease (HFMD) is a common childhood illness characterized by fever and vesicular eruptions on hands and feet and in the mouth (Figure 1). It is caused by members of the family Picornaviridae in the genus Enterovirus. Complications are rare, but pneumonia, meningitis, or encephalitis may occur. Outbreaks of HFMD have been mainly caused by 2 types of enterovirus A species, coxsackievirus (CV) A16 (CVA16) or enterovirus 71 (1). Some outbreaks have been associated with CVA10, but only sporadic cases involving other members of the enterovirus A species have been reported (2,3).
During fall 2008, a nationwide outbreak of HFMD occurred in daycare centers and schools in Finland, starting in August and continuing at least until the end of the year and possibly into the following year. From vesicle fluid specimens of hospitalized children, we identified the etiologic agent as coxsackievirus A6.
The Study
In August 2008, vesicle fluid specimens were collected from 2 children and 1 parent with HFMD at the Central Hospital of Seinäjoki, Southern Ostrobothnia. Specimens were sent to the Department of Virology, University of Turku, for identification of the causative agent. After detection of CVA6 in these index cases, the virus was also found in specimens obtained from the Pirkanmaa Hospital District (Tampere), Turku University Hospital (Turku), Pori Central Hospital (Pori), and Central-Ostrobothnia Central Hospital (Kokkola) (Table).
Nucleic acids were extracted from specimens by using the NucliSens EasyMag automated extractor (bioMérieux, Boxtel, the Netherlands). When the extracts were analyzed for enteroviruses by using real-time reverse transcriptase-PCR (RT-PCR) specific for the 5′ noncoding region (NCR) of picornaviruses (4), amplicons with melting points indistinguishable from each other and typical of enteroviruses were obtained.
To identify the enterovirus type in the specimens, RT-PCR specific for a partial sequence of the viral protein 1 (VP1) region was performed by using the COnsensus-DEgenerate Hybrid Oligonucleotide (CODEHOP) primers (bioinformatics.weizmann.ac.il/blocks/codehop.html) (5). The amplicons were separated by agarose gel electrophoresis, purified with the QIAquick PCR Purification Kit (QIAGEN, Hilden, Germany), and sequenced in the DNA Sequencing Service Laboratory of the Turku Centre for Biotechnology. The virus type in the 3 index specimens, 3 samples of vesicular fluids, and 1 throat swab was successfully identified with sequencing and BLAST (www.ncbi.nlm.nih.gov/BLAST) analysis as CVA6. Phylogenetic relationships of the sequences were examined by using CVA6 (Gdula strain), CVA16 (G10), and enterovirus 71 (BrCr) prototype strains as well as selected clinical CVA6 isolates obtained from GenBank. Sequence alignments were generated with the ClustalW program (www.ebi.ac.uk/clustalw), and the phylogenetic tree was computed by using the Jukes-Cantor algorithm and the neighbor-joining method. Phylogenetic analyses were conducted by using MEGA4 software (www.megasoftware.net) and the bootstrap consensus tree inferred from 1,000 replicates (6). Phylogenetic analysis placed all CVA6 strains from the HFMD outbreak in 1 cluster (97%-100% identity), whereas the nucleotide identities between those isolates and the CVA6 prototype strain Gdula, CVA16 G-10, and enterovirus 71 BrCr were 82.5%-83.2%, 55.6%-56.6%, and 55.6%-57.3%, respectively. The closest preceding CVA6 strain was isolated from cerebrospinal fluid in the United Kingdom in 2007 and had 92%-94% nucleotide identity with the strains described here (7).
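As a rough illustration of the identity and distance measures used above, the sketch below computes percent nucleotide identity and the Jukes-Cantor distance for one pair of aligned sequences; the example fragments and function names are ours and are not part of the study's pipeline.

```python
# Sketch, assuming the two sequences are already aligned and of equal length.
import math

def percent_identity(seq1: str, seq2: str) -> float:
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a != "-" and b != "-"]
    return 100.0 * sum(a == b for a, b in pairs) / len(pairs)

def jukes_cantor(seq1: str, seq2: str) -> float:
    """d = -(3/4) * ln(1 - 4p/3), with p the proportion of differing sites."""
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a != "-" and b != "-"]
    p = sum(a != b for a, b in pairs) / len(pairs)
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

# Hypothetical partial VP1 fragments, for illustration only
a = "ATGGCTACGTTAGCAA"
b = "ATGGCTACGTCAGCAA"
print(round(percent_identity(a, b), 1), round(jukes_cantor(a, b), 4))
```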
To improve the detection of the novel CVA6 strains in clinical specimens, we designed specific VP1 primers from the aligned sequences. The CVA6vp1 reverse primer (5′-ACTCGCTGTGTGATGAATCG-3′) and the CVA6vp1 forward primer were used with the following cycling conditions: initial denaturation at 95ºC for 10 min; 45 cycles at 95ºC for 15 s, 60ºC for 30 s, and 72ºC for 45 s; followed by generation of a melting curve from 72°C to 95°C with temperature increments of 0.5°C/s. Partial 5′ NCR sequence of the strains in clinical specimens was determined as described (4) and compared with known sequences by using BLAST (http://blast.ncbi.nlm.nih.gov/Blast.cgi). During autumn 2008, a total of 47 acute-phase specimens, including 12 vesicle fluid samples, 23 throat swabs, 2 tracheal aspirates, 5 fecal samples, and 5 cerebrospinal fluid specimens from 43 patients, yielded amplicons with melting points similar to those of the originally identified CVA6 strains in 5′ NCR RT-PCR. All specimens were subjected to the specific CVA6-VP1 real-time RT-PCR, and a positive result was obtained for 11 vesicle fluid samples, 14 throat swabs, 2 tracheal aspirates, and 4 fecal samples (Table). The virus in 1 throat swab was identified as CVA6 from the result of 5′ NCR sequencing alone. None of the CVA6-positive specimens were positive by an RT-PCR assay with CVA16- and EV71-specific primers (8). Attempts to cultivate the virus from 8 CVA6 RT-PCR-positive specimens were unsuccessful, whereas the prototype strain could be propagated in rhabdomyosarcoma cells.
Onychomadesis was 1 characteristic feature in patients during this HFMD outbreak; parents and clinicians reported that their children shed fingernails and/or toenails within 1-2 months after HFMD (Figure 1). Only a few published reports of nail matrix arrest in children with a clinical history of HFMD exist in the medical literature (9)(10)(11). We obtained shed nails from 2 siblings who had HFMD 8 weeks before the nail shedding. The nail fragments were stored at -70°C for a few weeks and treated with proteinase K before nucleic acid extraction. The extracts were enterovirus positive in 5′ NCR RT-PCR. The virus in one of them was identified as CVA6 by the specific RT-PCR and yielded a 5′ NCR sequence that was similar to the novel CVA6 strains.
Conclusions
Enterovirus CVA6 was a primary pathogen associated with HFMD during a nationwide outbreak in Finland in autumn of 2008. HFMD epidemics have primarily been associated with CVA16 or enterovirus 71 infections; those caused by enterovirus 71 have occurred more frequently in Southeast Asia and Australia in recent years (12). Reportedly, CVA10 has been found in minor outbreaks; other coxsackievirus A types have been found in only sporadic cases of HFMD (2,3). In general, CVA6 infections have been seldom detected and mostly in association with herpangina (13,14). In Finland, CVA6 has been identified only on 4 occasions over 8 years during enterovirus surveillance from 2000 to 2007 (15).
Although the CODEHOP primers were elementary for rapid genotyping of the novel CVA6 strains, we identified more viruses with the designated CVA6-VP1 specific primers. Onychomadesis was a hallmark of this HFMD outbreak. To our surprise, we detected CVA6 also in a fragment of shed nail. The same virus could have given rise to the outbreak in Spain in 2008 (10). Supposedly, virus replication damages nail matrix and results in temporary nail dystrophy. Whether nail matrix arrest is specific to CVA6 infections remains to be shown. This study demonstrates that CVA6, in addition to CVA16 and enterovirus 71, may be emerging as a primary cause of HFMD.
|
v3-fos-license
|
2020-03-02T02:00:30.535Z
|
2020-02-28T00:00:00.000
|
211572752
|
{
"extfieldsofstudy": [
"Physics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41535-020-00270-w.pdf",
"pdf_hash": "a36cfd24ec7edf38d1a7bc474a4aeeb88fc3b90c",
"pdf_src": "Arxiv",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42406",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "4f58a03e44e486628d1e641ff600bb86593c190f",
"year": 2020
}
|
pes2o/s2orc
|
Tuning magnetic confinement of spin-triplet superconductivity
Electrical magnetoresistance and tunnel diode oscillator measurements were performed under external magnetic fields up to 41 T applied along the crystallographic b-axis (hard axis) of UTe$_2$ as a function of temperature and applied pressures up to 18.8 kbar. In this work, we track the field-induced first-order transition between superconducting and magnetic field-polarized phases as a function of applied pressure, showing a suppression of the transition with increasing pressure until the demise of superconductivity near 16 kbar and the appearance of a pressure-induced ferromagnetic-like ground state that is distinct from the field-polarized phase and stable at zero field. Together with evidence for the evolution of a second superconducting phase and its upper critical field with pressure, we examine the confinement of superconductivity by two orthogonal magnetic phases and the implications for understanding the boundaries of triplet superconductivity.
Previous work on uranium-based compounds such as UGe 2 , URhGe and UCoGe has unearthed a rich interplay between superconductivity and ferromagnetism in this family of materials [1], with suggestions that ferromagnetic spin fluctuations can act to enhance pairing [2]. The recent discovery of superconductivity in UTe 2 has drawn strong attention owing to a fascinating list of properties - including absence of magnetic order at ambient pressure [3], Kondo correlations and extremely high upper critical fields [4] - that have led to proposals of spin-triplet pairing [4][5][6][7], and a chiral order parameter [8,9]. In addition, at least two forms of re-entrant superconductivity have been observed in high magnetic fields, including one that extends the low-field superconducting phase upon precise field alignment along the crystallographic b-axis [10], and an extreme high-field phase that onsets in pulsed magnetic fields above the paramagnetic normal state at angles tilted away from the b-axis [11]. Applied pressure has also been shown to greatly increase the superconducting critical temperature T c in UTe 2 [12,13], from 1.6 K to nearly double that value near 10 kbar, and to induce a second superconducting phase above a few kbar [13]. Upon further pressure increase, evidence of a suppression of the Kondo energy scale leads to an abrupt disappearance of superconductivity and a transition to a ferromagnetic phase [12]. Together with the ambient pressure magnetic field-induced phenomena [10,11,14,15], the axes of magnetic field, temperature and pressure provide for a very rich and interesting phase space in this system. One of the key questions is in regard to the field-polarized (FP) phase that appears to truncate superconductivity at 34.5 T under proper b-axis field alignment [10,11], in particular regarding the nature of the coupling of the two phases and whether superconductivity could persist to even higher fields in the absence of the competing FP phase. The relation between the FP phase and the pressure-induced magnetic phase, which also competes with superconductivity [11], is similarly not yet fully understood.
In this work, we perform magnetoresistance (MR) and tunnel diode oscillator (TDO) measurements under both high hydrostatic pressures P and high magnetic fields H along the crystallographic b-axis to explore the (H, T, P ) phase diagram. We find that the FP phase that interrupts superconductivity at ambient pressure is strengthened with increasing pressure, so as to suppress the transition field until there is no trace of superconductivity down to 0.4 K above 16 kbar. At higher pressures, we find evidence of a distinct magnetic phase that appears to be ferromagnetic in nature and is also bordered by the FP phase at finite fields. Together with previous observations at ambient pressure, these results suggest a spectrum of magnetic interactions in UTe 2 and a multifaceted ground state sensitive to several physical tuning parameters.
Single crystals of UTe 2 were synthesized by the chemical vapor transport method as described previously [4]. The crystal structure of UTe 2 is orthorhombic and centrosymmetric, and the magnetic easy axis is the a-axis. Experimental measurements were conducted at the DC Field Facility of the National High Magnetic Field Laboratory (NHMFL) in Tallahassee, Florida, using a 41 T resistive magnet with a helium-3 cryostat. Resistance and magnetic susceptibility measurements were performed simultaneously on two individual samples from the same batch positioned in a non-magnetic piston-cylinder pressure cell. The pressure medium was Daphne 7575 oil, and pressure was calibrated at low temperatures by measuring the fluorescence wavelength of ruby, which has a known temperature and pressure dependence [16,17]. The TDO technique uses an LC oscillator circuit biased by a tunnel diode whose resonant frequency is determined by the values of the LC components, with the inductance L given by a coil that contains the sample under study; a change in the sample's magnetic properties results in a change in resonant frequency proportional to its magnetic susceptibility. Although not quantitative, the TDO measurement is sensitive to the sample's magnetic response within the superconducting state, where the sample resistance is zero [18][19][20]. Both the current direction for the standard four-wire resistance measurements and the probing field generated by the TDO coil are along the crystallographic a-axis (easy axis). The dc magnetic field was applied along the b-axis (hard axis) for both samples.
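To make concrete why the TDO frequency tracks the sample's susceptibility, the sketch below evaluates the resonant frequency f = 1/(2π√(LC)) of an LC tank whose coil inductance is shifted by the sample; all component values and the filling factor are illustrative assumptions, not the actual circuit parameters.

```python
# Illustrative LC-tank estimate of the TDO frequency shift from susceptibility.
import math

L0 = 1.0e-6      # H, empty-coil inductance (assumed)
C = 100.0e-12    # F, tank capacitance (assumed)
fill = 0.1       # sample filling factor of the coil (assumed)

def resonant_frequency(L: float, C: float) -> float:
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

f0 = resonant_frequency(L0, C)
for chi in (-1.0, 0.0, 0.01):            # perfect diamagnet, empty coil, weak paramagnet
    L = L0 * (1.0 + fill * chi)          # inductance shifted by the sample's susceptibility
    df = resonant_frequency(L, C) - f0
    print(f"chi = {chi:+.2f}: delta f = {df / 1e3:+.1f} kHz")
```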
The magnetic field response of electrical resistance R at low pressures is similar to previous results at ambient pressure, which showed that the superconducting state persists up to nearly 35 T for H b, and re-entrant behavior can be observed near T c for slight misalignment of the field [10]. As shown in Fig. 1(a), application of 4 kbar of pressure reduces the cutoff field H * to 30 T at 0.38 K (T c = 1.7 K without applied field), but retains the very sharp transition to the FP state above which a negative MR ensues. Upon temperature increase, a re-entrant feature emerges below H * similar to previous reports [10] but only above about 1.3 K, indicating either nearly perfect alignment along the b-axis or a reduced sensitivity to field angle at finite pressures.
Upon further pressure increase, T c increases as previously shown [12,13], up to 2.6 K and 2.8 K at 8.5 kbar and 14 kbar, respectively. However, H * is continuously reduced through this range and changes in character. As shown in Fig. 1(b) and (c), at higher pressures H * and H c2 dissociate, beginning as a single sudden rise with a broadened peak (denoted H p ) in resistance at 0.4 K that becomes better-defined upon increasing from lowest temperature, before separating into two distinct transitions at higher temperatures. Interestingly, the transition is the sharpest when the H c2 transition separates from H * and moves down in field. Further, the coupled transitions slightly decrease in field until about 2 K, above which the resistive H c2 continues to decrease while H * stalls (e.g. at about 12 T for 14 kbar) until washing out above approximately 20 K. This indicates a strong coupling between the two transitions that is weakened both on pressure increase and temperature increase, despite the first-order nature of the FP phase. At 18.8 kbar, shown in Fig. 1(d), where no superconducting phase is observed down to 0.37 K, the sharp feature associated with H * is gone, and only a broad maximum in R remains near 8 T. Figure 2 presents the frequency variation ∆f in the TDO signal, which is due to the changes in magnetic susceptibility of the sample and therefore sensitive to anomalies in the zero-resistance regime. In addition to a sharp rise at H * , which corresponds to a diamagnetic to paramagnetic transition, and changes in slope consistent with the re-entrant behavior mentioned above [ Fig. S3 in SI], there is another feature in the 4 kbar data within the superconducting state observable at lower fields. At temperatures below 1 K, ∆f initially increases with field before abruptly transitioning to a constant above a characteristic field H c2 (2) , and finally jumping at the H * transition. As temperature is increased, H c2(2) decreases in field value until it vanishes above T c , tracing out an apparent phase boundary within the superconducting state. As shown in Fig. 3, the path of H c2(2) merges with the zero-field critical temperature of the second superconducting phase "SC2" discovered by ac calorimetry measurements [13]. As shown in Fig. 3(a), these data identify SC2 as having a distinct H c2 (T ) phase boundary from the higher-T c "SC1" phase, with a zero-temperature upper critical field of approximately 11 T at 4 kbar. Upon further pressure increase, the H c2(2) transition is suppressed in field, tracing out a reduced SC2 phase boundary [TDO data for 8.5 kbar in SI] that is absent by 14 kbar. In essence, it appears that the SC2 phase is suppressed more rapidly than the SC1 phase, which will provide insight into the distinction between each phase [21].
In contrast to the abrupt increase of ∆f upon crossing H* into the FP phase at lower pressures, the TDO signal exhibits a qualitatively different response in the high-pressure regime where superconductivity is completely suppressed. As shown in Fig. 2(b), at 18.8 kbar ∆f decreases at a characteristic field HM (= 12.5 T at 0.37 K), indicating a decrease of magnetic susceptibility upon entering the FP phase that is opposite to the increase observed in ∆f at lower pressures (e.g. from the normal state above Tc to the FP state, in Fig. 2a). The drop at HM increases in field value and gradually flattens out as temperature increases, consistent with a ferromagnetic-like phase transition that gets washed out with magnetic field. Based on observations of hysteresis in transport (Fig. 1(d) inset) that are consistent with this picture, as well as evidence from previous pressure experiments identifying similar hysteretic behavior [12], we label this phase as a ferromagnetic (FM) ground state that evolves from zero temperature and zero magnetic field, and, similar to superconductivity at lower pressures, is truncated by the FP phase and therefore distinct from that ground state.
Compiling this data, we summarize the observed features and phase boundaries in both resistance and TDO measurements in Fig. 3. We identify five phases: two superconducting phases (labeled SC1 and SC2), the normal phase (labeled N), the FP phase, and the FM phase, which is only observed at 18.8 kbar. The first three phase diagrams (4, 8.5 and 14 kbar) show a smooth growth of the FP phase with pressure and the emergence of a more [...]. [Fig. 3 caption fragment: pink diamonds indicate the critical temperature Tc(2) obtained from Ref. [13]; green triangles label the position Hp of the peak in magnetoresistance in panels (a)-(c); purple downward triangles label the magnetic transition HM identified in TDO measurements (c.f. Fig. 2, Fig. 3(a) inset).] Tracking the resistance peak Hp to fields above H* traces a non-monotonic curve that, when below Tc, mimics the extension of Hc2(T) of the SC1 phase, again suggesting an intimate correlation between the two phases. This is corroborated by the fact that at 18.8 kbar, when superconductivity is completely suppressed, the onset of the FP phase shows a more conventional monotonic evolution with increasing field and temperature.
In an effort to explain the qualitative features of the phase diagram, we consider the phenomenological Ginzburg-Landau (GL) theory describing the superconducting order parameter η. For simplicity we shall consider η to be single-component, relegating to the Supplementary Materials the consideration of a multicomponent order parameter proposed theoretically for UTe 2 [22,23]. The free energy consists of three parts, with the first term describing the superconducting order parameter in the applied field [24]; here D_i = −i∇_i + (2π/Φ_0)A_i denotes the covariant derivative in terms of the vector potential A, Φ_0 = hc/2e is the quantum of magnetic flux, and K_ij = diag{K_x, K_y, K_z} is the effective mass tensor in the orthorhombic crystal. The simplest way in which the superconducting order parameter couples to the field-induced microscopic magnetization M is via the biquadratic interaction F_c = gM²|η|², where the internal magnetic field is B/µ_0 = M + H. The metamagnetic transition is described by the Landau theory of magnetization with a negative quartic term (u, v > 0) [Eq. (2)]. Taking the field H||b, and hence A = (Hz, 0, 0), we minimize the GL free energy to obtain the linearized gap equation, from which one determines H_c2 as the lowest eigenvalue of the differential operator in a standard way, similar to the problem of Landau levels for a particle in a magnetic field [25] [Eq. (4)]; the slope of H_c2 at T_c in the absence of magnetization enters through H_0, and α_0 is expressed in terms of the correlation length ξ_0. The upshot of Eq. (4) is that the upper critical field is reduced from its bare value H_0(T_c − T)/T_c by the presence of the magnetization M. The latter is a function of magnetic field, M(H), to be determined from Eq. (2), and while its value depends on the phenomenological coefficients of the Landau theory, qualitatively the metamagnetic transition results in a sudden increase of M at H* (by ∆M ≈ 0.6µ_B at H* = 34 T at ambient pressure [10]). This then drives H_c2 down according to Eq. (4) [26] and pins the upper critical field at the metamagnetic transition, explaining the sudden disappearance of superconductivity at the field H* that marks the onset of the FP phase in Fig. 4(c).
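For orientation, the standard GL forms consistent with the definitions above can be written as follows; this is a hedged reconstruction, and the coefficient conventions, the exact form of F_M, and the expression for α_0 are our assumptions rather than the authors' verbatim Eqs. (1)-(4).

```latex
% Hedged reconstruction; conventions and coefficients are assumptions, not the
% authors' verbatim equations.
\begin{align}
  F &= F_{\mathrm{SC}} + F_{M} + F_{c}, &
  F_{\mathrm{SC}} &= \alpha(T)\,|\eta|^{2} + \tfrac{\beta}{2}\,|\eta|^{4}
        + K_{ij}\,(D_{i}\eta)^{*}(D_{j}\eta), \\
  D_{i} &= -i\nabla_{i} + \frac{2\pi}{\Phi_{0}}\,A_{i}, &
  F_{c} &= g\,M^{2}\,|\eta|^{2}, \\
  F_{M} &= \tfrac{a}{2}M^{2} - \tfrac{u}{4}M^{4} + \tfrac{v}{6}M^{6} - \mu_{0} M H, &
  H_{c2}(T) &\simeq H_{0}\,\frac{T_{c}-T}{T_{c}} - \frac{g}{\alpha_{0}}\,M^{2}(H),
  \quad \alpha_{0} = \frac{\hbar^{2}}{2m\,\xi_{0}^{2}}.
\end{align}
```

In this form the role of the metamagnetic jump in M(H) is transparent: any sudden increase of M at H* lowers the right-hand side of the H_c2 expression and truncates the superconducting phase there.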
Focusing on the evolution of the ground state of UTe 2 with field and pressure (i.e., at our base temperature of ∼0.4 K), we present summary plots of the resistance and TDO data as well as the ground-state field-pressure phase diagram in Fig. 4. [Fig. 4 caption fragment: the resultant phase diagram at base temperature is presented in panel (c), where the phase boundary between the SC1 and FP phases is determined by midpoints of resistance transitions (black circles, using the average of upsweep and downsweep curves) and TDO transitions (red triangles), with error bars indicating the width of transitions; brown squares indicate the phase boundary of SC2 based on kinks in TDO frequency; green diamonds indicate the transition between FM and FP phases determined from the midpoint of drops in the TDO frequency response; zero-pressure and zero-field data points are obtained from Refs. [11] and [13], respectively; all lines are guides to the eye.] As shown, the field boundaries of both the SC1 and SC2 superconducting phases decrease monotonically with increasing pressure. However, we point out that, while the boundary of SC2 appears to be an uninterrupted upper critical field, that of SC1 is in fact the cutoff field H*. It follows from Eq. (4) that this cutoff field is reduced compared with the putative Hc2, which would lie at higher fields if it were derived from an orbital-limited model without taking the metamagnetic transition into account.
While the T c of SC1 increases with pressure, the cutoff imposed by H * introduces difficulty in determining whether its putative H c2 would also first increase with pressure. On the contrary, the unobstructed view of H c2 for SC2 shows a decrease with increasing pressure that is indeed consistent with the suggested decrease of the lower T c transition observed in zero-field specific heat measurements [13]. Between 15.3 and 18.8 kbar, the H * cutoff is completely suppressed and the FM phase onsets. While it is difficult to obtain a continuous measure of the pressure evolution through that transition, the step-like increase in the TDO frequency at a field near 12.5 T (c.f. Fig. 4(b)) measured at P = 18.8 kbar suggests that the low-field FM phase is the true magnetic ground state of the system, separate from the FP phase. Upon closer inspection, we note that the step-like change in the TDO frequency in Figs. 2(b) and 4(b) is in fact an inflection point, suggesting that the FM and FP phases are in fact separated by a crossover, rather than a true phase transition. This is entirely natural from the Landau theory perspective, since the external magnetic field is conjugate to the FM order parameter M in Eq. (2), and the metamagnetic crossover at field H M leads to a step-like increase in the magnetization, reflected in our TDO measurement.
This crossover boundary H M between the FM and FP phases appears much less sensitive to pressure for P > P c , as evidenced by the minimal change in field value between 18.1 and 18.8 kbar. Because the experimental pressure cannot be tuned continuously, it is difficult to extract the behaviour of the crossover boundary at P c . However, the previously observed discontinuity between the FM and SC1 phases as a function of pressure [12] suggests that the FP phase should extend down to zero field at a critical point of P c ∼ 17 kbar, exactly where previous zero-field work has shown an abrupt cutoff of T c and the onset of a non-superconducting phase [13].
In summary, we have explored the pressure evolution of multiple superconducting and multiple magnetic phases of UTe 2 as a function of applied pressures and magnetic fields applied along the crystallographic b-axis, where superconductivity is known to extend to the highest fields. The field-induced metamagnetic transition results in a field-polarized phase which cuts off superconductivity prematurely, as explained by a phenomenological Ginzburg-Landau theory. Under increasing pressure, the superconducting phase eventually becomes completely suppressed, at the critical pressure where we observe an onset of a distinct ferromagnetic-like ground state.
|
v3-fos-license
|
2020-06-18T14:36:32.910Z
|
2020-06-18T00:00:00.000
|
219731007
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://ijponline.biomedcentral.com/track/pdf/10.1186/s13052-020-00848-x",
"pdf_hash": "ee623a00983353efb7e7ac56dc7a0b1127d5dbe7",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42408",
"s2fieldsofstudy": [
"Psychology"
],
"sha1": "ee623a00983353efb7e7ac56dc7a0b1127d5dbe7",
"year": 2020
}
|
pes2o/s2orc
|
Prevalence and factors associated with disruptive behavior among Iranian students during 2015: a cross-sectional study
Background Disruptive behavior can have lifetime consequences for youth. Prevention, early identification and treatment of disruptive behavior can improve outcomes for these youth. The purpose of the present study was to assess the prevalence of disruptive behavior among a sample of Iranian youth, and the relationship of disruptive behavior to other psychological phenomena that may be targeted for prevention, early identification and treatment. Method The sample consisted of 600 high school students (300 boys and 300 girls; ages 15 to 18 years old) selected through multi-stage random sampling in Saveh city, Iran, in 2015. Questionnaires assessed several phenomena including demographics, life satisfaction, social support, depression, stress, smoking and hopefulness. The Disruptive Behavior Scale was also utilized. Univariate analyses were followed by multiple logistic regressions to examine relations among disruptive behavior and other constructs. Results Prevalence of disruptive behavior was 7.5% in boys and 3.1% in girls. Mean scores were 22.97 ± 1.17 for boys and 19.15 ± 1.06 for girls, with a significant difference between them (P < 0.05). The results of regression revealed that low life satisfaction (OR = 3.75; 95% CI: 2.37–5.91), social support (OR = 0.72; 95% CI: 0.56–0.82), hopefulness (OR = 0.85; 95% CI: 0.62–0.92), smoking (OR = 3.65; 95% CI: 2.19–6.06), being male (OR = 2.55; 95% CI: 1.54–4.22), higher stress (OR = 1.92; 95% CI: 1.60–2.91) and depression (OR = 2.76; 95% CI: 1.82–4.88) were significant factors in predicting disruptive behavior. Conclusion Disruptive behavior was associated with life satisfaction, smoking, being a boy, social support, hopefulness, stress, and depression. Targeting constructs (e.g., support, stress) associated with disruptive behavior may assist in prevention, early identification and treatment of problem behavior. For example, health promotion programs to increase hopefulness, satisfaction and support, and reduce stress, depression and smoking might be of importance for prevention and treatment of disruptive behavior.
Introduction
Disruptive behavior disorders (DBDs) are defined as "student behavior that systematically disrupts educational activities, undermines the habitual development of the tasks carried out in the classroom and causes teacher to invest a significant amount of time in dealing with it, time that should otherwise be devoted to the processes of teaching and learning" [1]. Based on the DSM-5, DBD is defined as a repetitive and persistent pattern of behavior in which the basic rights of others or major age-appropriate societal norms or rules are violated, as manifested by the presence of criteria such as aggression to people or animals, and destruction of property [2]. Disruptive behavior has different forms. One example is the student who talks continually while the teacher is teaching, interrupts the class by asking questions and making different sounds, uses different forbidden gadgets like cell phones in class [3], and becomes angry when the teacher opposes his/her inappropriate behavior [4].
Early onset DBD can have life-time consequences, including school absences, poor school achievement, substance use, aggression and anxiety; and DBD tends to continue to adulthood [5]. Adolescents with DBD have low self-control, conflictual relationships, and low empathy. These youth have difficulty with interpersonal relationships, and managing behavior, putting them at high risk for violence and substance abuse [6].
A 2016 survey in Amsterdam revealed that the most prevalent disorders among adolescents were disruptive behaviors [7]. Prevalence rates were 8.5% according to the DSM-IV and 7.1% according to the ICD-10 in Brazilian youth in 2010 [8]. Most studies in this field are from western countries. For example, a 2012 Dutch population study indicated a mean prevalence rate of 12.8% for DBDs, with 9.3% for girls and 15.2% for boys [9]. Although a 2016 community-based study in Iranian children and adolescents revealed the prevalence of psychiatric disorders was 10.55%, the study did not specifically screen for disruptive behavior and did not attend to gender differences in prevalence rates. In addition, this study did not include youth attending schools in noncapital cities, nor did it include important psychosocial factors [10] that might be targeted for prevention, early detection or treatment. Despite problems resulting from disruptive behavior, it has received little attention in the literature [11]. Furthermore, compared to boys, the study of contributing factors of disruptive behavior in girls is underdeveloped [1]. As such, it is important to identify possible predictors of disruptive behavior in both boys and girls in order to establish prevention and treatment programs [12].
It has been reported that almost 22% of children and adolescents suffer from some form of psychiatric disorder [13]. In a 1997 study of Iranian elementary school children, 1.8% of boys and 12.1% of girls had disruptive behavioral disorders [14]. In another study of Iranian children and adolescents in 2016, the prevalence of oppositional defiant disorder (ODD) was 4.45% [13]. There is a strong need to better understand the prevalence of mental disorders, and to understand factors related to mental disorders, in children and adolescents in Iran. Addressing mental health services needs is a priority. Understanding psychiatric disorders in the context in which they occur is necessary in order to provide effective psychiatric services [15]. Although many studies have been carried out on disruptive behavior in western countries, no study has so far investigated the prevalence of disruptive behavior in Iran using a culturally adapted instrument. Additionally, studying psycho-social phenomena associated with DBD may assist in better understanding how to mitigate this behavior disorder [16]. For example, one study showed depressive symptoms mediated the relation between marijuana use and disruptive behavior [17], whereas another found that personal characteristics, such as maladaptive parenting, predict disruptive behavior [18]. Therefore, the purpose of the present research was to evaluate the prevalence of disruptive behavior, and its association with other psychological phenomena, in a sample of Iranian youth.
Study design
This was a cross-sectional study. The research was conducted in Saveh city, in the center of Iran. The adolescent population is estimated at 47,425 inhabitants. Of students invited to participate, the response rate was 98% (600 out of 612 surveyed among 10th-12th grade students). Students completed paper-and-pencil, self-administered questionnaires in their classrooms. Questionnaires were delivered in a packet, with the same unique identification number on each questionnaire within the packet. Student identifying information was not collected. Questionnaires were completed in the presence of a researcher who explained the procedures and the aim of the study. Teachers left the schoolroom during completion of questionnaires. Students took 20 min to answer the questions. Once completed, students put their questionnaires into a box in order to maintain anonymity.
Participants and sampling
The study consisted of 600 students, 15 to 18 years old, who lived in Saveh, Iran in 2015. There were 300 female participants and 300 male participants in the research. Approximately 83% (n = 503) of adolescents were born in Saveh City, whereas all other adolescents were born outside of Saveh City. Written informed consent was obtained from parents (youth provided assent). All procedures were reviewed and approved by both the Saveh University of Medical Science and the Ministry of Education (Saveh county department of education). The research team clarified to participants that their answers would remain confidential. Participant inclusion criteria were: ability to provide informed assent/consent, age 15-18 years, and attending high school in Saveh city. There were no specific exclusion criteria, other than that participants must be willing to participate and comply with the study protocol.
The sample was obtained using multistage sampling with three stages. Multistage sampling methods can be used to recruit participants in experimental or observational studies. Schools were selected from 32 high schools in two city regions. Each school was given a specific number. Using a random numbers table, 4 high schools (2 girls' high schools and 2 boys' high schools) were selected from each region, for a total of 8 high schools. The quota of students from each school was based on the proportion of students in that school, and all classes in the selected schools were included. In addition, from each school, equal numbers of students in each grade were selected. Finally, subjects were selected randomly from each class based on their identification number.
1. Demographics questionnaire: This questionnaire contained 15 items on age, gender, smoking status (yes/no), housing status, scores at school, number of friends, pocket money, parents' jobs, parents' education levels, and life satisfaction (yes/no).
2. Disruptive Behavior Scale for Adolescents (DBSA): This questionnaire comprised four constructs derived from 29 items [19,20]. Response options were rated on a four-point scale ranging from 0 (never) to 3 (always). The four constructs, with sample items, include: Intentional Violations - "I deliberately break or damage school equipment;" Mistakes - "I make noise and disrupt the class;" Distraction/Transgression - "I don't turn up on time for school;" and Aggression to School Authorities - "I argue with school authorities." Higher scores indicate higher levels of disruptive behavior. The reliability of the instrument was confirmed using the Cronbach alpha coefficient (Intentional Violations = 0.82, Mistakes = 0.91, Distraction/Transgression = 0.77, Aggression to School Authorities = 0.86). Validity of this questionnaire was demonstrated through content and construct validity. The Content Validity Ratio and Content Validity Index were confirmed at 0.82 and 0.87, respectively. The model's fit was confirmed for all scales (goodness-of-fit index > 0.90) [21].
3. Perceived social support: This was assessed using the 12-item instrument (sample item, "Every time I've needed it, I've always found a certain person to be there for me") developed by Zimet et al. [22]. Response options range from 0 (very strongly disagree) to 6 (very strongly agree). Reliability of the Farsi version of the instrument has been found to be 0.84 for the scale [23]. In the present study, Cronbach's alpha for the scale was 0.84.
4. Perceived vulnerability: This measure is composed of two scales [24], perceived depression (4 items) and stress (3 items). Response options range from 0 (never) to 3 (always). In this study the scales showed good internal consistency, with a Cronbach's alpha of 0.89. The Cronbach's alpha coefficient in a previous study in Iran was 0.79 [25].
5. The Snyder Hope Scale: This scale includes 8 items (sample item, "I usually find myself worrying about something") rated from Definitely False (1) to Definitely True (8). This scale is valid for use in Iran, and reliability of the Farsi version of the instrument has been found to be 0.82 [26]. In our study reliability was confirmed through a Cronbach's alpha value of 0.78.
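As a small illustration of the reliability statistic reported for these scales, the following sketch computes Cronbach's alpha for a hypothetical respondents-by-items matrix; the simulated data are not the study data.

```python
# Minimal Cronbach's alpha sketch: alpha = k/(k-1) * (1 - sum(item variances) / variance(total)).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: array of shape (n_respondents, n_items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(1)
trait = rng.normal(size=(600, 1))                          # shared latent trait
responses = trait + rng.normal(scale=0.8, size=(600, 12))  # 12 correlated items
print(round(cronbach_alpha(responses), 2))
```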
Statistical analysis
Data were analyzed with the Statistical Package for the Social Sciences, version 15 (SPSS-15; IBM Corporation, United States). Before analysis, data were examined using histograms, the Kolmogorov-Smirnov test, and normality of residuals; all were normally distributed. Demographic data were subjected to simple descriptive analyses. One-way analysis of variance (ANOVA) and independent-sample t-tests were performed to examine significant differences between DBD mean scores by gender, education level, and so forth. Correlations were performed between continuous variables to determine associations with DBD (e.g., hopefulness and DBD). Multiple logistic regression was used to determine constructs that were significantly associated with DBD. In order to identify the effects of social support, hopefulness, perceived stress and depression, and demographic variables (e.g., education, gender), a multiple unconditional logistic regression analysis was conducted, with disruptive behavior as the dependent variable. In the multiple logistic regression model, only variables significantly associated with disruptive behavior in univariate analysis were included (e.g., gender, smoking, life satisfaction, social support, hopefulness, perceived depression and stress, scores at school and parent education). To conduct logistic regression, we coded scores less than the mean as 0, and scores at or above the mean as 1 [27]. Logistic regression is a widely used test to assess independent effects of a variable on binomial outcomes in the medical literature [28,29]. P-values less than or equal to 0.05 were considered significant.
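The dichotomize-at-the-mean coding and the multiple logistic regression described above can be sketched as follows; the simulated data, variable names, and effect sizes are illustrative only, not the study data (statsmodels is assumed to be available).

```python
# Sketch of the analysis strategy: dichotomize continuous scores at the mean,
# then fit a multiple logistic regression and report odds ratios.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 600
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "smoker": rng.binomial(1, 0.26, n),
    "social_support": rng.normal(50, 10, n),
    "stress": rng.normal(10, 3, n),
})
# Hypothetical outcome loosely tied to the predictors
lin = -2 + 0.9 * df["male"] + 1.2 * df["smoker"] - 0.03 * df["social_support"] + 0.15 * df["stress"]
df["disruptive"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))

# Code continuous scores: 0 = below the mean, 1 = at or above the mean
for col in ("social_support", "stress"):
    df[col] = (df[col] >= df[col].mean()).astype(int)

X = sm.add_constant(df[["male", "smoker", "social_support", "stress"]])
fit = sm.Logit(df["disruptive"], X).fit(disp=0)
print(np.exp(fit.params).round(2))    # odds ratios for each predictor
```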
Ethics
All participants were informed about study confidentiality. Informed consent was obtained from all the participants and/or their parents. The study was approved by the ethics committee of Saveh University of Medical Sciences.
Sample description
Participants consisted of 600 adolescents aged 15 to 18 years with a mean age of 16.7 ± 0.87 years, with equal numbers of males and females. Of the students, 16.7% were in the last year of high school (seniors), and 40% and 43.3% were in the first and second years of high school (freshmen and juniors), respectively. It should be noted that in Iran there are 3 grade levels (10th, 11th, 12th grades; or freshman, junior, senior, respectively). Regarding housing status, 91.8% of students were living with both parents, 5.2% with one parent, and the rest with a grandfather, grandmother, or others. More than half of the students (58%) reported feeling "life satisfaction" in the past 12 months. The prevalence of smoking experience was 26% (Table 1).
The prevalence of disruptive behavior was 7.5% in boys and 3.1% in girls; the average disruptive behavior score for all participants was 21.17 ± 1.94. This score was 22.97 ± 1.17 for boys and 19.15 ± 1.06 for girls, with a significant difference between them (P < 0.05). Means and standard deviations of the subscales of disruptive behavior, including Intentional violations, Distraction/transgression, Mistakes, and Aggression to school authorities, were 8.5 ± 8.1, 4.6 ± 4.2, 5.2 ± 5.5 and 3.0 ± 4.1, respectively. Significant differences were not found between the scores of boys and girls on the constructs (subscales) of disruptive behavior, except for the intentional violations construct. The mean score of disruptive behavior was significantly higher for smokers than non-smokers, and independent-sample t-tests showed that there were significant differences between non-smokers and smokers in all constructs of disruptive behavior. The disruptive behavior mean score was significantly higher for youth with less life satisfaction than for those with more life satisfaction; similarly, the disruptive behavior score was significantly higher for students with lower school scores than for students with higher school scores. Finally, the mean disruptive behavior score differed significantly by mother (and separately by father) education using analysis of variance. Disruptive behavior scores were not associated with living situation, number of friends, pocket money or parent employment (Table 1). Univariate tests indicated a significant relationship between each of the following constructs and DBD using correlations (P ≤ .05): social support, hopefulness, stress and depression.
Only variables significantly associated with disruptive behavior (P < 0.05), including gender, parent education, school scores, smoking status, life satisfaction, social support, hopefulness, perceived stress and perceived depression, were entered into further analysis. In the multiple logistic regression analysis, results of the Hosmer and Lemeshow test showed acceptable goodness of fit of the model (P > 0.05). Results of the multiple unconditional forward logistic regression analysis revealed the constructs significantly associated with disruptive behavior (Table 2).
Discussion
This study aimed to determine the prevalence of disruptive behavior among a sample of Iranian youth and the relationship of disruptive behavior to other psychological phenomena. Identifying factors associated with disruptive behavior in classrooms can be helpful in improving community health [3]. According to results of this study, significant gender differences in disruptive behavior among Iranian adolescents were revealed, which is consistent with previous research in other countries on adolescent disruptive behavior [30,31]. This result may be due to the relatively higher levels of parental monitoring of girls as compared to boys in Iranian culture. This finding may also be related to relatively higher testosterone levels found in male as compared to female adolescents, as testosterone has been linked to aggression [32].
Similar to previous studies, results of this study demonstrate that life satisfaction is negatively related with adolescent problem behaviour [33][34][35], and that perceived stress and depression levels are positively associated with disruptive behavior. For example, Estevez et al. showed that aggressive behavior in adolescence has been significantly related to high levels of perceived stress, depressive symptoms and low life satisfaction [36]. In a study by Musitu et al. perceived stress was significantly associated with student aggression [37]. Another study by Desousa et al. showed that life satisfaction was negatively related with adolescent problem behavior [35]. In addition, McKnight et al. demonstrated that life satisfaction mediated the association between stressful life events and adolescent problem behaviour [38]. In another study, Suldo and Huebner found that life satisfaction had a mediating effect between adolescent problem behavior and parental involvement [39].
In our study, there were significant differences between smokers and non-smokers in disruptive behavior, with smokers having higher mean scores. Results of logistic regression analysis indicated smoking was significantly associated with disruptive behavior. Similar results have been reported in previous research [34,40]. For instance, in the study of Upadhyaya et al., high rates of disruptive behavior disorders were found in adolescent smokers [40].
Social support has been related to positive mental health outcomes in many populations, including adolescents with disruptive behavior. Social support provided by important others affects an individual's actual and perceived behavioral control [41]. Consistent with other research, our study indicated that increased perceived social support decreased the likelihood of disruptive behavior. Similarly, Forouzan et al. found that social support promotes healthy behaviors in an individual's life [42], including prosocial behaviors that are inconsistent with disruptive behaviors. Results of our study indicated that youth hopefulness is also associated with disruptive behavior. Hope has been found to be an important factor in good behavioral and mental health [26]. Adolescents with high levels of hope evidence better general health maintenance, problem solving, and mental health [43].
To summarize, results of univariate tests demonstrated that the following factors were associated with higher levels of disruptive behavior: being male; smoking; less life satisfaction, hope and social support; and higher stress and depression. This is consistent with prior research outside of Iran, and it is important to demonstrate similar associations within Iran so that existing interventions might be adapted to Iranian culture. Of note, when these factors were entered into multiple logistic regression, parent education and grades were no longer significant. Results of logistic regression indicate that life dissatisfaction, smoking and depressive symptoms were among the constructs most highly associated with disruptive behaviors.
Given the design of the study, we cannot say whether disruptive behavior causes these associated problems (e.g., depression, less social support), whether these problems cause disruptive behavior, or whether some third factor causes a cluster of poor behaviors (e.g., poor parental monitoring contributes to later smoking and disruptive behavior). However, this study suggests that interventions to improve disruptive behavior in youth may also benefit by first targeting and improving life satisfaction, reducing smoking as appropriate, and treating depressive symptoms. More longitudinal work is needed to establish causal effects among these constructs.
Limitations
There are several limitations of the current study. Participants were recruited from high schools. Thus, findings may not extend to the general adolescent population, or to youth with severe disruptive behavior who may not attend school. On the other hand, it may behoove researchers and clinicians to study disruptive behavior in youth not yet severely disordered, and in settings like schools where problem behavior can have consequences for an entire class. Second, results rely on self-report, so youth may under- or over-report behaviors, although we believe this is somewhat mitigated by assurances of anonymity. Third, although number of friends was not associated with disruptive behavior, it may be that type of friend (i.e., delinquent vs prosocial friend) is. Fourth, living situation was also not found to be associated with disruptive behaviors, but it may be that there was not enough variability in the sample (e.g., over 90% lived with both parents). Finally, data were cross-sectional; therefore, as stated above, causal associations cannot be inferred.
Conclusions
The prevalence of disruptive behavior in high school students was comparable to rates found in prior studies, and social support, hopefulness, stress, depression, gender, smoking and life satisfaction were significantly associated with disruptive behavior. Results may be of interest to the Ministry of Health, and the Ministry of Education and Training, in terms of demonstrating the prevalence of disruptive behaviors in boys and girls, and identifying and adapting interventions that address disruptive behaviors and associated constructs (i.e., smoking, depression, life satisfaction). Health promotion programs might be of importance for prevention and treatment of disruptive behavior. Longitudinal studies are recommended to better understand causal relations among disruptive behavior and different psychosocial variables in adolescents.
Abbreviations CVI: Content validity index; CVR: Content validity ratio
Acknowledgments
We gratefully acknowledge the participants who devoted their time to the research. The Authors are grateful to the Vice Chancellor for research, and Saveh University of Medical Sciences for their assistance with study implementation and re-analyzing data from an earlier research. In addition, the Authors would like to thank the Saveh education office for helping us with some parts of data collection and study implementation. Special thanks are extended to respected reviewers for providing us with their valuable and constructive comment.
|
v3-fos-license
|
2019-09-15T03:08:24.703Z
|
2019-06-05T00:00:00.000
|
241161736
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://bmcinfectdis.biomedcentral.com/track/pdf/10.1186/s12879-021-06424-w",
"pdf_hash": "704cc8630a7ba381392c875d71fc57d41697bbf3",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42410",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "826a0812bb06ee1b638fdd3a419a7ba3271207b3",
"year": 2019
}
|
pes2o/s2orc
|
A clinical scoring system for the diagnosis of pediatric hand-foot-mouth disease
Background
The aim of the present study was to develop a clinical scoring system for the diagnosis of hand-foot-mouth disease (HFMD) with improved accuracy.
Methods
A retrospective analysis was performed on standardized patient-history and clinical-examination data obtained from 1435 pediatric patients under the age of three years who presented with acute rash illness and underwent enterovirus nucleic-acid-detection testing. Patients were then divided into the HFMD (1094 patients) group or non-HFMD (341 patients) group based on a positive or a negative result from the assay, respectively. Multivariate logistic regression was performed on 15 clinical variables (e.g. age, exposure history, number of rash spots in a single body region) to identify variables highly predictive of a positive diagnosis. Using the variables with high impact on the diagnostic accuracy, we generated a scoring system for predicting HFMD.
Results
Using the logistic model, we identified seven clinical variables (age, exposure history, and rash density at specific regions of the body) to be included in the scoring system. The final scores ranged from −4 to 23 (a higher score predicted an HFMD diagnosis), and a cutoff score of 7 resulted in a sensitivity of 0.74 and a specificity of 0.69.
Conclusions
This study establishes an objective scoring system for the diagnosis of typical and atypical HFMD using measures accessible through routine clinical encounters. Due to the accuracy and sensitivity achieved by this scoring system, it can be employed as a rapid, low-cost method for establishing diagnoses in children with acute rash illness.
However, health care settings lacking this resource must continue to rely on clinical markers of the disease. In this retrospective study of more than 1400 children with acute rash illness, we analyzed multiple clinical variables to devise a scoring system that relies on elements that can be obtained during a routine patient encounter. To date, no studies have systematically investigated or identified clinical variables predictive of HFMD.
Methods
We performed a retrospective analysis of patients who presented with acute rash illness to the Department of Infectious Diseases at the Capital Institute of Pediatrics Affiliated Children's Hospital between January 2013 and December 2017. Prior to this period, clinicians were trained to complete an acute-rash-illness observation form, which collected information including patient age, gender, date of illness onset, exposure history, fever course, and rash distribution and density. For rash quantification, the number of ulcers/sores in the oral cavity was rated as few (1-3 spots) or many (≥4), while the degree of rash was rated as low for 1-5 spots per body part and high for more than five spots per body part.
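The rating rules above map directly onto simple categorical encodings; the short sketch below is only an illustration, and the function names are ours rather than the study's.

```python
# Encode the stated rash-quantification rules: oral ulcers "few" = 1-3, "many" >= 4;
# skin rash per body part "low" = 1-5 spots, "high" = more than 5.
def oral_ulcer_category(count: int) -> str:
    if count <= 0:
        return "none"
    return "few" if count <= 3 else "many"

def rash_density_category(count: int) -> str:
    if count <= 0:
        return "none"
    return "low" if count <= 5 else "high"

print(oral_ulcer_category(2), rash_density_category(8))   # -> few high
```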
The inclusion criteria were as follows: (1) manifestation of an acute rash, (2) onset of illness of less than three days, (3) age of three years or less, and (4) positive for enterovirus throat-swab nucleic-acid-detection test. Patients were excluded if they had a definitive diagnosis of measles, rubella or chickenpox.
Definitive diagnoses in all cases were established using enterovirus nucleic-acid detection testing performed via throat swabs. Total RNA was extracted from all specimens and the ABI7500 real-time fluorescence quantitative PCR system was then used for enterovirus nucleic-acid detection.
Data analysis was performed using the SAS 9.4 software package (Windows, SAS Institute, Cary, North Carolina). Continuous variables, distributed normally, are expressed as mean ± standard deviation. Comparisons across groups were made using the independent t-test.
Variables across categories were compared with the Chi-square test. Multivariate logistic regression analysis of clinical variables associated with HFMD was performed using stepwise regression to identify explanatory variables. Diagnostic HFMD scores were constructed using the Framingham study multi-factor model [13]. In this study, the β value was divided by a constant B = 0.262 to obtain an integer value. The performance of the scoring system was assessed by calculating the area under the receiver operating characteristic curve as follows: 0.5-0.7 represented low diagnostic value, 0.7-0.9 represented intermediate diagnostic value, and >0.9 represented high diagnostic value.
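The conversion of regression coefficients into integer risk points (dividing each β by B = 0.262) and the ROC-based evaluation can be sketched as follows; the predictors, coefficients, and labels are simulated for illustration and are not the published model.

```python
# Sketch: integer points from logistic coefficients, patient scores, and ROC metrics.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(3)
n = 1435
X = rng.integers(0, 2, size=(n, 3))           # three hypothetical binary predictors
betas = np.array([1.05, 0.78, -0.52])         # hypothetical fitted coefficients

points = np.rint(betas / 0.262).astype(int)   # integer points per predictor
score = X @ points                            # total score per patient

# Hypothetical true labels correlated with the linear predictor
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ betas - 0.4))))

auc = roc_auc_score(y, score)
fpr, tpr, thr = roc_curve(y, score)
best = np.argmax(tpr - fpr)                   # Youden index picks an optimal cutoff
print(f"AUC = {auc:.3f}, cutoff = {thr[best]}, sens = {tpr[best]:.2f}, spec = {1 - fpr[best]:.2f}")
```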
Statistical significance was defined as P < 0.05. Our study protocol was reviewed and approved by the Capital Institute of Pediatrics Ethics Committee (SHERLL2019012).
Results
A total of 1435 (823 males) patients were included in this study, where 1094 patients tested positive (HFMD group) for enterovirus RNA while 341 patients (non-HFMD group) tested negative (Table 1). While no difference in gender composition was found between the two groups, HFMD patients were older and had longer illness duration when compared with non-HFMD patients (Table 1).
A subset of children in both groups (442 in the HFMD group and 39 in the non-HFMD group) endorsed a history of close contact with patients with HFMD or herpangina. The proportion of patients with clear exposure history was higher in the HFMD group than in the non-HFMD group (Table 1).
Since HFMD rashes are often concentrated in specific locations of the body, we quantified the rash severity by dividing the body into discrete regions. The oral cavity was divided into the hard palate, soft palate, tongue, buccal mucosa, lip mucosa and gums. The remainder of body regions were divided into the face, chest, back, buttocks, upper limbs, lower limbs, palms, back of the hands, fingers, feet, dorsum of the feet, plantar surface of the feet, and toes. In each patient, the number of rash spots/ulcers/sores were counted in each body region. We observed significant differences in rash densities in the upper jaw, soft palate, tongue, buccal mucosa, gums, chest, back, buttocks and toes (Table 2).
We also analyzed additional clinical information such as fever severity, length of fever, fever-to-rash interval, presence of cough, gastrointestinal symptoms, WBC count, and neutrophil percentage. Between-group differences were found for fever frequency, WBC count and neutrophil percentage (Table 3).
Multivariate logistic regression was performed using 15 clinical variables. A total of seven statistically significant clinical variables were identified and subsequently included in the scoring model. These included the following: (1) age, (2) exposure history, the number of ulcers on the (3) hard palate, (4) soft palate, and (5) buccal mucosa, and cutaneous rash distributed on the (6) back and (7) buttocks (Table 4). To test the predictive accuracy of this scoring system, we applied the model to data from all patients included in this study.
The median score of the HFMD group was 10 (6,13). The median score of the non-HFMD group was 4 (2, 7), which is significantly lower than that in the HFMD group (independent sample t test P < 0.001). The final scores ranged from −4 to 23 points with predictive accuracies of 0.15 to 0.99. The area under the ROC curve was 0.790 (95% CI: 0.764-0.817) with a sensitivity of 0.74 and a specificity of 0.69 (Fig. 1). We found the optimal cut-off point to be seven; hence, a score of seven or greater suggested a positive HFMD diagnosis, while a score of less than seven could be diagnosed as non-HFMD.
Discussion
Accumulating evidence implicates enteroviruses as the most common pathogens associated with acute rash illness in children under three years of age [7]; these infections often manifest as HFMD, affecting the mouth, hands, feet, and buttocks. With increased accuracy and availability of sophisticated laboratory testing, recent studies have found that the distribution of rashes in atypical HFMD differs significantly from that of classic HFMD [8], leading to increased difficulty in making a clinical diagnosis. While definitive diagnosis requires the detection of enterovirus nucleic acid from throat swabs [9], availability of such technology may be limited in many healthcare settings. In the present study, we analyzed clinical data collected from patients suffering from acute rash illness with confirmatory viral assays to establish an objective, accessible, and sensitive diagnostic scoring system for the rapid identification of HFMD in children under three years of age.
All patients included in this study were children presenting with acute rashes of less than three days in duration. By comparing a large set of clinical data obtained from patient history, physical examination, and routine laboratory tests, we determined the strength of each variable in affecting the accuracy of the final diagnosis. This study demonstrated that older age is predictive of an increased likelihood of HFMD diagnosis, consistent with the established age distribution of the disease [14]. Additionally, the large impact of positive exposure history on diagnostic accuracy supports existing epidemiological findings [15]. Our detailed characterization of rash distribution and density is in agreement with one of the defining features of HFMD, where ulcer/sores of the oral cavity (hard palate, soft palate, and buccal mucosa) have high sensitivity in predicting the illness.
In typical HFMD, rash spots are often present on the hands and feet, leading to diagnoses being made without using a complex diagnostic scoring system. However, in children presenting with atypical rash distributions, this study revealed that examination of the rash severity on the back and the buttocks, regions that may often be overlooked during a clinical encounter, can be critical. We found that rash on the buttocks is more common in children with HFMD, while the presence of rash on the back reduces the likelihood of HFMD. For these reasons, clinicians should routinely perform a thorough skin examination in children with acute rash illness to achieve the greatest diagnostic accuracy.
To date, no uniform guidelines have been devised in quantifying rash severity and distribution. Based on the observations from this study, the atypical HFMD rash was qualitatively less fused and flakier, improving discriminability of individual rash spots.
Nonetheless, the continued development of objective rash classification is subject to ongoing and future research efforts. For the quantitative assessment of rash covering multiple regions across the body, one method may involve the estimation of the percentage of body surface occupied.
Clinical scoring systems are designed as a tool to help clinicians make rapid and accurate clinical diagnoses. Our study identified seven clinical variables that impact the accuracy of diagnostic prediction. We defined a score of seven or greater as being suggestive of a clinical diagnosis of HFMD. The diagnostic accuracy of the scoring system was 73% with a sensitivity of 0.74 and a specificity of 0.69, consistent with that of moderate diagnostic performance. All clinical variables of this scoring system may be obtained from clinical history and physical examination without the need for specialized equipment or examination. The scoring scheme is easy to remember and may be utilized across a spectrum of clinical settings. Since the scoring system requires only a rash count, it is cost-effective and can be employed by clinicians in hospitals with limited diagnostic resources.
One limitation of this study is that the applicability of the scoring system has not been validated in a separate cohort or at other institutions. Future multi-center prospective studies may confirm or improve the accuracy of our scoring system. Overall, our scoring system was designed to assist the efficient and accurate diagnoses of acute rash illnesses with the goal of early identification, treatment, and triage of HFMD patients to reduce childhood morbidity and disease transmission.
Conclusions
In this large retrospective analysis of children with acute rash illness, we identified seven clinical variables with significant impacts on the accuracy of HFMD diagnosis. Due to the systematic and detailed collection of the physical examination data, this study not only confirms existing diagnostic criteria but also emphasizes the importance of examining body regions often ignored during a routine clinical encounter. While future research should focus on validation of this scoring system, its improved diagnostic accuracy is not only limited to typical HFMD but can also extend to atypical presentations of HFMD.
Consent for publication
Not applicable.
Availability of data and material
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
HH contributed to the study design, carried out the statistical analysis and drafted the initial manuscript. LD conceptualized and designed the study, coordinated and supervised data collection, and assisted with the writing of the manuscript. LJ and RZ contributed to the conceptualization and design of the study, collected samples and completed the examination of them, and reviewed and revised the manuscript. All authors approved the final manuscript as submitted and agree to be accountable for all aspects of the work.
Abbreviations
HFMD: hand-foot-mouth disease
Figure 1 Area under the receiver operating characteristic curve for the scoring system
|
v3-fos-license
|
2020-10-29T09:03:51.021Z
|
2020-11-01T00:00:00.000
|
226344609
|
{
"extfieldsofstudy": [
"Materials Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1088/2053-1591/abc442",
"pdf_hash": "ba8606555d5a5eb822a51a4bfee1d03d22b89403",
"pdf_src": "IOP",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42411",
"s2fieldsofstudy": [
"Engineering"
],
"sha1": "8d8e34f3349564f041cb3e4248b9fa1273166c29",
"year": 2020
}
|
pes2o/s2orc
|
Biomass porous carbon-based composite for high performance supercapacitor
Biomass porous carbons, with abundant pore structure, surface functional groups, and excellent electrical conductivity, are widely used as electrode materials. In this work, biomass carbon with a 3D mesh structure was loaded with a metal sulfide to synthesize a porous carbon-based composite supercapacitor electrode with an ultrathin layered Ni3S2 nanosheet structure. C/Ni3S2-16 showed a high specific capacitance of 600 F g−1 at a current density of 20 A g−1. After 5,000 charge and discharge cycles at 10 A g−1, 88.8% of the maximum (initial) specific capacitance was maintained. The assembled C/Ni3S2-16//C asymmetric supercapacitor achieved an energy density of up to 35.7 Wh kg−1, which remained at 27.9 Wh kg−1 at a high power density of 1500 W kg−1. The developed pore structure of the biomass porous carbon provides sufficient space for electrolyte transport into and out of the electrode, so the potentially high electrochemical activity and energy density of the transition metal sulfide were fully exhibited.
Introduction
Carbon materials are the most widely used electrode materials in the supercapacitor field, offering good conductivity, abundant pore structure, good corrosion resistance, a low thermal expansion coefficient and low density. Nonetheless, the problems of low capacitance and low energy limit their further development [1]. Therefore, pseudo-capacitive electrode materials are introduced into carbon materials; the two components complement each other synergistically, yielding carbon-based pseudo-capacitive composites with high energy density, high conductivity, good rate performance and cycle stability [2][3][4]. Porous carbon obtained from biomass has become a promising electrode material owing to its low cost and multistage pore network structure, and compositing it with pseudo-capacitive materials to improve electrochemical performance has become a research hotspot [5].
Many studies [6][7][8][9] have shown that Ni3S2 is a layered sulfide with excellent properties such as large theoretical capacity, good rate performance and good conductivity, and it can be applied in electrochemical energy storage, photocatalysis, and other areas. Dai et al used carbon nanotubes to composite granular Ni3S2; the prepared supercapacitor obtained high capacitance, but the rate performance was poor [10]. Chou et al prepared a foamed nickel-based sheet-like Ni3S2 electrode material by electrodeposition and obtained a specific capacitance of 717 F g−1 at 2 A g−1; it also showed excellent cycle performance at a large current density (4 A g−1) [11]. To achieve higher specific capacitance, rate performance, cycle stability and other electrochemical properties, current research on Ni3S2 focuses mainly on increasing the active surface of the nanomaterial by controlling its morphology.
In this paper, the biomass porous carbon (C) was prepared from rice husk charcoal, a byproduct of power generation by gasification. Industrial composition analysis shows that the gasified rice husk carbon contains 49.3% fixed carbon and 46.3% ash content (silicon dioxide). The porous carbon was prepared from gasified rice husk charcoal without a separate carbonization process, realizing secondary utilization of the rice husk after gasification power generation and offering environmental and economic benefits. Based on the intrinsic pore structure of the biomass carbon, carbon-silicon separation yielded a primary pore-rich carbon material, and porous carbon with a multiscale pore structure was then prepared by a physicochemical activation method. The three-dimensional network structure of the porous carbon had a good dispersion effect on the metal sulfide. The electrode (C/Ni3S2) was fabricated directly on the current collector by a hydrothermal method. An asymmetric supercapacitor was then assembled from the composite and the biomass porous carbon, which has the advantages of being environmentally friendly, efficient and low cost. The coupling effect between the active surfaces of the carbon and the metal sulfide enhanced the electrochemical performance of the porous carbon-based composite electrode and the assembled supercapacitor.
Methods
The biomass charcoal was crushed, sifted through a 200-mesh sieve, and boiled in 30 wt% KOH solution at 110 °C for 1 h. The carbon was then quickly separated by suction filtration and calcined in a CO2 atmosphere at 850 °C for 1 h for physicochemical activation. The activated product was washed with distilled water, 0.5 mol l−1 hydrochloric acid, and boiling distilled water until neutral, and vacuum dried at 100 °C for 8 h to obtain the porous carbon, recorded as C.
3 mmol Ni(NO3)2·6H2O and 3 mmol NH2CSNH2 were dissolved in 35 ml of DI water and magnetically stirred. Porous carbon (60 mg) was added and the mixture was stirred again. The resulting homogeneous solution was then transferred into a PTFE reactor. Then, a 1 cm × 2 cm piece of nickel foam (NF), pretreated with acetone and hydrochloric acid, was placed in the reaction mixture. The reactor was sealed and held at 120 °C for a set time. After cooling to room temperature, the NF was removed, rinsed three times alternately with deionized water and ethanol, and vacuum dried at 80 °C for 10 h. According to the hydrothermal reaction times of 9, 12 and 16 h, the three C/Ni3S2 composites prepared were recorded as C/Ni3S2-9, C/Ni3S2-12 and C/Ni3S2-16. The preparation of the porous carbon and the synthesis of the C/Ni3S2 composite material are shown in figures 1 and 2.
Morphology and structure measurements
Structural identification of C/Ni3S2 was performed with an x-ray diffractometer (XRD; D/Max-2200/PC, Rigaku) and a Raman spectrometer (DXR532, Thermo Scientific). The morphologies of the products were examined by SEM (JSM-7600F, JEOL) and TEM (JEM-2100F, JEOL). The surface chemical states were investigated by XPS (Escalab250xi, Thermo Scientific), and the curves were fitted with Gaussian and Lorentzian functions for analysis.
Electrochemical measurements
Cyclic voltammetry (CV), galvanostatic charging/discharging (GCD), and electrochemical impedance spectroscopy (EIS) were performed on a CHI 760E electrochemical workstation with 6.0 mol l−1 KOH as the electrolyte. The NF-C/Ni3S2 obtained through the one-step hydrothermal reaction served as the working electrode, with a Pt plate and Ag/AgCl as the counter and reference electrodes, respectively.
The three-electrode test mainly evaluates the performance of a single working electrode material, while the two-electrode test evaluates the performance of the entire supercapacitor. The C/Ni3S2 composite was used as the positive electrode and the biomass porous carbon as the negative electrode to form the asymmetric supercapacitor for the two-electrode test.
Structure Analysis of C/Ni3S2
Figure 3 shows the x-ray powder diffraction pattern of the C/Ni3S2 composite recorded at a scanning rate of 5° min−1 over a 2θ range of 10°-80°. The peak near 2θ = 26° is ascribed to the (002) diffraction of graphitic carbon. The (002) peak clearly reveals that graphitized carbon exists in the porous carbon-based composite; the low intensity of this peak may be due to the low content or poor crystallinity of the porous carbon in the composite, while the diffraction peaks of the other active substances and the substrate are much stronger. The weak peak at 2θ = 51.7° can be attributed to the standard cubic phase of residual Ni, in good agreement with JCPDS No. 65-2865 [12]. Since the carbon diffraction peaks in the XRD pattern were relatively weak, the composites synthesized at different reaction times were further analyzed by Raman spectroscopy, as exhibited in figure 4. The three products all show two characteristic peaks at 1341 cm−1 and 1592 cm−1, corresponding to the D and G peaks of carbonaceous materials. The D peak reflects the defect and disorder degree of the porous carbon material, and the G peak represents graphite-structured carbon arising from the vibration of sp2 carbon [14]. The intensity ratio IG/ID of the G and D peaks represents the degree of graphitization of carbon materials; the higher the ratio, the better the graphitization degree and conductivity, which is beneficial to electrode performance [15]. The IG/ID ratios of the composites produced at the three reaction times are similar, all around 1.04, indicating the same graphite-like structure.
The morphology of the products was examined by scanning electron microscopy. In figure 5(a), the porous carbon shows a nano-sheet ribbon morphology at the 200 nm scale, which helps to improve electrode performance [16]. Figures 5(b)-(d) show SEM images of the C/Ni3S2 composite for reaction times of 9 h, 12 h, and 16 h at a reaction temperature of 120 °C. After 9 h of reaction, relatively uniform Ni3S2 nanoparticles had formed and were well dispersed on the surface of the porous carbon, as shown in figure 5(b). When the reaction time was extended to 12 h, the size of the Ni3S2 nanoparticles gradually increased, as shown in figure 5(c), consistent with the Ostwald ripening mechanism [17]. When the reaction time was extended to 16 h, the product was composed of ultra-thin nanosheets; a large number of nanosheets were interconnected, forming a criss-crossed, feather-like microstructure, as can be seen in figure 5(d).
As shown, there are abundant gaps between the nanosheets, which facilitates the movement of electrolyte ions into and out of the active material to form the double-layer capacitance [17]. In addition, the electrolyte and the composite material are in better contact, so that the redox reaction occurs and pseudocapacitance can be produced.
In order to further determine the phase and specific composition of the C/Ni3S2-16 with ultra-thin nanosheet morphology, the sample was characterized by SEM, EDS, TEM and XPS. Figure 6(a) shows the three-dimensional multi-stage pore interconnection network structure of the biomass porous carbon at a scale of 2 μm. On one hand, the rich pore structure is conducive to the rapid diffusion and adsorption of electrolyte ions, forming a stable double-layer capacitance; on the other hand, the skeleton of the porous carbon is beneficial for dispersing the loaded metal sulfide well, promoting the Faradaic redox reaction to generate pseudo-capacitance, and thus enhancing the specific capacitance and the energy density [18]. Figures 6(b)-(c) show the SEM morphology of the C/Ni3S2-16 composite under different magnifications. It can be seen that Ni3S2 was loaded rather uniformly in the porous carbon, the interconnected nanosheets were well dispersed, and there were abundant pore structures between the sheets. Figures 6(d)-(f) show the SEM-based EDS element mapping of the C/Ni3S2-16 composite material, revealing that the composite is mainly composed of the C element, while small amounts of S and Ni are uniformly dispersed in the porous carbon.
The C/Ni3S2-16 composite material was analyzed by TEM, and the results are shown in figure 7. As seen from figure 7(a), the C/Ni3S2 nanosheets are very thin and partially transparent, and the wrinkles or corrugations of the nanosheets exhibit a sheet structure. From the HRTEM image in figure 7(b), it can be clearly seen that the crystal structure of C/Ni3S2 is complete, with lattice spacings of 0.41 nm and 0.28 nm corresponding to the (101) and (110) crystal planes of the Ni3S2 crystal, respectively, indicating the formation of Ni3S2 [19]. The C 1s spectrum of the composite is shown in figure 8(b). There is a sharp peak at 284.7 eV, which can be ascribed to the characteristics of the sp2 graphite lattice (C-C/C=C), and its relative peak area accounts for 48.3% of the carbon-related groups in the composite. The elemental composition and carbon-related groups of the composite obtained by XPS analysis are shown in table 1. Furthermore, the other peaks from 286 eV to 289 eV are attributed to C-O, C=O, and O-C=O bonds, confirming the existence of porous carbon in the composite material [20]. Figure 8(c) shows the Ni 2p spectrum of the C/Ni3S2-16 composite material, which is divided into Ni 2p3/2 and Ni 2p1/2 characteristic peaks at 856.1 eV and 873.5 eV, respectively, as well as two satellite peaks [21]. Figure 8(d) is the S 2p spectrum of the C/Ni3S2-16 composite material. The S 2p energy spectrum can be divided into two peaks: the peak at 163.8 eV corresponds to a typical metal-sulfur bond, and the peak at 162.6 eV can be ascribed to sulfur ions with lower surface coordination [22]. These results indicate that Ni3S2 was successfully loaded on the porous carbon.
Electrochemical properties 3.2.1. C/Ni 3 S 2 nanocomposite electrode
CV and GCD of the C/Ni3S2 composites synthesized at different reaction times were tested in the three-electrode configuration in 6.0 mol l−1 KOH alkaline electrolyte. Figure 9(a) shows the CV curves of the three samples C/Ni3S2-9, C/Ni3S2-12 and C/Ni3S2-16 at a sweep speed of 80 mV s−1 over the test voltage range of −0.1∼0.7 V. The CV curves are clearly not rectangular, indicating obvious pseudocapacitive behavior [23]. The performance of the composites relies not only on the electric double layer but also on the redox process. The C/Ni3S2-9 sample has the smallest integrated CV area and a relatively low specific capacitance, while the C/Ni3S2-16 sample has the largest integrated area and symmetrical oxidation and reduction peaks. This shows that the redox process between the composite and the electrolyte is reversible, and a higher specific capacitance was obtained. This pair of redox peaks of C/Ni3S2-16 can be ascribed to the reversible reaction Ni(II) ↔ Ni(III) [24] in KOH electrolyte. At a current density of 3 A g−1, the charge-discharge curves (0∼0.5 V) of the heterostructure products C/Ni3S2-9, C/Ni3S2-12 and C/Ni3S2-16 can be seen in figure 9(b). The three curves display obvious bending, which indicates pseudocapacitive behavior, consistent with the analysis of the cyclic voltammetry curves. The specific capacitance values corresponding to the three GCD curves were 436, 492 and 972 F g−1, respectively. The specific capacitances of the three samples vary greatly at large current density, indicating that the electrochemical performance of the nanosheet composite is better than that of the nanoparticles, and that the nanosheet composite is more suitable as a supercapacitor electrode material [16].
To better appraise the electrochemical properties of the biomass porous carbon-based composite C/Ni3S2-16, CV curves at different scan rates and GCD curves at different current densities were measured in 6 mol l−1 KOH, as shown in figures 10(a) and (b).
Well-defined redox peaks attributed to the Ni2+ ↔ Ni3+ reaction are observed. Moreover, the CV curve shape is well maintained at 80 mV s−1, indicating that the C/Ni3S2-16 electrode possesses good reversibility and stability. Figure 10(b) shows the GCD curves of the C/Ni3S2-16 electrode at different current densities over the potential window of 0∼0.5 V. The specific capacitances were calculated as 974 F g−1, 972 F g−1, 864 F g−1, 724 F g−1, 663 F g−1, and 600 F g−1, corresponding to current densities of 1.5 A g−1, 3 A g−1, 6 A g−1, 10 A g−1, 15 A g−1, and 20 A g−1, respectively. The specific capacitance decreases with increasing current density. At a current density of 1 A g−1, the specific capacitance of C/Ni3S2-16 is 1080 F g−1, which is higher than that of the porous carbon electrode (143 F g−1) [25] and the porous carbon-based composite C/SnO2 electrode (228 F g−1) [26]. Compared with Ni3S2 nanoparticles (911 F g−1 at 0.5 A g−1) [27], the C/Ni3S2-16 electrode also has a higher specific capacitance and good development prospects.
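For reference, gravimetric capacitances such as those quoted above are conventionally derived from the galvanostatic discharge curve as C = IΔt/(mΔV). The snippet below is only an illustrative calculation; the discharge time and mass loading are assumed values chosen to reproduce one of the reported capacitances, not measured data.

```python
def specific_capacitance(current_a, discharge_time_s, mass_g, delta_v):
    """Gravimetric capacitance (F/g) from a galvanostatic discharge: C = I*t / (m*dV)."""
    return current_a * discharge_time_s / (mass_g * delta_v)

# Illustrative numbers only: 3 A/g on an assumed 1.5 mg of active material = 4.5 mA,
# discharged over 0.5 V in an assumed 162 s.
i = 3.0 * 1.5e-3
print(specific_capacitance(i, 162, 1.5e-3, 0.5))   # ~972 F/g, matching the value quoted above
```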
The rate capability of C/Ni3S2-16 is shown in figure 10(c). Up to 80% of the initial capacitance is retained when the current density increases from 1 A g−1 to 6 A g−1, revealing that the C/Ni3S2-16 electrode has good rate performance. As shown in figure 10(d), the cyclic stability of C/Ni3S2-16 was investigated by 5000 charge-discharge cycles at 10 A g−1. Remarkably, the specific capacitance of the C/Ni3S2-16 electrode gradually increases during the initial cycles, reaching 737.6 F g−1 after 300 cycles, which is related to the full activation of the electrode. After extended cycling to 5000 cycles, the capacitance still maintains 88.8% of the highest value, showing superior cycle stability. EIS measurements were also used to investigate the electrochemical properties of C/Ni3S2-16, as shown in figure 11. The frequency range of the AC impedance tests for the C/Ni3S2-16 electrode was from 0.1 Hz to 100 kHz, and the direct current (DC) bias voltage was 0. In the high-frequency region, the equivalent series resistance is 0.63 Ω. The diameter of the arc corresponds to the Faradaic charge-transfer resistance, and its value is extremely low, about 0.05 Ω. The straight line with a slope greater than 45° at low frequencies indicates that adsorption/desorption at the electrode surface is very rapid owing to outstanding ion transport and electron conduction [28].
Asymmetric supercapacitors
In order to further evaluate the electrochemical properties of C/Ni3S2-16, a two-electrode system was tested. Because the voltage windows of the C and C/Ni3S2-16 electrodes are −1∼0 V and 0∼0.5 V, respectively, the working voltage of the assembled C/Ni3S2-16//C asymmetric supercapacitor in 6 mol l−1 KOH solution can reach 1.5 V.
According to the balance of positive and negative charges (q+ = q−), the masses of the positive and negative electrodes were matched as follows: m+/m− = (C− × ΔV−)/(C+ × ΔV+), where m (g) is the mass of the electrode active material, C (F g−1) is the specific capacitance, and ΔV (V) is the voltage window.
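A quick numerical check of this charge-balance relation, using the single-electrode capacitances reported in this work (about 1080 F g−1 for C/Ni3S2-16 over 0.5 V and 143 F g−1 for the porous carbon over 1.0 V), gives a ratio close to the optimal value of 0.25 quoted in the next paragraph; a sketch:

```python
# Charge balance q+ = q-  =>  m+/m- = (C- * dV-) / (C+ * dV+)
def mass_ratio(c_pos, dv_pos, c_neg, dv_neg):
    return (c_neg * dv_neg) / (c_pos * dv_pos)

# Single-electrode values reported in this work.
print(round(mass_ratio(1080, 0.5, 143, 1.0), 2))   # ~0.26, consistent with the optimal ratio of 0.25
```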
The optimal mass ratio between C/Ni3S2-16 and porous carbon is calculated to be 0.25. In order to evaluate the exact mass of the C/Ni3S2-16 active material, the residual Ni-foam framework remaining after the hydrothermal reaction was removed by soaking in FeCl3 solution for 3 days. The fabrication of the C/Ni3S2-16 sample was optimized by changing the reaction temperature and time; the electrochemical results showed that the C/Ni3S2-16 sample obtained at 120 °C for 16 h, with an active-material loading of ∼1.5 mg cm−2, possessed the best performance and superior capacitance. Accordingly, the mass loading of the porous carbon electrode was about 6.0 mg cm−2. Figure 12(a) compares the CV curves of the C/Ni3S2-16 and C electrodes at a scan rate of 60 mV s−1 within their stable operating voltage windows. Figure 12(b) shows the CV curves of the asymmetric capacitor at sweep speeds of 30, 50, 70, and 90 mV s−1, which exhibit obvious redox peaks, indicating pseudocapacitive behavior. As the scan rate increased, the current response also increased. When the scan rate was increased to 90 mV s−1, the peak shape was well maintained, indicating good rate performance. Figure 12(c) shows the GCD curves at current densities from 0.2 to 2 A g−1, which can be divided into a linear region and a plateau region. The linear region of the discharge curve reflects electric double-layer capacitance behavior, while the bending of the curve in the plateau region indicates the pseudocapacitive behavior of the redox reaction. The specific capacitances are 114.3 F g−1, 107.2 F g−1, 106 F g−1, 96 F g−1 and 89.3 F g−1, obtained at current densities of 0.2 A g−1, 0.4 A g−1, 1 A g−1, 1.5 A g−1 and 2 A g−1, respectively. Because the redox reaction of the active material is weakened at higher current density, the capacitance decreases as the current density increases. Furthermore, the C/Ni3S2-16//C asymmetric supercapacitor has excellent rate performance, retaining 78.1% of its initial capacitance. The C/Ni3S2-16//C asymmetric supercapacitor achieves an operating voltage of 1.5 V, which improves the energy density of the supercapacitor. The Ragone plot calculated from these data is shown in figure 12(d). At an operating voltage of 1.5 V and a power density of 150 W kg−1, the energy density is as high as 35.7 Wh kg−1, and it remains at 27.9 Wh kg−1 at a power density of 1500 W kg−1, showing the advantages of C/Ni3S2-16//C, evidently better than those of other metal compounds [29,30].
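The energy and power densities on the Ragone plot follow from the cell capacitance as E = CΔV²/(2 × 3.6) (Wh kg−1, with C in F g−1) and P = 3600E/Δt, where Δt = CΔV/I is the discharge time. The sketch below simply re-evaluates these textbook relations with the cell capacitances reported above; it is not the authors' calculation script.

```python
def ragone_point(c_cell_f_per_g, delta_v, current_a_per_g):
    """Energy (Wh/kg) and power (W/kg) of a cell from its gravimetric capacitance."""
    energy_wh_kg = 0.5 * c_cell_f_per_g * delta_v**2 / 3.6        # J/g -> Wh/kg
    discharge_time_s = c_cell_f_per_g * delta_v / current_a_per_g
    power_w_kg = energy_wh_kg * 3600 / discharge_time_s
    return energy_wh_kg, power_w_kg

print(ragone_point(114.3, 1.5, 0.2))   # ~ (35.7 Wh/kg, 150 W/kg)
print(ragone_point(89.3, 1.5, 2.0))    # ~ (27.9 Wh/kg, 1500 W/kg)
```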
Based on the above analysis, the C/Ni3S2-16 composite has good electrochemical performance. Firstly, the multistage pore structure of the porous carbon [23] supports the loaded Ni3S2 nanosheet structure, which is beneficial for the Faradaic reactions and facilitates charge adsorption-desorption processes. Simultaneously, the porous carbon also provides active centers for EDLC, leading to a high specific capacitance. Secondly, Ni3S2 has ultrathin nanosheets and an open structure, which enhances contact between the electrolyte and the electrode, thereby significantly increasing the capacitance. Thirdly, the direct contact between the underlying NF and the C/Ni3S2-16 composite avoids the use of conductive additives and polymer binder, which significantly reduces the electrode resistance and ensures a high electrochemical utilization rate of the electrode material.
Conclusions
In this work, a simple one-step hydrothermal method was used to prepare a three-dimensional biomass porous carbon-loaded Ni3S2 nanosheet composite grown directly on nickel foam. The structure was tuned by adjusting the reaction time, synthesizing a porous carbon-based composite electrode with an ultrathin layered Ni3S2 nanosheet structure. The network structure of the porous carbon prevented agglomeration of the metal sulfide and expanded the active area, and the assembled C/Ni3S2-16//C asymmetric supercapacitor possessed outstanding electrochemical performance.
(1) At a reaction temperature of 120 °C, when the reaction time was extended to 16 h, the product consisted of ultrathin nanosheets. A large number of nanosheets were interconnected to form a feather-like microstructure, and the gaps between the nanosheets were large. In addition, the electrochemical performance of the nanosheet composite was better than that of the nanoparticle composites, so C/Ni3S2-16 was more suitable as a supercapacitor electrode material.
(2) Compounding Ni3S2 with the biomass porous carbon, with its rich pore structure, led to the exposure of more of the ultrathin Ni3S2 nanosheets and promoted electrolyte penetration into the electrode material; meanwhile, the porous carbon also provided electric double-layer capacitance, improved the conductivity, and stabilized the volume structure of the metal sulfide in the electrode.
(3) C/Ni3S2-16 exhibited a high specific capacitance of 600 F g−1 at a current density of 20 A g−1. After 5000 charge-discharge cycles at a current density of 10 A g−1, the specific capacitance retention rate was 88.8%. The assembled C/Ni3S2-16//C asymmetric supercapacitor achieved an energy density as high as 35.7 Wh kg−1 at a power density of 150 W kg−1, which remained at 27.9 Wh kg−1 at a high power density of 1500 W kg−1, showing the advantages of C/Ni3S2-16//C.
|
v3-fos-license
|
2024-04-19T13:06:22.965Z
|
2024-04-19T00:00:00.000
|
269215137
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://eglj.springeropen.com/counter/pdf/10.1186/s43066-024-00330-x",
"pdf_hash": "db7493efa8d745bd99129444654f15cc0661819c",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42412",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "5c6dda3985a0504e514f8faa0f957edaea884936",
"year": 2024
}
|
pes2o/s2orc
|
Role of magnetic resonance imaging (MRI) in liver iron quantification in thalassemic (thalassemia major) patients
Background Iron overload is a major problem in beta thalassemia patients due to repeated blood transfusions. The liver is the first organ to be loaded with iron. An accurate assessment of iron overload is necessary for managing iron chelation therapy in such patients. Iron quantification by MRI scores over liver biopsy due to its non-invasive nature. Methods Fifty-one patients with thalassemia major were subjected to 3.0-T MRI. Multiecho T2* sequence was used to cover the entire liver. Region of interest (ROI) was placed in three areas with maximum signal change, and an average T2* value was obtained. Similarly, a single ROI was placed at the mid-interventricular septum in the heart, and T2* value was obtained. T2* values so obtained were converted to iron concentration with the help of a T2* iron concentration calculator. The liver iron values were correlated with serum ferritin value. Results There was a significant negative correlation between liver iron concentration (LIC) and T2* value of the liver ( r = − 0.895, p < 0.01) and between cardiac iron concentration (CIC) and T2* value of the heart ( r = − 0.959, p < 0.01). There was a slight positive correlation between LIC and serum ferritin ( r = 0.642, p < 0.01) and no correlation between CIC and serum ferritin ( r = − 0.137, p = 0.354). Conclusions MRI is a useful tool to titrate the doses of chelating agents as it is accurate and non-invasive, does not involve radiation hazards and hence can be repeated as and when needed. Simultaneous assessment of cardiac iron overload is an added advantage of MRI.
Background
Thalassemia is an autosomal recessive disorder of haemoglobin synthesis. Most patients, particularly those with thalassemia major, need frequent blood transfusions, which leads to the accumulation of iron in various organs, especially the liver, heart and endocrine organs. The liver is the first organ to be loaded with iron [1].
Initially, transferrin binds with excess iron, and when the capacity of transferrin to take up excess iron is surpassed, free iron appears and starts accumulating in the organs. Free iron is toxic to the cells, leads to tissue damage and causes morbidity and mortality. It leads to liver fibrosis, heart failure, arrhythmia, etc. Humans have no active mechanism to excrete excess iron. Almost all patients with thalassemia major who regularly receive blood transfusions accumulate toxic amounts of iron by the age of 10 years or earlier and acquire a potentially lethal iron burden by early adolescence if not given any treatment to remove excess iron. Iron toxicity is the most important factor in causing organ damage in thalassemia patients and makes effective chelation an absolute requirement to alleviate its impact. Accurate assessment of iron overload is therefore necessary for managing iron chelation therapy in beta thalassemia patients who receive multiple transfusions [2].
Adjustment of the doses of iron chelation therapy is very crucial in patient management, which in turn depends on the extent of iron overload. There are different methods to measure iron overload. Serum ferritin is a widely used marker to measure iron overload, but it is also an acute phase protein, and hence its levels can be influenced by inflammation, use of chelation therapy, infection, vitamin C level and liver damage [3].
Clinically, it is believed that the amount of serum ferritin (SF) reflects the amount of iron stored in the liver. Although there is a broad correlation between SF level and liver iron, the prediction of iron loading from SF can be unreliable [4].
Liver biopsy is the gold standard to detect iron overload, but it is invasive and associated with various complications.Also, the distribution of iron in the liver is uneven, so it is difficult to get the best biopsy specimen.
Recently, biopsy is being replaced by the magnetic resonance imaging (MRI) technique. T2* MRI can measure the concentration of iron in the liver and heart, is noninvasive and is beneficial for appropriate chelation treatment for individual persons [5].
Methods
The study was conducted in 51 patients with thalassemia major (age 5-25 years, mean age 15.23 ± 5.2 years, 37 males and 14 females), more than 5 years of age and receiving repeated blood transfusions, registered with the thalassemia day care centre of a tertiary care centre. Patients already having any cardiac complication not associated with iron overload were excluded from the study. All patients except 2 had thalassemia facies. The frequency of blood transfusion ranged from once in 15 days to once in 20 days. Most patients were given oral chelation therapy (deferiprone and/or deferasirox). The haemoglobin level of most of the patients was maintained between 8 and 9 g/dl. The total leukocyte count of all the patients except 1 (who had a mild increase in leukocyte count, 11,500/cc) was within the normal range. All the patients were screened for hepatitis B (HBsAg). None of our patients had hepatitis B.
ROI was placed in three areas with maximum signal change, and an average T2* value was obtained (Fig. 1). Major vessels of the liver were avoided while placing the ROI. Similarly, a single ROI was placed at the mid-interventricular septum in the heart on a short-axis view, and a T2* value was obtained (Fig. 2). The T2* values so obtained were converted to iron concentration with the help of a T2* iron concentration calculator (available as freely downloadable software).
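For readers unfamiliar with the underlying processing, the ROI signal from a multiecho acquisition is typically fitted to a monoexponential decay S(TE) = S0·exp(−TE/T2*), and the resulting T2* (or R2* = 1/T2*) is then converted to iron concentration with a published, field-strength-specific calibration such as the one built into the calculator mentioned above. A minimal sketch with made-up echo data (roughly corresponding to a mildly overloaded liver, not patient data):

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(te_ms, s0, t2star_ms):
    # Monoexponential signal decay across echo times.
    return s0 * np.exp(-te_ms / t2star_ms)

# Hypothetical mean ROI signal across a multiecho T2* acquisition (not patient data).
te = np.array([0.9, 2.1, 3.3, 4.5, 5.7, 6.9, 8.1, 9.3])        # echo times, ms
signal = np.array([980, 810, 670, 555, 460, 380, 315, 260])     # arbitrary units

popt, _ = curve_fit(decay, te, signal, p0=(signal[0], 5.0))
s0, t2star = popt
print(f"T2* = {t2star:.2f} ms, R2* = {1000.0 / t2star:.0f} Hz")  # ~6.3 ms here
```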
LIC was calculated in all 51 patients while CIC was calculated in 48 patients.
Serum ferritin measurement
MRI examination was done within 5 days of blood transfusion. A blood sample was taken from each patient within 15 days of the MRI examination, and the serum ferritin concentration was obtained by the ELISA technique using a standard Calbiotech ferritin SA ELISA kit to evaluate iron overload.
A blood specimen was collected, and the serum was separated immediately by the centrifugation method. The serum was stored at 2-8 °C for the duration (not more than 30 days) before the ferritin analysis was done. Using the reagents provided in the kit, the ELISA test was first performed on the six ferritin standards (the concentrations of these standards were known) which were available in the ferritin kit, and their absorbance values were noted at 450 nm optical density (OD) (Table 1).
A standard concentration curve was then drawn by plotting the absorbance value at OD 450 nm (horizontal axis) versus the known concentrations of the six ferritin standards (vertical axis), as shown in Fig. 3, and the value for each sample was calculated using this standard curve. The liver iron values were correlated with serum ferritin values.
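A sketch of how sample concentrations can be read off such a standard curve by interpolation; the absorbance and concentration values below are placeholders, not the kit's actual standards:

```python
import numpy as np

# Placeholder standard curve: absorbance at OD450 vs known ferritin concentration (ng/ml).
std_abs  = np.array([0.05, 0.12, 0.28, 0.55, 1.10, 2.05])
std_conc = np.array([0.0, 15.0, 80.0, 250.0, 500.0, 1000.0])

def ferritin_from_od(od450):
    """Piecewise-linear interpolation on the standard curve (monotonic absorbances assumed)."""
    return np.interp(od450, std_abs, std_conc)

print(ferritin_from_od(0.40))   # a sample absorbance -> ferritin estimate in ng/ml
```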
Statistical analysis
Numerical data were expressed as mean ± 2 standard deviations. The Spearman correlation test was used to correlate the T2* value of the liver with liver iron concentration and serum ferritin, liver iron concentration with serum ferritin, the T2* value of the heart with cardiac iron concentration and serum ferritin, cardiac iron concentration with serum ferritin, and the T2* values of the liver and heart. A two-sided p value < 0.01 was regarded as statistically significant. Statistical analysis was performed using IBM SPSS Statistics V22.0.
Results
Out of 51 patients studied for liver iron concentration, 4 showed normal iron levels, 45 had mild iron deposition, 2 had moderate and none had severe iron deposition. Of the 48 patients studied for cardiac iron concentration, 45 were normal, 3 had mild to moderate iron deposition and none had severe iron deposition. The reference ranges for liver and cardiac iron concentrations along with the number of cases are shown in Table 2.
The mean T2* value for the liver in the study group was 5.63 ± 2.38 ms, and the range was 2.2-12.9 ms. The mean LIC was 3.38 ± 1.36 mg/g dry weight, and the range was 1.30-7.30 mg/g dry weight.
There was a strong negative correlation between the T2* value of the liver and LIC with a correlation coefficient of − 0.895 (p value < 0.01) (Fig. 4), a negative correlation between the T2* value of the liver and serum ferritin with a correlation coefficient of − 0.636 (p value < 0.01) and a positive correlation between LIC and serum ferritin with a correlation coefficient of 0.642 (p value < 0.01), which is moderately significant.
The mean T2* value of the heart obtained at the mid-interventricular septum of 48 patients was 33 ± 16.5 ms, and the range was from 8.3 to 78.8 ms. The mean cardiac iron concentration was 0.61 ± 0.31 mg/g dry weight, and the range was 0.30-1.80 mg/g dry weight.
There was a strong negative correlation between the T2* value of the heart and CIC with a correlation coefficient of − 0.959 (p value < 0.01) (Fig. 5). There was no correlation between the T2* value of the heart and serum ferritin with a correlation coefficient of 0.135 (p value = 0.359). There was no correlation between CIC and serum ferritin with a correlation coefficient of − 0.137 (p value = 0.354).
Serum ferritin of all 51 patients was obtained. The mean serum ferritin level in the study group was 1871 ± 594 ng/ml. These results show that T2* values as calculated on the T2* multiecho sequence of MRI very accurately predict both liver and cardiac iron concentration, while serum ferritin levels only moderately predict liver iron concentration and cannot accurately determine cardiac iron concentration. Hence, MRI is a better tool to titrate the doses of chelating agents as it is accurate, non-invasive, does not involve radiation hazards and can be repeated as and when needed. Moreover, the iron load in the liver also does not give an idea about the iron load of the heart. Hence, it is better to image the heart along with the liver (it is a comparatively short sequence and does not add much to scanning time), as both the liver iron level and serum ferritin do not predict cardiac iron status, which is important to assess since cardiac overload can lead to fatal heart problems.
Discussion
The majority of our patients (88.2%) had mild iron deposition in the liver. Only 3.9% had moderate iron deposition while none had severe iron deposition. This may be because all our patients were under institutional care and already receiving chelation therapy. Out of 48 patients imaged for cardiac iron, 93.7% were normal, 6.3% had mild to moderate cardiac iron deposition and none had severe iron deposition. Wahidiyat et al. [6] found that 85.2% of the subjects had normal cardiac iron stores while 70.4% of the subjects had severe liver iron overload. Suthar et al. [7] found a negative correlation between serum ferritin and the T2* value of the liver (r = − 0.448, p < 0.01), but no correlation was found between serum ferritin and the T2* value of the heart (r = − 0.221, p = 0.060). We also observed a negative correlation between serum ferritin and the T2* value of the liver (r = − 0.636, p < 0.01), but no correlation was seen between the T2* value of the heart and serum ferritin (r = 0.135, p = 0.359). Our study correlates well with Suthar et al.'s [7] study but showed a slightly stronger negative correlation between serum ferritin and liver T2* value (− 0.636 vs − 0.448). Leung et al. [8] also found an inverse correlation between liver T2* value and both current and 12-month average serum ferritin (r = − 0.44, p = 0.003; r = − 0.46, p = 0.002). Zamani et al. [9] found a moderate negative correlation between serum ferritin levels and liver MRI T2* values (r = − 0.586, p = 0.000). Mandal et al. [10] found a moderate correlation between LIC and serum ferritin levels (r = 0.522; p < 0.001). Wahidiyat et al. showed a slight correlation (r = 0.37) between LIC and serum ferritin [6]. Angulo et al.'s [11] retrospective study, however, showed no correlation between mean ferritin and LIC.
Similar to our study, most of the previous studies, like that of Leung et al. [8], also failed to correlate serum ferritin level and the T2* value of the heart, except for the studies by Azarkeivan et al. [12] (r = − 0.361) and Wahidiyat et al. [6] (r = − 0.28), which showed a poor negative correlation between the two. Mandal et al. [10] showed a slight positive correlation between CIC and serum ferritin (r = 0.483).
In fact, most of the studies have shown the same trend: a weak correlation between liver iron concentration and serum ferritin and no correlation between cardiac iron concentration and serum ferritin. This may be because, though acceptable for clinical purposes, the value of cardiac iron concentration as measured by T2*W imaging is less reliable than that of the liver for two reasons: first, susceptibility artefacts due to the interface with air in the lungs, and second, continuous cardiac motion. This susceptibility effect is more prominent in the 3.0-T system as compared to the 1.5-T system. Moreover, the TE value cannot be reduced beyond a certain limit in cardiac imaging to accommodate cardiac gating. This also affects the results. Kolnagou et al. [13] observed that serum ferritin correlated with the T2* of the spleen (r = − 0.81), liver (r = − 0.63) and pancreas (r = − 0.33) but not with the heart. A similar trend was observed in the correlation of liver T2* with the T2* of the spleen (r = 0.62) and pancreas (r = 0.61) and none with the heart. These studies contradict previous assumptions that serum ferritin and liver iron concentration are proportional to the total body iron stores in thalassemia and especially to the cardiac iron load. Previously, it was thought that the liver, being the largest storage site of iron, if overloaded, would proportionately affect organs such as the heart as well. However, we failed to correlate these two parameters. The correlation coefficient between the T2* values of the liver and heart in our study was r = 0.051 with a probability of 0.365, both of which are insignificant. Our results are consistent with the findings of Kolnagou et al. [13] and Azarkeivan et al. [12]. Azarkeivan et al. [12] observed that the correlation coefficient between T2* of the liver and heart was 0.281, which is insignificant.
Fig. 4 The scatter dot chart showing a negative correlation between the T2* value of the liver and liver iron concentration
Fig. 5 The scatter dot chart showing a negative correlation between T2* cardiac and cardiac iron concentration
Voskaridou et al. [14] showed that heart T2* values correlated with left ventricular ejection fraction in thalassemia major, but further suggested that results of T2 relaxation for the heart become reliable only when there is heavy iron deposition.Carpenter et al. studied the role of T2* magnetic resonance in monitoring iron chelation therapy.They suggested that the lowest values of myocardial T2 * < 10 ms predict a high risk of the development of cardiac failure (p < 0.001).Analysis of T2* revealed an increasing risk of developing heart failure with progressively lower T2* values with the greatest risk in patients with T2* < 6 ms.It can be used to monitor chelation, allowing individually tailored chelation therapy to improve outcomes and prevent cardiovascular complications [15].In our study, three patients had mild to moderate cardiac iron deposition.One was a 14-year-old girl who had mild LIC and normal CIC when first scanned, but then she defaulted on chelation therapy.A repeat MRI done 6 months later showed mild LIC, but now, there was mild to moderate CIC (1.4 mg/g) also with cardiac T2* value of 10.81 ms, and her ejection fraction was 45%.She developed septicemia also and succumbed (though her cardiac iron was mild to moderate, the already compromised heart probably could not tolerate the stress of septicemia).During this 6-month period, her serum ferritin shot from 1119 to 10,000 ng/ ml, which could be due to a combination of increased body iron (due to default on chelation therapy) as well as due to inflammation associated with septicemia.Two boys (aged 17 and 12 years) also had mild to moderate cardiac iron deposition (1.6 mg/g and 1.8 mg/g, respectively).The dose of chelation was increased for both the boys, and both are doing fine now (previously, both were on oral deferasirox 30 mg/kg OD, after cardiac involvement was found on MRI, desferrioxamine subcutaneous was added).
Casale et al. [15] suggested serum ferritin ≥ 2000 ng/ ml and liver iron concentration ≥ 14 mg/g/dry weight as the best threshold for predicting cardiac and hepatic iron overload (p = 0.001 and p < 0.0001, respectively).A homogeneous pattern of myocardial iron overload was associated with negative cardiac remodelling and significantly higher liver iron concentration (p < 0.0001).Myocardial fibrosis by late gadolinium enhancement was detected in 15.8% of the patients [15].We, however, could not find late gadolinium enhancement probably because the patients had only mild to moderate CIC, and as they were under institutional care, titration of chelation was immediately done.The only girl child who deteriorated was dead before a detailed workup could be done.Even the threshold of ≥ 2000 ng/ml ferritin for cardiac iron overload did not hold true in our study.All three patients with cardiac iron deposition had serum ferritin well below 2000 ng/ml.
There are some important technical points which need to be kept in mind while measuring liver/cardiac iron concentration using T2*W imaging.TE needs to be minimum (the first echo should preferably be < 1 ms on the 1.5-T system and < 0.5 ms on the 3.0-T system).If the organ iron load is high, it may give fallacious readings or no values.In such conditions, try to minimise the TE.This can be achieved by decreasing frequency or increasing bandwidth.Even after these corrections, if the values seem to be doubtful, a three-parametric fitting algorithm should be used instead of the default two-parametric fitting algorithm.The greyscale images of the multiecho sequence should always be analysed before calculating the T2* value/LIC.This will help in identifying false lower values of iron in cases of heavy iron load.The first two images of the multiecho sequence should have at least some liver signal.The collapse of signal in the first few images of the multiecho sequence is an indicator of heavy iron overload and such patients should be rescanned with parameter modification as described (Fig. 6).Another way to take care of fallacious values in severe iron load is to take the ratio of signal intensities of liver and paraspinal muscles, but for this, a body coil should always be used and not surface coil.The two methods (T2* valuebased LIC and the ratio of signal intensities of liver and paraspinal muscles) can, in fact, be used together also.It is important to standardise the method used and stick to that particular method in follow-up examinations.
This study faced some limitations, such as the single measurement of serum ferritin, which was done within 15 days of the MRI examination. An average of the last 6 months' ferritin levels might have reflected the true status, but some recent studies have also used a single measurement of serum ferritin [14]. Interpretation of serum ferritin values may be confounded by a variety of conditions that alter serum ferritin concentrations independently of changes in the body's iron burden, including vitamin C deficiency, fever, acute and chronic hepatic damage, hemolysis and ineffective erythropoiesis, all of which are common in patients with β-thalassemia major [3]. Fatty infiltration of the liver may also affect the T2* and hence the liver iron concentration values.
Out of the 51 cases, 4 cases showed normal T2* values of the liver despite moderately high serum ferritin concentration. These variable results could be because of the differences in clinical, genetic and demographic characteristics of the study population such as age, sample size, serum ferritin levels, chelating protocols and iron kinetics of different organs.
To conclude, MRI is the most sensitive and specific imaging modality in the diagnosis of parenchymal iron overload in thalassemia patients on regular blood transfusion. The susceptibility effect caused by the accumulation of iron leads to signal loss in the affected tissues, particularly with the T2*-weighted sequences, which makes the diagnosis of iron overload possible in a noninvasive way, thereby avoiding repeated biopsies [5]. As the involvement of the heart and liver are the major determinants of mortality in thalassemia, these organs need to be screened regularly for iron deposition during chelation therapy. Now, MRI-based organ assessment is considered the gold standard and should be used for assessing iron concentrations in various organs, mainly the liver and heart.
Conclusions
MRI is a useful tool to titrate the doses of chelating agents as it is accurate, non-invasive, does not involve radiation hazard and hence can be repeated as and when needed. Simultaneous assessment of cardiac iron overload is an added advantage of MRI.
Fig. 1 A, B Greyscale T2*W and corresponding colour-coded axial image of the liver showing ROI with a T2* value of 6.33 ms, corresponding LIC being 2.6 mg/g. C T2 decay graph in the region of the ROI in the liver
Fig. 2 A, B Greyscale T2*W and corresponding colour-coded image of the heart showing ROI in the interventricular septum with a T2* value of 8.38 ms, corresponding CIC being 1.8 mg/g. C T2 decay graph in the region of the ROI in the heart
Fig. 3 Standard concentration curve of six standard ferritin reagents
Fig. 6 T2* images representing a normal (upper row) and a severely iron-overloaded liver (lower row). In the normal liver, the signal intensity does not change significantly as the echo times (TEs) increase. In comparison, a severely overloaded liver is dark even at a TE of 1.3 ms and completely black at the subsequent TEs
Table 1
Absorbance values at OD 450 nm and concentration of six ferritin standards
Table 2
Reference range for T2* value and iron concentration for mild, moderate and severe cases of liver and cardiac iron overload and number of patients falling in each category
|
v3-fos-license
|
2022-08-28T15:09:04.240Z
|
2022-08-25T00:00:00.000
|
251879762
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1660-4601/19/17/10594/pdf?version=1661419503",
"pdf_hash": "c3506179d6b2fb97fd63a3cc4c674b7f27024f18",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42413",
"s2fieldsofstudy": [
"Computer Science"
],
"sha1": "820cf78e7113511ea340f5cec477734dca84f644",
"year": 2022
}
|
pes2o/s2orc
|
Machine Learning and Criminal Justice: A Systematic Review of Advanced Methodology for Recidivism Risk Prediction
Recent evolution in the field of data science has revealed the potential utility of machine learning (ML) applied to criminal justice. Hence, the literature focused on finding better techniques to predict criminal recidivism risk is rapidly flourishing. However, it is difficult to establish the state of the art for the application of ML in recidivism prediction. In this systematic review, out of 79 studies from the Scopus and PubMed online databases, we selected 12 studies that guarantee the replicability of the models across different datasets and their applicability to recidivism prediction. The different datasets and ML techniques used in each of the 12 studies have been compared using the two selected metrics. This study shows that each applied method achieves good performance, with an average score of 0.81 for ACC and 0.74 for AUC. This systematic review highlights key points that could allow criminal justice professionals to routinely exploit predictions of recidivism risk based on ML techniques. These include the presence of performance metrics, the use of transparent algorithms or explainable artificial intelligence (XAI) techniques, as well as the high quality of input data.
Introduction
Recidivism rates have a major impact on public safety and increase the cost of incarceration. An estimate of the economic and social costs of recidivism has been provided by one study conducted in England and Wales over a 12-month follow-up, indicating GBP 18.1 billion for the 2016 criminal cohort [1]. This cost is also driven by the high rate of criminal recidivism, reported to be as high as 50% in many international jurisdictions [2]. In addition to the social costs, it is also important to mention the devastating consequences that recidivism causes for victims, communities and offenders as well as their own families, who are usually not even mentioned among the injured parties. It is therefore very important to try to reduce the high rate of criminal recidivism and mitigate these effects.
A discipline that in recent years has provided an important contribution by attempting to predict criminal recidivism is data science. Indeed, data science is the application of quantitative and qualitative methods to solve relevant problems and predict outcomes [3][4][5]. In particular, using the potential of risk assessment tools and machine learning (ML) algorithms to predict the risk of criminal recidivism in order to reduce its spread has been outlined in the literature since the 1920s [6]. Over the years, the methodologies improved, becoming more reliable due to the development of various datasets and ML models able to support judicial decisions on probation, length of sentence or application of better rehabilitation strategies. ML models can indeed be applied in various criminal justice areas. At the group level, they can be profitably used to monitor or predict the effects of criminal
Eligibility Criteria
The methods for systematic review have been structured according to the PRISMA Statement. No language, publication date, or publication status restrictions were imposed. The features of included studies are the following: (1) the aim of the study is to predict recidivism; (2) the study has explicit accounting of data collection methods; and (3) the study has proper description of the methodologies, including the applied machine learning methods.
In contrast, the authors did not include studies in which one or more of the following criteria were satisfied: (1) the main purpose of the study is to reduce the bias of the ML model (e.g., race-based bias); (2) the study has the aim to predict the psychiatric characteristics in the reoffender (e.g., mental illness); and (3) the study lacks the accuracy (ACC) or the Area Under the Curve (AUC) metrics necessary to evaluate machine learning models.
Information Sources and Search Strategy
Studies have been selected from two online databases: Scopus and PubMed. We used the following search equation, querying by title, abstract and keywords: (("crim*") OR ("offen*") OR ("violat*")) AND (("recidiv*") OR ("relapse")) AND (("machine learning") OR ("artificial intelligence") OR ("deep learning")), resulting in a total of 79 bibliographic records. References listed in the included papers have also been examined to identify studies meeting our inclusion and exclusion criteria. The search was last performed in January 2022. We first screened titles and excluded those clearly not meeting the eligibility criteria. Then abstracts were examined, and lastly, full texts were read, eventually leading to the inclusion or exclusion of the papers according to the above-described criteria. The screening of the literature was performed in blind by two investigators (S.B. and F.P.). In the case of disagreement, a third reviewer (M.B.) assessed the paper to achieve a consensus.
Assessment of Risk of Bias
The ROBIS tool was used to assess the risk of bias of the included systematic reviews [20]. The ROBIS tool is a method for assessing bias in systematic reviews that consists of three phases: (1) assessing relevance (optional); (2) identifying concerns with the review process; and (3) judging risk of bias in the review. Phase two involves assessing the review across four domains: (1) study eligibility criteria; (2) identification and selection of studies; (3) data collection and study appraisal; and (4) synthesis and findings. The third phase summarizes the concerns identified during phase two. The first two phases were performed independently by two authors (SB, FP) and resulting discrepancies were addressed by a third author (GT or MB).
Study Selection
A total of 16 duplicates were identified and removed from the 79 preliminary results. In addition, 33 articles were removed by looking at the title and abstract only, since they did not match the criteria. The remaining papers were selected or excluded after a comprehensive analysis of the full text. Among those papers, 18 were excluded since they did not satisfy the eligibility criteria, for the following reasons:
• The purpose of fourteen papers was to reduce model bias;
• One paper assessed only the psychiatric characteristics of repeat offenders;
• Three papers did not clearly describe the methodology.
A total of 12 studies were finally selected. Figure 1 summarizes the results of the study selection, represented as a PRISMA flow diagram obtained following the guidelines published by Page and colleagues [19].
Study Characteristics
To present the results of the studies, we divided them into three sections. The first one analyses the dataset and the ML techniques applied within each study. We first focused on the characteristics of the datasets. Then we checked whether the authors used ML techniques such as data pre-processing or cross validation (CV). The former is a technique for transforming raw data into a format understandable by the ML models. CV, instead, is a technique for estimating whether the ML models are able to correctly predict data not yet observed. This first section is important because different datasets and different ML techniques can greatly influence the final performance of ML models. In the second section, we analysed the type of recidivism that each study aimed to predict. Then we selected the ML model that obtained the best performance in each study. Finally, in the third section, by dividing the studies into four categories based on their aim, we compared the performance of each ML model based on specific metrics.
Characteristics of Dataset and ML Techniques
The main features of the considered studies are listed in Table 1. The datasets are different in each study. Two of them used only data from a correctional institution or the justice system. The first one was based on 3061 youth charged with a sexual offense in Florida who were monitored over two years after the initial charge to determine sexual recidivism [26]. The aim of Ozkan and colleagues was to examine whether ML models could provide a better prediction than classical statistical tools. Thus, the authors took advantage of the statistical models using several predictor variables, including historical risk assessment data and a rich set of developmental factors for all youth reported for delinquency by the Florida Department of Juvenile Justice (FDJJ). The second study, published by Butsara [21], used a dataset from a central correctional institution for drug addicts and a central women's correctional institution in Thailand. The sample consists of 300 male and 298 female inmates. The authors proposed a method to find the crucial factors for predicting recidivism in drug distribution and investigated the power of ML in predicting recidivism.
Five papers refer to risk assessment tools created specifically to predict recidivism across the country. Among them Karimi-Haghighi and Castillo used RisCanvi, a risk assessment protocol for violence prevention introduced in the Catalan prison during 2009 in which professionals conducted interviews resulting in the creation of a risk score through some risk elements [25]. The elements are included in five risk areas of prisoners: criminal/penitentiary, family/social, clinical and attitudinal/personal factors. The dataset used includes 2634 cases. The Duwe and Kim study considers 27,772 offenders released from Minnesota prisons between 2003 and 2006 [22]. The authors used a dataset from the Minnesota Screening Tool Assessing Recidivism Risk (MnSTARR), which assesses the risk of five different types of recidivism, taking advantage of the Minnesota Sex Offender Screening Tool-3 (MnSOST-3), used to analyse sexual recidivism risk for Minnesota sex offenders. Tollenaar and colleagues in two different studies used the StatRec scale with static information from the Dutch Offender Index (DOI) [31,32]. In both studies, the recidivism prediction is divided into three categories: general, criminal and violent recidivism. The dataset is based on offenders over the age of 12 found guilty during a criminal case that ended in 2005. In the more recent one, the authors also included public access data from the North Carolina prison in the dataset to investigate the generalizability of the results. These data feature all individuals released from July 1977 to June 1978 and from July 1979 to June 1980. Both cohorts were tested with ML models, but for this review we consider only the 1977-1978 data excluding the Tollenaar's 2019 study [32] because the 1980 cohort showed a worse calibration probability. The dynamic elements of the Finnish Risk and Needs Assessment Form (RITA) were used by Salo and colleagues [27] to predict general and violent recidivism. The sample included 746 men sentenced to a new term of imprisonment. All individuals must have the full RITA, which considers 52 items such as aggression, alcohol problems, drug use, work problems, coping with economic problems and resistance to change.
Another study that combined statistical features with other specific risk assessment tools is by Tolan and collaborators [30]. The authors compared different combinations of datasets and ML models in terms of AUC and then showed the results with the Structured Assessment of Violence Risk in Youth (SAVRY) features, with which the ML model performed best. The SAVRY is a violence risk assessment tool including 24 risk factors and 6 protective factors. These risk factors are divided into historical, individual and social/contextual categories. The data analysed in the study were extracted from the Catalonia juvenile justice system and included 853 juvenile offenders between the ages of 12 and 17 who finished a sentence in 2010 and were subjected to a SAVRY analysis.
Only one study used the Historical, Clinical and Risk Management-20 (HCR-20) with 16 other clinical and non-clinical risk assessment factors in order to determine the likelihood of recidivism among first-time offenders (FTOs) [28]. Data were collected from various prisons in the Indian state of Jharkhand. The study was conducted on 204 male inmates aged between 18 and 30, most of them below the poverty line.
Finally, it is important to mention the paper of Haarsma and his working group [24] in which the authors used the NeuroCognitive Risk Assessment (NCRA), a neurocognitive test-based risk assessment software able to measure key criminogenic factors related to recidivism. In this study, 730 participants in the Harris County Department of Community Supervision and Corrections self-administered the NCRA. The individual's recidivism risk score by NCRA combined with a set of demographic features was quantified using a ML model.
After compiling the dataset, each study used different ML techniques to analyse and improve the reading of the data and results. Four studies used pre-processing to improve the datasets. Two of them preferred a feature selection technique [24,26]. One study used a generic data standardization and then applied feature selection [21]. The last one used ANOVA to identify relevant attributes for the current dataset [28].
Another relevant aspect to mention is the authors' choice to use an ML technique called cross validation (CV) in order to estimate the ability of ML models to generalize to data not yet observed. Among the studies included in the review, nine used CV.
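As a concrete illustration of these two techniques, the sketch below chains a simple pre-processing step (standardization plus feature selection) with k-fold cross validation around a recidivism classifier. The data, feature count and pipeline choices are hypothetical and only mirror the kind of workflow described in the reviewed studies, not any specific study's implementation.

```python
# Minimal sketch: pre-processing + 5-fold cross validation for a binary
# recidivism classifier. Data and feature dimensions are invented.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))    # 500 offenders, 20 hypothetical risk features
y = rng.integers(0, 2, size=500)  # 1 = reoffended during follow-up, 0 = did not

pipeline = Pipeline([
    ("scale", StandardScaler()),               # standardize raw features
    ("select", SelectKBest(f_classif, k=10)),  # ANOVA-based feature selection
    ("clf", LogisticRegression(max_iter=1000)),
])

# Cross validation estimates how well the model predicts data not yet observed.
auc = cross_val_score(pipeline, X, y, cv=5, scoring="roc_auc")
print(f"Mean AUC over 5 folds: {auc.mean():.2f} (std {auc.std():.2f})")
```

Wrapping the pre-processing inside the pipeline ensures that scaling and feature selection are re-fitted on each training fold, so the cross-validated score is not inflated by information leaking from the held-out fold.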
Aim of the Studies and ML Model Applied
For a better comparison of the studies, it is possible to sort them by the type of recidivism they aim to predict. The sorting leads to four categories: general [22][23][24][28][31][32], sexual [31,32], violent [22,25,31] and all other recidivism. The last category includes studies that considered a specific type of crime [21] or referred only to males [27] or youth [26,29,30].
The previously observed datasets (Table 1) were used to train ML models to generate recidivism predictions. There are different types of models that can be selected based on the available data and the type of target to be predicted. Thus, all the studies compared different models to obtain the best results in predicting recidivism. The metrics used to compare the models were accuracy (ACC) and area under the curve (AUC). The ACC measures how often the algorithm correctly classifies a data point and represents the ratio of correctly classified observations to the total number of predictions. The AUC measures the ability of the ML models to distinguish recidivism from non-recidivism. Both metrics provide a result in the range [0,1]. When the score is near 0, the ML model has the worst ability to predict; a score close to 0.5 means that the model predicts the probability of recidivism no better than chance. On the other hand, a score close to 1 means that the ML model has a perfect ability to distinguish recidivism from non-recidivism.
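To make the two metrics concrete, the following sketch computes ACC and AUC from a handful of true outcomes and model-estimated risk scores; the numbers are invented purely for illustration and do not correspond to any study in Table 2.

```python
# Minimal sketch: accuracy (ACC) and area under the ROC curve (AUC)
# for a binary recidivism prediction. Labels and scores are invented.
from sklearn.metrics import accuracy_score, roc_auc_score

y_true = [1, 0, 0, 1, 1, 0, 1, 0]                   # 1 = recidivated, 0 = did not
y_prob = [0.9, 0.2, 0.4, 0.7, 0.6, 0.3, 0.8, 0.5]   # estimated recidivism risk

y_pred = [1 if p >= 0.5 else 0 for p in y_prob]     # threshold at 0.5 for ACC

acc = accuracy_score(y_true, y_pred)   # share of correctly classified cases
auc = roc_auc_score(y_true, y_prob)    # ability to rank recidivists above non-recidivists
print(f"ACC = {acc:.2f}, AUC = {auc:.2f}")
```

Note that ACC depends on the chosen decision threshold, whereas AUC summarizes ranking quality over all thresholds, which is why the two metrics can diverge for the same model.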
In this review, for each study, we considered only the ML model that performed best, according to the authors' observations (Table 2). The most used ML model is logistic regression [21,30,31,32], together with two of its variants, LogitBoost [22] and generalized linear models with ridge and lasso regularization (Glmnet) [24]. The second most popular model is the random forest [23,26,27,33]. The other ML models to mention are the multi-layer perceptron (MLP) [25], linear discriminant analysis (LDA) [31] and penalized LDA [32]. The performance comparisons are reported in Tables 3-6. Table 3 shows the results obtained from the different studies predicting generic recidivism. The ensemble model trained with an HCR-20+ dataset leads to improved performance and seems to be the most effective method in terms of the ACC [28]. Considering the AUC, logistic regression yields the highest score with the use of the MnSTARR+ and StatRec datasets [22,31]. In Tables 3-5, the comparison between the StatRec and DOI datasets shows that they produce the same results except for general recidivism, with a difference of 0.05 [31,32]. However, this result was not surprising since both studies used data from the Dutch Offender's Index. In contrast, for violent recidivism (Table 5), the multi-layer perceptron (MLP) trained with the RisCanvi dataset performed better than the other techniques [26]. Table 6 shows the results of studies that include all other recidivism. These studies are difficult to compare because each one has a different sample type, such as juvenile offenders or males, or aims to predict a specific type of recidivism (e.g., recidivism in drug distribution). However, we observed that the results obtained reflect an overall effectiveness of the prediction models. The Thailand dataset has a particular relevance, with an ACC of 0.90 obtained with logistic regression [21]. In terms of AUC, a model obtained a significant score of 0.78 using the RITA+ dataset trained with random forest [27].
Factors Involved in Predicting Recidivism
In some of the studies considered in this review, the authors described the variables that contributed most to the final evaluation. Some considerations about these results are discussed in Section 4. Below we report the variables most implicated in the results of each model (when provided by the authors).
Four top factors are identified: royal pardons or suspension, first offending age, encouragement of family members and frequency of substance abuse [21]. Among the items in the LS/CMI, they identify: Items A18 (charge laid, probation breached, parole suspended during prior community supervision), A 14 (three or more present offenses), A 423 (could make a better use of time) and A 735 (current drug problem) [23]. A total of 13 tests are selected within the NCRA, in particular: balloon analog risk task (time collected), point-subtraction aggression paradigm (grow, punish ratio), reading the mind through the eyes (correct, time median), emotional stroop (test time, black time, icon color time, Pos Neg time) and Tower of London (aborted, dup moves, illegal moves first move frac) [24]. The strongest predictors in this model (sexual recidivism) are prior felony sex offense referrals, number of prior misdemeanor sexual misconduct referrals and number of prior felony offenses [26]. Among the dynamic factors extracted from RITA's items, the most important seem to be problems managing one's economy for general recidivism and aggressiveness for violent recidivism [27]. The most significant variables affecting the model accuracy are the total YLS score, followed by difficulty in controlling behavior, age at first arrest, history of running away and family circumstances [29]. The most important features for the logistic regression model include almost all static features (sex, ethnicity, age at main crime, crime in 10 years, etc.) and only one SAVRY feature (evaluation of the expert). For the MLP model, all static features are more important than SAVRY features [30]. For general recidivism (logistic regression model), the most powerful predictors are age, conviction density, specific offense types (property offence and public order offence), the number of previous offences and home country. Considering violent recidivism (logistic regression model), the largest effects can be seen in the number of previous convictions, the most severe offence type (property crime with violence), offence type present in the index case (property crime without violence, public order, other offence) and country of origin. Lastly, for sexual recidivism (linear discriminant analysis), three main coefficients are identified: previous sexual offences and country of origin have the greatest positive effect on the probability of sexual recidivism, while the number of previous public prosecutor's disposals has the largest negative effect [31].
Reporting Biases
In this systematic review, the datasets and ML models included have been compared simultaneously. However, for a critical evaluation of the results of ML models, it is also necessary to focus on the type of dataset used to train the model.
In each study, the method of data collection is different. In some papers, the data come from the countries' institutions, while in others they come from the checklists of risk assessment tools or neurocognitive tests. Consequently, datasets have different characteristics such as sample size, mean age, type of crime and years of recidivism follow-up. These variables can significantly modify the assessment of the ML model [34].
Another relevant aspect to mention is that not all the papers declare the use of data pre-processing or are clearly explicit about the process used. As mentioned above, we applied the ROBIS assessment to highlight any possible systematic distortion. Table 7 and Figure 2 summarize the risk of bias. In detail, in Table 7, for each study the relative risk of bias in each of the four domains is assigned on a three-level scale indicating "low" or "high" risk of bias or "unclear" risk when no information is available to judge a particular item. Figure 2 depicts the relative risk of bias (as in Table 7, from "high" to "low" and "unclear" risk) among all the included studies for each domain assessed, as well as the "overall risk". Of the considered studies, nine were evaluated as low risk overall, one as high risk, and two studies resulted in an unclear risk. The main concerns relate to the inhomogeneity of the samples and the poor description of data pre-processing and analysis, which makes it harder to compare different ML techniques.
Discussion
The current state of data science reveals that ML algorithms may be very efficient. Accordingly, the results of the papers considered in this review show that each ML model has a good performance. Taking into consideration the ACC, the average score is 0.81, while considering the AUC the average score is 0.74. Given that both the ACC and the AUC range from 0 (no predictability) to 1 (perfect predictability), both average scores show a good predictability of the models. Some studies use a risk assessment for a specific type of recidivism, crime or population, while others measure the risk of general recidivism. However, comparing the two types of studies, no significant differences emerge. The only difference observed in this review is an increase in ACC of 0.03 for the specific type of recidivism. Thus, there is no evidence that the use of a more specific risk assessment could significantly improve the ability to predict criminal recidivism.
This review compares different ML methods, models and datasets used to predict recidivism. Analysing the available models, we find that the most common technique used in these papers to predict recidivism is logistic regression, while more complex algorithms are less common. In terms of performance, a simpler prediction model such as logistic regression and more complex ones such as the random forest show similar predictive validity and performance. What emerges from the results is that it does not seem essential to focus on model complexity to improve the predictability of criminal recidivism.
However, literature analysed in this systematic review pinpointed some limitations. First of all, the performance of each model does not depend only on the ML model or the dataset. The method of data collection and data pre-processing are also important. Both are aspects that can significantly affect model performance [34,35]. We observed that in the literature, it is uncommon to focus on data pre-processing techniques, which makes it difficult to compare different studies. In addition, all ML models used in the literature return a binary result, and for many of them, depending on the model and its description, it is difficult to understand which variables most influenced the final assessment. As a matter of fact, since the use of risk assessments has become more widespread and successful, this issue also emerged in recent studies in which the possibility of racial bias has been highlighted [36]. Age estimation-related biases, which in the forensic field are of significant relevance, should also be considered [37][38][39].
In Section 3.6, we reported the variables that have the greatest impact on the assessment of the risk of recidivism for each model. In this regard, it is good to make some considerations. As specified by the authors themselves, the most important variables for the final evaluation should be assessed limited only to the ML model used and to the specific dataset of that ML model. Hence, the results may not generalize to more specific subsamples or different subpopulations [31]. Moreover, for the final evaluation, it is not always possible to consider the single variables separately since the ML algorithms take into account the way in which the individual variables are combined [23]. Lastly, the complexity of the models that use interactions and nonlinear effects makes it challenging to explain individual-level predictions, and the difficulty grows along with the complexity of the model [22].
Using the ML method for risk assessment in criminal justice to predict recidivism has increased in recent years [14,23]. However, it is still a controversial topic due to the large amount of research on algorithmic fairness [40]. The purpose of this review is to analyse the state of the art of the techniques applied to predict criminal recidivism. Clearly, we did not aim to show the perfect prediction of the ML method nor to claim that it is possible to rely solely on the ML model to predict recidivism. Conversely, we highlight the strengths and limitations of data science applied to the humanities.
First, we would like to point out that it would be important to pay more attention to the dataset and data processing. Taking a few steps back and focusing on these aspects could improve model performance and reduce possible bias [35].
Moreover, in order to facilitate comparison, it would be useful to learn to compare models by having the same evaluation metrics available. Considering metrics available in the analysed papers, we observed a good overall performance of the models. This allows us to emphasize the concrete support that these tools can bring to human judgment, which is also not free of bias [41]. The binary result is a limitation for this approach, as well as algorithmic unfairness [17].
The latest machine learning models are like 'black boxes' because they have such a complex design that users cannot understand how an AI system converts data into decisions [42]. The lack of accessibility of the models and algorithms used in judicial decisions could undermine the principles of transparency, impartiality and fairness and lead to the development of discrimination between individuals or groups of individuals [43]. It would be useful to develop transparent algorithms or use explainable AI. Explainable AI consists of AI systems that can explain their rationale to a human user and characterize their strengths and weaknesses [44]. With these techniques, we could know how much each variable affected the outcome, helping to form knowledgeable opinions usable by criminal justice professionals to motivate their decisions [45,46]. In this regard, it would be useful to use a human-in-the-loop approach that leverages the strengths of collaboration between humans and machines to produce the best results, reinforcing the importance of the synergistic work [47,48].
Conclusions
The implementation of quantitative and qualitative methods to predict criminal recidivism could be a useful tool in the field of criminal justice [3][4][5]. However, although research in this area is steadily increasing [7], its use in judicial practice is still limited [8] due to controversial views [9][10][11][12]. This systematic review shows the state of the art regarding the application of ML techniques to the risk of reoffending and highlights key points useful for criminal justice professionals to exploit these new technologies. Each method applied achieves good performance, with an average score of 0.81 for ACC and 0.74 for AUC. However, the application of artificial intelligence in this field is still a controversial topic due to significant critical issues [37]. To overcome these critical issues, it will be imperative to face and overcome a new challenge, that of making algorithms transparent and accessible, so that the application of these new technologies contributes to decisions based on the principles of transparency, impartiality and fairness. In this regard, the integration of methods from the natural and social sciences according to a systemic orientation would allow the correlation between e-tech data and the human interpretation of the same [49], keeping the human operator at the head of the human-computer system in accordance with the integrated cognitive system [50].
The use of artificial intelligence in judicial proceedings and the resulting decisionmaking processes will be a field of wide reflection among scientists, jurists and bioethicists. This systematic review is a thorough synthesis of the best available evidence, but it is also a contribution in a field that presents many ethical, deontological and legal critical issues in the state of the art.
Conflicts of Interest:
The authors declare no conflict of interest.
|
v3-fos-license
|
2023-01-19T20:37:59.183Z
|
2022-11-11T00:00:00.000
|
255972033
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "ac6f17afbb5b8def83bef010d1cc77ccd46fb569",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42414",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "f039dadff35b5fdd556de1e4706ff17e01fefec3",
"year": 2022
}
|
pes2o/s2orc
|
LncRNA LINC01232 Enhances Proliferation, Angiogenesis, Migration and Invasion of Colon Adenocarcinoma Cells by Downregulating miR-181a-5p
LncRNAs play crucial roles in the progression of colon adenocarcinoma (COAD), but the role of LINC01232 in COAD has not received much attention. The present study was designed to explore the related mechanisms of LINC01232 in the progression of COAD. LINC01232, miR-181a-5p, p53, c-myc, Bcl-2, cyclin D1, p16, Bax, VEGF, E-cadherin, vimentin, N-cadherin and SDAD1 expressions were determined by western blot and qRT-PCR. CCK-8, tubule formation, and Transwell assays were employed to detect proliferation, angiogenesis, and migration/invasion of COAD cells, respectively. The relationship between LINC01232 and miR-181a-5p was predicted by LncBase Predicted v.2, and then verified through dual luciferase reporter gene assay. According to the results, LINC01232 was highly expressed in COAD cells and enhanced proliferation, angiogenesis, migration, and invasion of COAD cells. Downregulated LINC01232 promoted expression of p53 and p16, and inhibited c-myc, Bcl-2 and cyclin D1 expressions in COAD cells, while upregulation of LINC01232 generated the opposite effects. LINC01232 was negatively correlated with miR-181a-5p while downregulated miR-181a-5p could reverse the effects of siLINC01232 on cell proliferation, angiogenesis, migration, and invasion. Similarly, miR-181a-5p mimic could also offset the effect of LINC01232 overexpression. SiLINC01232 increased the expressions of Bax and E-cadherin, and decreased the expressions of VEGF, vimentin, N-cadherin and SDAD1, which were partially attenuated by miR-181a-5p inhibitor. Collectively, LINC01232 enhances the proliferation, migration, invasion, and angiogenesis of COAD cells by decreasing miR-181a-5p expression.
pancreatic cancer by regulating TM9SF2 in pancreatic adenocarcinoma progression [12]. Similarly, Meng et al. demonstrated that LINC01232 is a promising target molecule for pancreatic cancer treatment [13]. However, the role of LINC01232 in COAD is unclear.
One way in which lncRNAs participate in cellular biological processes is through competitive binding to target micro (mi)RNAs, thereby affecting the expressions of downstream genes [14,15]. MiRNAs are also a kind of noncoding RNA, and have aroused considerable concern among researchers in recent years. According to a previous study, LINC01232 could competitively bind to miR-654-3p, and reduce its expression in ESCC cells, thus promoting the expression of HDGF [16]. Nevertheless, the regulation of miRNA by LINC01232 in colon cancer is still undefined. Through bioinformatics analysis, we found that LINC01232 could bind to miR-181a-5p. It is noteworthy that the involvement of miR-181a-5p in the progression of COAD has been confirmed [17,18]. On this basis, we hypothesized that LINC01232 affects the progression of COAD by modulating miR-181a-5p.
In the present study, we first measured the expressions of LINC01232 and miR-181a-5p in COAD and normal cells. The roles of LINC01232 and miR-181a-5p in the progression of COAD were then determined by regulating their expressions. Therefore, our study may provide a significant therapeutic target for the treatment of COAD.
qRT-PCR
Total RNA of cells was extracted by Trizol reagent (15596-018, Invitrogen), and its concentration was determined by a microplate reader (Molecular Devices, USA). Next, 1 μg RNA was reverse transcribed into cDNA, which was used for qPCR, with a First cDNA Synthesis Kit (RR037A, TaKaRa, Japan). In short, 20 μl reaction solution was prepared as follows: 2 μl cDNA, 10 μl SYBR Mix (RR820A, TaKaRa), 0.8 μl forward primer, 0.8 μl reverse primer and 6.4 μl sterile water. Then, the mixture was amplified under the following reaction conditions: 95°C for 30 s, followed by 40 cycles of 95°C for 3 s and 60°C for 30 s. The primers were obtained from Sangon (China) and RIBIBIO, and the sequences are listed in Table 1. U6 and β-actin were internal references. Finally, the CT value obtained from a 7900 Real-Time PCR System (Biosystems, USA) was calculated by the 2^-ΔΔCT method [19].

Table 1. Specific primer sequences for quantitative reverse transcription polymerase chain reaction (columns: Gene, Primer sequence, Species; the listed miR-181a-5p primers are human; individual sequences not reproduced).
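The 2^-ΔΔCT calculation referenced above can be written out in a few lines; the CT values below are hypothetical and serve only to illustrate the arithmetic of normalizing first to the internal reference gene (U6 or β-actin) and then to the control group.

```python
# Minimal sketch of the 2^(-ΔΔCT) relative expression calculation.
# All CT values are hypothetical; U6/β-actin act as internal references.

def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Fold change of the target gene in a sample relative to the control group."""
    delta_ct_sample = ct_target - ct_reference            # normalize to reference gene
    delta_ct_control = ct_target_ctrl - ct_reference_ctrl
    delta_delta_ct = delta_ct_sample - delta_ct_control   # normalize to control group
    return 2 ** (-delta_delta_ct)

# Example: a transfected sample versus an untreated control (invented CT values)
fold_change = relative_expression(ct_target=24.1, ct_reference=18.0,
                                  ct_target_ctrl=22.5, ct_reference_ctrl=18.2)
print(f"Relative expression (fold change): {fold_change:.2f}")
```

A fold change below 1 indicates lower expression than the control group, and a value above 1 indicates higher expression.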
Cell Counting Kit (CCK)-8
After transfection for 24 h, SW-620 and LOVO cells were inoculated in a 96-well plate (3000/well). Cell culture was continued for another 24 and 48 h, and the CCK-8 assay was carried out. Then, 10 μl CCK-8 reagent (C0038, Beyotime, China) was added to each well, mixed with cells gently, and placed in the incubator for 3 h. At the end of incubation, the culture plate was taken out and placed in a microplate reader to detect the light absorption at 450 nm.
Tubule Formation Assay
Fifty microliters of matrix glue (356234, Becton, Dickinson and Company, USA) was added to a 6-well plate, which was then incubated for 30 min. After transfection for 24 h, SW-620 and LOVO cells were collected and resuspended in 2 ml medium. Then, 50 μl cells were added to the pre-coagulated matrix glue for further culture in the incubator. After 12 h, the culture plate was photographed under an inverted microscope (POMEAS, China), and Image J (1.8.0, National Institutes of Health, Germany) was used to calculate the length of the tubule.
Transwell Assay
Transfected or untransfected cells were resuspended in 2 ml medium without FBS. For invasion detection, the upper chamber of the Transwell plate (3428, Corning, USA) was first uniformly coated with the matrigel (Corning) and placed in the incubator for 4 h. Then, 100 μl pre-prepared cells were put into the upper chamber, and 750 μl medium containing 10% FBS was put into the lower chamber. Following culture for 48 h, the cells of the upper chamber were gently removed with a cotton swab. Cells invading the lower chamber were immersed in 4% paraformaldehyde (P0099, Beyotime) for 30 min, and then stained by crystal violet (C0121, Beyotime) for 30 min. Next, the invading cells were photographed with an inverted microscope, and 5 fields were randomly selected to count the number of cells. The Transwell assay for migration detection was the same as above except that matrigel was not required.
Statistical Analysis
Data were expressed as the means ± SD. Differences among groups were determined statistically using analysis of variance (ANOVA). Statistical analyses were performed by the SPSS software (19.0, IBM, USA). p < 0.05 indicated statistically significant difference.
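As an illustration of the group comparison described here, the sketch below runs a one-way ANOVA on three hypothetical groups of replicate measurements using SciPy; the values are invented, and in the study itself the analysis was performed in SPSS.

```python
# Minimal sketch: one-way ANOVA across three hypothetical treatment groups.
# Values are invented; the original analysis was carried out in SPSS.
from scipy.stats import f_oneway

control      = [1.00, 0.95, 1.05]   # e.g., relative expression in siNC cells
si_linc      = [0.42, 0.48, 0.45]   # siLINC01232-transfected cells
overexpress  = [1.85, 1.92, 1.78]   # LINC01232 overexpression plasmid

f_stat, p_value = f_oneway(control, si_linc, overexpress)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> statistically significant
```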
LINC01232 Was Highly Expressed in COAD Cells
For the first time, we identified the expression of LINC01232 in COAD cells. According to Fig. 1A, the qRT-PCR detection results showed the expression of LINC01232 in COAD cells to be higher than that in normal colon fibroblasts (p < 0.05, p < 0.01, p < 0.001). As the expression of LINC01232 was the highest in SW-620 cells and the lowest in LOVO cells, the two cells were selected as subjects for further experiments. In addition, due to the high expression level of LINC01232 in SW-620 cells, we transfected siLINC01232 into SW-620 cells to downregulate the level of LINC01232, and thus unveil its role (Fig. 1B, p < 0.001). By contrast, we then upregulated LINC01232 expression in LOVO cells through transfection of LINC01232 overexpression plasmid, and the transfection efficiency was exhibited in Fig. 1C (p < 0.001).
LINC01232 Was Negatively Correlated with miR-181a-5p
LncBase Predicted v.2 was employed to predict the binding site of LINC01232 and miR-181a-5p, and the LINC01232 sequence of the binding site mutation was designed (Fig. 4A). Notably, the prediction result was verified by dual-luciferase reporter gene assay. Co-transfection of LINC01232-WT and miR-181a-5p mimic could reduce the luciferase activity of cells, while co-transfection of LINC01232-MUT and miR-181a-5p mimic had no significant effect on the luciferase activity of cells, as compared with their control (Figs. 4B and 4C). Subsequently, we found that downregulation of LINC01232 enhanced, but upregulation of LINC01232 inhibited the expression of miR-181a-5p, when compared with their control (Figs. 4D and 4E, p < 0.001).
Discussion
Colon adenocarcinoma is one of the most common cancers of the digestive tract globally, and lncRNAs have been evidenced to play vital roles in colon carcinogenesis and progression [20]. In this study, we found that the expression of LINC01232 was different in various cell lines, and abnormally high in COAD cells. We believed that this difference may stem from the different origins of cell lines, and the different phenotypes of each cell, which enable them to create the appropriate microenvironment [21]. Metastasis and invasion are the leading causes of death in COAD patients [22]. Here, we explored the relationship between LINC01232 and COAD cell migration and invasion by silencing or overexpressing LINC01232. From the results of Transwell assay, we could conclude that LINC01232 silencing hindered cell migration and invasion, and that LINC01232 overexpression had opposite effects. This suggested that LINC01232 was indeed involved in the migration and invasion of COAD cells. In addition, we also unraveled that LINC01232 overexpression enhanced the proliferation of COAD cells. Notably, abnormal cell proliferation is the basis of tumorigenesis [23]. Thus, LINC01232 has been confirmed to be implicated in the progression of colon cancer in our study.

(Figure caption: expressions of p53, Bcl-2, Bax, VEGF, vimentin, E-cadherin, N-cadherin and SDAD1 in SW-620 and LOVO cells transfected or untransfected with siLINC01232, LINC01232 overexpression plasmid, miR-181a-5p mimic and miR-181a-5p inhibitor were determined by qRT-PCR or western blot; β-actin served as an internal reference. * vs. IC+siNC, ^ vs. I+siNC, # vs. IC+siLINC01232, & vs. MC+NC, Δ vs. M+NC, † vs. MC+LINC01232; one symbol p < 0.05, two symbols p < 0.01, three symbols p < 0.001. p53, protein 53; Bcl-2, B-cell lymphoma-2; VEGF, vascular endothelial growth factor; SDAD1, SDA1 domain containing 1; Bax, Bcl-2-associated X; I, miR-181a-5p inhibitor; IC, inhibitor control; M, miR-181a-5p mimic; MC, mimic control; siNC, siRNA negative control.)
To further verify the results of the above experiments, the expressions of p53, p16, c-myc, Bcl-2, and cyclin D1 were quantitated. P53 is a tumor suppressor gene that is mutated in about 50% of malignant tumors [24]. Thus, p53 could be used as a predictor of the progression from precancerous lesions to true malignant tumors [25]. Besides, an earlier study also proved that the expression of p53 was closely related to the invasion and lymphatic metastasis of COAD [26]. Likewise, p16 gene is also a critical tumor suppressor gene, which will lead to the proliferation of malignant cells after inactivation [27]. In line with a recent study, abnormality in the cyclinD1-CDK-p16-pRb pathway is the genetic basis of tumor development [28]. Overexpression of cyclin D1 protein will enhance the binding of cyclin D1 to cyclin-dependent kinases, further stimulate cell division, promote excessive cell proliferation, inhibit cell apoptosis, and finally lead to carcinogenesis [29]. Furthermore, as a nuclear transcription factor, c-myc could promote cell proliferation, enable cells at rest to enter into proliferation, and transform cells into undergoing malignant changes [30]. Predictably, we examined the expression of Bcl-2, a wellknown apoptotic inhibitor [31]. In light of the data, overexpression of LINC01232 could decrease p53 and p16 expressions, and increase c-myc, Bcl-2 and cyclin D1 expressions. These results further confirmed that LINC01232 is involved in the proliferation, migration, and invasion of COAD cells. Furthermore, we noted that LINC01232 affected angiogenesis in colon cancer cells. It is worth mentioning that angiogenesis is vital for the rapid growth and metastasis of solid tumors [32]. Our results suggested that LINC01232 silencing attenuated, but LINC01232 overexpression strengthened, the tubule formation of COAD cells. The angiogenesis promoted by LINC01232 creates a strong condition for the development of COAD.
MiR-181a-5p, a targeted binding molecule downstream gene of LINC01232, has been demonstrated to be associated with the progression of COAD [17,18]. The targeting relationship between miR-181a-5p and LINC01232 has also been verified in our study. We also corroborated that miR-181a-5p was dramatically lowly expressed in COAD [18,33]. To further explore the roles of miR-181a-5p and LINC01232 in the progression of COAD, we simultaneously knocked down or overexpressed miR-181a-5p and LINC01232 in COAD cells. As previously mentioned, LINC01232 knockdown would dampen COAD cell proliferation, migration, invasion, and angiogenesis. Expectedly, downregulation of miR-181a-5p boosted proliferation, migration, invasion, and angiogenesis of COAD cells. In addition, downregulation/upregulation of miR-181a-5p could attenuate the effects of LINC01232 silencing/overexpression on the proliferation, migration, invasion, and angiogenesis of COAD cells. This implied that LINC01232 could enhance the proliferation, migration, invasion, and angiogenesis of COAD cells by inhibiting miR-181a-5p expression.
Then, we continued to verify the above results by detecting proliferation-, migration-, invasion-, and angiogenesis-related proteins. In addition to p53 and Bcl-2 expressions, Bax, VEGF, E-cadherin, vimentin, Ncadherin and SDAD1 expressions were also detected. Bax is a well-known pro-apoptotic protein in contrast to Bcl-2, an anti-apoptotic protein [34]. Particularly, VEGF is a functional glycoprotein with high biological activity [35]. It is the only growth factor specifically acting on vascular endothelial cells, and is most directly involved in inducing tumor angiogenesis and enhancing vascular permeability [33]. Besides, E-cadherin, vimentin, and Ncadherin are all epithelial-mesenchymal transformation (EMT)-related proteins. EMT means that under certain conditions, epithelial cells lose their epithelial phenotypic characteristics and connections to each other, acquire stromal-like characteristics and motor ability, and can leave the in situ tissue [36]. EMT can enable stationary tumor cells to acquire motor ability, making tumor metastasis possible [37]. Furthermore, SDAD1 promotes the proliferation of COAD cells by reducing apoptosis [38]. Our results corroborated that the downregulated LINC01232 could promote p53, Bax and E-cadherin expressions, while suppressing Bcl-2, VEGF, vimentin, Ncadherin, and SDAD1 expressions, but such effects were reversed by miR-181a-5p inhibitor.
In summary, this study provides an overview of the role of LINC01232 in regulating COAD cell proliferation, migration, invasion, and angiogenesis. Further discussion revealed that the influence of LINC01232 on COAD progression is achieved by downregulating miR-181a-5p level. Our results may provide a potential therapeutic target for the treatment of COAD.
|
v3-fos-license
|
2021-05-10T00:03:31.480Z
|
2021-02-01T00:00:00.000
|
234035866
|
{
"extfieldsofstudy": [
"Physics",
"Materials Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1088/1742-6596/1825/1/012098",
"pdf_hash": "6dfca3d8a8f43ba2ce371cf8264097f3038dc691",
"pdf_src": "IOP",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42420",
"s2fieldsofstudy": [
"Physics",
"Medicine"
],
"sha1": "93e17c520ceba734ed3e196a12cbb2cc94df659e",
"year": 2021
}
|
pes2o/s2orc
|
Study of the Effectiveness of Radiation Retaining Materials for the Entrance of LINAC 6 MV Radiotherapy Room
LINAC radiotherapy devices can produce scattering radiation and leak radiation from the gantry. The doses of scattering radiation result from the scattering radiations from the wall (HST) and the patient (HPS). Gantry leak radiation is gantry leak radiation through the labyrinth hall (HLS) and the one that goes directly to the entrance (HLT). These four components play a role in producing radiation doses at the entrance. The scattering radiation in the LINAC 6 MV radiotherapy installation can spread in all directions. Therefore, there is a need for a special review to examine the scattering radiation to the entrance of the room. The constituent of the radiation retaining wall also influences the reflection coefficient of the wall (α). Therefore, it is important to pay attention to the α value in evaluating the radiation dose at the labyrinth entrance. Radiation protection efforts for radiation workers and the community around the radiotherapy room need to be considered by creating a radiation barrier that can minimize the radiation received. With that in mind, the purpose of this study is to examine several possible materials for radiation shielding, especially at the entrance of the radiotherapy room. Materials used for the entrance are lead (Pb), borated polyethylene (BPE), aluminum (Al), and iron (Fe) with a thickness of 6 mm, respectively. The variation of the reflected angle used in the calculation of HST, HLS, and HPS values starts from an angle of 50° to 80°. The result showed that the most effective material for reducing the amount of radiation is lead with effectiveness of 86.79%.
Introduction
The most common external radiotherapy equipment used for cancer treatment is a linear accelerator (LINAC). One of the most frequently used types of external radiotherapy for the treatment of cancer patients is Intensity Modulated Radiation Therapy (IMRT) [1] [2]. Compensator-based IMRT is usually used in cancer treatment for the neck and head area [3].
Any equipment using an X-ray will certainly cause scattering radiation. Scattering radiation results from the interaction of radiation with the material, both in the patient and the radiation barrier. Scattering radiation can spread in all directions.
Scattering radiation dose distribution around the LINAC plane shows that the value is inversely proportional to the distance [4] [5]. Neutron scattering radiation in a radiotherapy room with the LINAC 18 MV plane also shows that the radiation distance is inversely proportional to the neutron radiation value. That also applies to other types of radiation, such as photons and electrons. The study also reviewed the value of radiation dose at the entrance of a radiotherapy room. The result shows that the radiation dose that has penetrated the entrance is smaller than the one before penetration. It proves that the entrance of a radiotherapy room also acts as a radiation barrier [5]. Another factor that can affect the scattering radiation value is the coefficient value of the radiation reflection on the wall (α). The study of the effect of the reflection coefficient of the wall on the environmental radiation in industrial facilities reported that the energy used in the X-ray plane is inversely proportional to the radiation reflection coefficient on the wall. The value of the coefficient will affect the value of environmental radiation [6].
Because the radiation produced by the LINAC will scatter around the entrance of the radiotherapy room, it is necessary to calculate the dose that reaches it and to analyze whether the total dose is still permissible. It is also necessary to determine whether an additional radiation barrier door is needed, and the type of material should be chosen based on the properties of each candidate material.
Method
This research is a follow up of the study conducted at a Radiotherapy installation that has a LINAC 6 MV plane. The data studied were in the form of environmental doses at the entrance of the radiotherapy room with an Electa Precise LINAC plane with the photon energy of 6 MV. The independent variable applied is the variation of the reflection angle associated with the value of the radiation reflection coefficient on the wall (α). The dependent variable is the radiation dose at the entrance of the radiotherapy room with and without the additional materials (H'tot and Htot).
The total radiation dose at the entrance of the radiotherapy room without a radiation barrier is calculated with the following equation [7][8]:

Htot = f·HS + HLS + HPS + HLT (1)

where Htot is the radiation dose at the entrance without the radiation barrier (Sv/week), HS is the scattering radiation from the wall (Sv/week), f is the patient transmission factor for a LINAC with photon energy of 6-10 MeV (0.25), HLS is the radiation from the gantry leak that passes through the labyrinth hall (Sv/week), HPS is the scattering radiation from the patient (Sv/week), and HLT is the radiation from the gantry leak that goes directly to the entrance (Sv/week). However, for the gantry that is parallel to the axis (Fig. 1), equation (2) adds a further term, HST, the radiation dose of the primary beam transmitted through the wall and further spread through the entrance (Sv/week) [7][8]. Figure 3 illustrates the radiation exposure resulting from the scattering from the patient (HPS) and the radiation exposure resulting from the gantry leak that goes directly to the entrance (HLT). After all the dose values are obtained, the total dose at the entrance of the radiotherapy room is calculated using equation (2) or equation (3). Furthermore, the total dose at the entrance with the additional materials is calculated using the material density values. The next step is to analyze whether the dose produced at the entrance is below the permitted value. The final step is to determine the most effective material for reducing the radiation dose at the entrance of the radiotherapy room.

Results and Discussion

Figure 4 shows that HST, HLS, and HPS have the same trend. The highest doses occur at an angle of 50° and decrease with increasing angles. The radiation dose originating from the wall scattering (HST) has the largest dose value compared to the other components producing radiation doses at the entrance.
The radiation dose of the gantry leak in the labyrinth hall (HLS) comes from the main radiation source that leaks from the gantry head, hits the hallway wall, and is reflected towards the entrance. The amount of the first reflected radiation emanating from the gantry head leak is considered as a radiation source of 1.4 MV for the 6 MV LINAC plane. This is due to the radiation attenuation at the gantry head, resulting in reduced radiation energy compared with the original energy. The patient scattering radiation dose (HPS) is the main radiation beam that hits the patient and is then attenuated, undergoing a decrease in energy. Consequently, the amount of radiation emanating from the patient scattering is considered as a radiation source with the energy of 0.5 MeV for the 6 MV LINAC plane. The attenuated radiation further hits the wall and is reflected towards the entrance.
The dose limit value used at the entrance is the dose limit for the general public, which is 1 mSv (10⁻³ Sv) per year, or 2 × 10⁻⁵ Sv per week. The value of Htot is calculated using equation (2). Table 1 shows that the radiation dose value at the entrance without the radiation barrier door (Htot) is far below the Dose Limit Value determined by the Head of Nuclear Power Supervisory Body Regulation (Perka BAPETEN) No. 3, 2013, which is 2 × 10⁻⁵ Sv per week [10].
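As a minimal sketch of this bookkeeping, the code below sums hypothetical weekly values of the dose components, assuming the form in which f multiplies the wall-scatter term and HST is added for the gantry orientation parallel to the maze axis, and compares the result with the 2 × 10⁻⁵ Sv/week public limit. All numerical inputs are invented and do not reproduce the values in Table 1.

```python
# Minimal sketch: total weekly dose at the maze entrance from its components,
# compared with the public dose limit. All input values are invented.

f = 0.25       # patient transmission factor for 6-10 MV photons
H_S  = 4.0e-7  # wall-scattered radiation (Sv/week), hypothetical
H_ST = 6.0e-7  # primary beam transmitted through the wall (Sv/week), hypothetical
H_LS = 2.0e-7  # gantry leakage scattered along the labyrinth (Sv/week), hypothetical
H_PS = 1.5e-7  # patient-scattered radiation (Sv/week), hypothetical
H_LT = 1.0e-7  # gantry leakage reaching the entrance directly (Sv/week), hypothetical

# Door dose including the H_ST term (gantry parallel to the maze axis)
H_tot = f * H_S + H_LS + H_PS + H_LT + H_ST

DOSE_LIMIT = 2.0e-5  # Sv/week, public dose limit (1 mSv per year)
print(f"H_tot = {H_tot:.2e} Sv/week; within limit: {H_tot < DOSE_LIMIT}")
```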
The total radiation dose at the entrance with the radiation barrier door (H'tot)
The radiation barrier door used for the radiotherapy installation can be coated with a material that increases the radiation dose reduction. The materials used in this study are lead, iron, aluminum, and BPE, with densities of 11.34 g/cm³, 7.87 g/cm³, 2.7 g/cm³, and 0.95 g/cm³, respectively [7]. The radiation dose calculation at the entrance with the radiation barrier door uses equation (3). The values of H'tot are shown in Table 2. TVL1 and TVLe values of the used materials are also needed; they can be calculated if the material density is known [7]. Figure 5 shows that the radiation barrier door decreases the Htot value. The ability of the radiation barrier door to decrease the radiation dose at the entrance is affected by the thickness of the door and the type of material. The thicker the radiation barrier door, the smaller the dose of radiation that penetrates the door. Likewise, the greater the density of the door material, the smaller the dose of radiation that penetrates the door. Table 2 shows that lead has the greatest density among the material types, which results in the smallest radiation dose penetrating the door, whereas BPE has the smallest density, which results in the largest radiation dose penetrating the door. However, even with BPE, the total dose produced is still below the permissible dose [10]. The study used the same door thickness of 6 mm for each material type. In general, the percentage effectiveness of the radiation barrier door in decreasing the radiation dose at the entrance can be calculated using equation (4):

Effectiveness (%) = (Htot − H'tot) / Htot × 100 (4)

The calculation using equation (4) yields the effectiveness of the materials used for decreasing the scattering radiation dose at the entrance of the LINAC room, which is presented in Table 3.

Figure 5. The comparison chart of the dose value without the additional door (Htot) and the dose value using additional doors with the variation in the door materials (H'tot).
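The door transmission itself can be sketched with the tenth-value-layer description mentioned above, assuming the common formulation in which a barrier of thickness t provides n = 1 + (t − TVL1)/TVLe tenth-value layers and a transmission factor of 10⁻ⁿ; the TVL values below are placeholders rather than the ones tabulated for the study's materials.

```python
# Minimal sketch: barrier transmission from tenth-value layers (TVL1, TVLe)
# and the resulting door effectiveness. All numerical values are placeholders.

def transmission(t_cm, tvl1_cm, tvle_cm):
    """Broad-beam transmission factor of a barrier of thickness t_cm."""
    if t_cm <= tvl1_cm:
        n_tvl = t_cm / tvl1_cm                    # not yet through the first TVL
    else:
        n_tvl = 1.0 + (t_cm - tvl1_cm) / tvle_cm  # first TVL plus equilibrium TVLs
    return 10.0 ** (-n_tvl)

H_tot = 1.5e-6                                         # door dose without barrier (Sv/week)
B = transmission(t_cm=0.6, tvl1_cm=0.5, tvle_cm=0.45)  # 6 mm door, placeholder TVLs
H_tot_prime = H_tot * B                                # door dose behind the barrier

effectiveness = (H_tot - H_tot_prime) / H_tot * 100.0  # percentage, as in equation (4)
print(f"Transmission = {B:.3f}, effectiveness = {effectiveness:.1f}%")
```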
Conclusions
This study concludes that:
1. The value of the radiation dose at the entrance without a radiation barrier door is smaller than the Dose Limit Value for the general public, which is 1 mSv (10⁻³ Sv) per year, or 2 × 10⁻⁵ Sv per week.
2. The most effective material for retaining radiation at the entrance is lead, with a dose reduction of 86.79%.
3. Other types of material that give permissible, safe doses are iron, aluminum, and BPE.
|
v3-fos-license
|
2021-08-23T13:12:27.300Z
|
2021-08-23T00:00:00.000
|
237262084
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fphys.2021.700220/pdf",
"pdf_hash": "223ab3c75dc573bc43d33b1433395c88bc588efd",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42421",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "223ab3c75dc573bc43d33b1433395c88bc588efd",
"year": 2021
}
|
pes2o/s2orc
|
Angiotensin-Converting Enzyme 2 in the Pathogenesis of Renal Abnormalities Observed in COVID-19 Patients
Coronavirus disease 2019 (COVID-19) was first reported in late December 2019 in Wuhan, China. The etiological agent of this disease is severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), and the high transmissibility of the virus led to its rapid global spread and a major pandemic (ongoing at the time of writing this review). The clinical manifestations of COVID-19 can vary widely from non-evident or minor symptoms to severe acute respiratory syndrome and multi-organ damage, causing death. Acute kidney injury (AKI) has been recognized as a common complication of COVID-19, and in many cases kidney replacement therapy (KRT) is required. The presence of kidney abnormalities on hospital admission and the development of AKI are related to a more severe presentation of COVID-19 with a higher mortality rate. The high transmissibility and the broad spectrum of clinical manifestations of COVID-19 are in part due to the high affinity of SARS-CoV-2 for its receptor, angiotensin (Ang)-converting enzyme 2 (ACE2), which is widely expressed in human organs and is especially abundant in the kidneys. A debate on the role of ACE2 in the infectivity and pathogenesis of COVID-19 has emerged: does the high expression of ACE2 promote higher infectivity and more severe clinical manifestations, or does the interaction of SARS-CoV-2 with ACE2 reduce the bioavailability of the enzyme, depleting its biological activity, which is closely related to two important physiological systems, the renin-angiotensin system (RAS) and the kallikrein-kinin system (KKS), thereby further contributing to pathogenesis? In this review, we discuss the dual role of ACE2 in the infectivity and pathogenesis of COVID-19, highlighting the effects of COVID-19-induced ACE2 depletion on renal physiology and how it may lead to kidney injury. The ACE2 downstream regulation of the KKS, which usually receives less attention, is discussed. Also, a detailed discussion of how the triad of symptoms (respiratory, inflammatory, and coagulation symptoms) of COVID-19 can indirectly promote renal injury is provided.
INTRODUCTION
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is responsible for the ongoing pandemic of coronavirus disease 2019 (COVID-19; Liu et al., 2020; Zhang and Holmes, 2020). Acute kidney disease is a complication of COVID-19; however, data on the percentage of acute kidney injury (AKI) among hospitalized COVID-19 patients are conflicting, varying from 6% in early reports to 20-36% in more recent studies (Cheng et al., 2020; Hirsch et al., 2020; Xiao et al., 2020). The incidence of AKI in COVID-19 patients is significant, and there is consistent evidence of its association with disease severity and mortality (Cheng et al., 2020; Diao et al., 2020; Hirsch et al., 2020).
The high affinity of SARS-CoV-2 for its receptor, angiotensin (Ang)-converting enzyme 2 (ACE2), may play a significant role in tissue tropism as ACE2 is widely distributed in human organs. ACE2 is particularly abundant in kidneys and can be involved in the mechanisms leading to kidney injury in COVID-19 (Hamming et al., 2004;Diao et al., 2020;Hoffmann et al., 2020).
Angiotensin-converting enzyme 2 was discovered in 2000 by two distinct research groups (Donoghue et al., 2000; Tipnis et al., 2000). ACE2 is a zinc metallopeptidase that releases a single amino acid from the carboxy-terminal of its substrates (Guang et al., 2012) and integrates two important physiological systems: the renin-angiotensin system (RAS) and the kallikrein-kinin system (KKS; Figure 1; Vickers et al., 2002; Guang et al., 2012). In the RAS, ACE2 cleaves Ang II to form Ang 1-7 (Figure 1A; Vickers et al., 2002). Ang II is a peptide that exerts vasoconstrictor, anti-natriuretic, antidiuretic, inflammatory, oxidant, and fibrotic effects through its type 1 receptor (AT1), and is known to be elevated in hypertension, kidney diseases, and metabolic disorders (Chappell, 2016; Muñoz-Durango et al., 2016). Ang 1-7 has opposite actions, exerting vasodilatory, natriuretic, diuretic, anti-inflammatory, antioxidant, and anti-fibrotic effects by binding to its receptor, Mas (Santos, 2014). In addition, ACE2 can act on Ang I to release Ang 1-9, which is further converted to Ang 1-7 by angiotensin-converting enzyme (ACE; Figure 1A). However, ACE2 has a higher catalytic efficiency for the hydrolysis of Ang II than of Ang I (Vickers et al., 2002). Thus, the main biological function of ACE2 is to counterbalance the deleterious effects of the ACE/Ang II/AT1 axis of the RAS (Santos, 2014).
The identification of ACE2 as the receptor of SARS-CoV-2 has prompted a debate on how the ACE2 can influence the course of the COVID-19 (Hoffmann et al., 2020;Lanza et al., 2020;Verdecchia et al., 2020). Due to wide distribution of ACE2 in humans, the higher expression of this enzyme may enhance infectivity (Pinto et al., 2020). However, the depletion of the biological functions of ACE2 due to the internalization of the receptor along with SARS-CoV-2, leads to impairment of RAS and KKS, which can contribute to COVID-19 pathogenesis (Lanza et al., 2020;Verdecchia et al., 2020).
In this context, the kidneys are a potential target for SARS-CoV-2, as podocytes and proximal tubule cells abundantly express ACE2, and their role in urine filtration allows contact with circulating viruses (Hamming et al., 2004;Cheng et al., 2020;Pan et al., 2020). In addition, the kidneys are particularly sensitive to ACE2 downregulation, which is associated with several kidney diseases (Mizuiri and Ohashi, 2015). Elucidating the mechanisms responsible for renal involvement in COVID-19 and determining the immediate and long-term impacts on kidney function are necessary for achieving better patient management and developing therapeutic strategies to eliminate or minimize kidney damage.
SARS-COV-2 AND COVID-19 BACKGROUND
Three major outbreaks have been caused by severe acute respiratory syndrome coronaviruses (SARS-CoVs) in the last two decades: severe acute respiratory syndrome (SARS-CoV) in 2002, Middle East respiratory syndrome (MERS-CoV) in 2012, and the ongoing COVID-19 pandemic that began in 2019, caused by the novel coronavirus SARS-CoV-2. There is great epidemiological concern regarding these viral agents due to their transmissibility and mortality (Zhang and Holmes, 2020).
The new coronavirus, SARS-CoV-2, was identified in late December 2019 in Wuhan, China (Zhu et al., 2020). As of July 2021, the COVID-19 pandemic is still ongoing and has affected more than 189,000,000 people and caused more than 4,000,000 deaths worldwide (Johns Hopkins University, 2021a). Currently, no treatment has proven to be safe and efficient, despite the significant number of clinical trials to repurpose approved drugs or to develop new drugs specific for COVID-19 treatment (Senger et al., 2020). Recently, different vaccines have been approved for emergency use against COVID-19 by federal agencies: the BioNTech-Pfizer, Moderna, and Janssen vaccines were approved by the FDA in North America. In Brazil, for example, the CoronaVac, AstraZeneca, and Janssen vaccines were approved for emergency use. More than 3 billion doses of vaccines against SARS-CoV-2 have been administered worldwide (Johns Hopkins University, 2021a).
SARS-CoV-2 Structure and Cell Entry Mechanism
Severe acute respiratory syndrome coronavirus 2 is a single positive-strand RNA virus belonging to the betacoronavirus B lineage. Structurally, SARS-CoV-2 comprises a spike (S) protein, a membrane protein, an envelope protein, nucleocapsids, hemagglutinin-esterase dimers, and its genetic material (Figure 2A; Walls et al., 2020).
The S protein is a transmembrane glycoprotein that can be divided into S1 and S2 subunits. The S1 subunit contains the receptor binding domain (RBD), which is the most variable part of the coronavirus genome and is responsible for the high affinity of SARS-CoV-2 for human ACE2, which acts as a receptor for virus internalization (Hoffmann et al., 2020; Wang et al., 2020). The RBD has a dynamic position; in SARS-CoV-2, it is found predominantly lying down, which allows the virus to evade the immune system; however, the RBD only interacts with ACE2 when standing up. Since this conformation is less frequent, the higher affinity may be the result of an adaptive change (Shang et al., 2020).
Proteolytic activation of S is a crucial step for membrane fusion; the process promotes conformational changes that release sufficient energy to overcome the lipid bilayer fusion energy barrier (Millet and Whittaker, 2015). SARS-CoV-2 fusion mechanisms have been proposed based on current evidence and previous studies of MERS-CoV and SARS-CoV. To catalyze the fusion process, the SARS-CoV-2 S protein must be preactivated (primed) by proteolytic proteases; there are two cleavage points in S: the first is found between S1 and S2 and is a polybasic furin cleavage site, while the second is found in the S2 sequence and can be cleaved by multiple proteases, including trypsin, cathepsin L, and transmembrane serine protease 2 (TMPRSS2; Figure 2B; Hoffmann et al., 2020; Shang et al., 2020; Walls et al., 2020).
There are two possible pathways for fusion. The plasma membrane route is possible if exogenous or transmembrane proteases, such as trypsin and TMPRSS2, are present. In MERS-CoV, this pathway only occurs if S is cleaved by furin-like proteases at the link between the S1 and S2 subunits during biosynthesis (Figure 2C; Tang et al., 2020). Otherwise, S1/S2 are cleaved after S binds to ACE2, activating a second entry pathway in which the virus is endocytosed (Figure 2D). Within the endosome, cathepsin L can be activated by low pH and cleaves S at the S2 cleavage site, triggering fusion of the virus with the endosomal membrane (Shulla et al., 2011; Tang et al., 2020). Independent of the pathway by which the viral genome reaches the cytosol, copies of the virus genome are transcribed in the cytoplasm, and structural proteins are synthesized in the intermediate compartment between the endoplasmic reticulum and Golgi apparatus, which allows S to be cleaved by furin during its biosynthesis depending on the host cells (Walls et al., 2020). In fact, the presence of TMPRSS2 has been reported as a limiting factor for SARS-CoV-2 cell entry. Further, the presence of the polybasic furin cleavage site in S and the wide distribution of furin-like proteases and cathepsin L in humans are features that contribute to enhanced virus fusion (Hoffmann et al., 2020; Walls et al., 2020).
FIGURE 1 | Angiotensin-converting enzyme 2 has a catalytic role in RAS and KKS. (A) Renin converts the precursor, angiotensinogen, into angiotensin I. In a classic pathway, angiotensin I is cleaved by ACE to form Angiotensin II. ACE2 can biosynthesize angiotensin 1-7 by two distinct pathways: acting directly on angiotensin II or alternatively converting angiotensin I into angiotensin 1-9 that is further cleaved by ACE, generating angiotensin 1-7. (B) The precursor kininogen is cleaved by kallikrein to form the active peptide, bradykinin that is rapidly degraded by ACE, or in an alternative pathway, can be converted to desArg 9 bradykinin by CPM and CPN. ACE2 can inactivate desArg 9 bradykinin. ACE, angiotensin-converting enzyme; ACE2, angiotensin-converting enzyme 2; CPM, carboxypeptidase M; CPN, carboxypeptidase N; KKS, kallikrein-kinin system; and RAS, renin-angiotensin system.
Tissue tropism is influenced by multiple factors, with receptor expression, distribution, and attachment to receptors being fundamental aspects that allow the virus to enter a variety of host cells and target different organs (Maginnis, 2018). The high affinity of the SARS-CoV-2 RBD for ACE2, in conjunction with the wide distribution of ACE2 and its colocalization with TMPRSS2, which allows S2 subunit release and fusion to host cells, may explain the broad clinical manifestations of COVID-19, ranging from subclinical symptoms to severe acute respiratory syndrome and multiple organ damage (Shulla et al., 2011; Hoffmann et al., 2020; Walls et al., 2020; Yang et al., 2020). The presence of a polybasic furin cleavage site in the S protein of SARS-CoV-2 also expands tropism because furin-like proteases are near-ubiquitous and widely distributed in human cells (Walls et al., 2020).
Renal Abnormalities Related to COVID-19
In relation to the clinical manifestations of COVID-19, 30% of cases can be asymptomatic; most of the symptomatic cases (approximately 86%) are characterized by mild to moderate symptoms (Li et al., 2020b; Yang et al., 2020). However, in 14% of the symptomatic cases, more severe symptoms are present, and hospitalization and oxygen therapy are required (Li et al., 2020b; Yang et al., 2020). Multiple organ damage is likely to be a complication among severely ill patients. The lungs are the most affected organ, but AKI, liver dysfunction, and cardiac injury are also commonly seen (Guo et al., 2020; Yang et al., 2020). The global case fatality rate of COVID-19 is estimated to be approximately 2.2% (Johns Hopkins University, 2021b). However, among critically ill patients, the rate increases to 61.5%. Mortality risk is associated with age, presence of underlying diseases, and development of organ damage during COVID-19 (Li et al., 2020b; Yang et al., 2020). Abnormalities related to impaired kidney function are commonly seen upon admission of COVID-19 patients, with elevated serum creatinine and serum urea present in 14.4 and 13% of patients, respectively (Cheng et al., 2020; Hirsch et al., 2020). A reduced glomerular filtration rate (GFR) has also been observed in 13.1% of COVID-19 patients on admission (Cheng et al., 2020). Proteinuria and hematuria are relatively common, affecting 28-43.9% and 19-26.7% of patients, respectively; these patients were predominantly male and elderly (Chaudhri et al., 2020; Cheng et al., 2020; Nadim et al., 2020). The high incidence of proteinuria and hematuria among COVID-19 patients raises concern because proteinuria is associated with the development of AKI and higher mortality (Chaudhri et al., 2020; Cheng et al., 2020), and patients with hematuria on admission are more prone to intensive care unit (ICU) admission, invasive mechanical ventilation, and death (Chaudhri et al., 2020).
FIGURE 2 (legend, continued) | The short pathway is possible if S has been primed by furin during biosynthesis and in the presence of TMPRSS2 and/or trypsin that cleave S1; (II.) the virus fuses with the plasma membrane, (III.) releasing its genetic material into the cytosol; (IV.) RNA transcription and replication occur in the cytosol, while the structural proteins are biosynthesized in the endoplasmic reticulum and Golgi apparatus; at this point, furin can prime S at S1/S2; and (V.) new genetic material is encapsulated by envelope and structural proteins, generating new virions (VI.) that will be released from the host cell. (D) (I.) SARS-CoV-2 recognizes its receptor, ACE2. If S has not been primed, a second pathway is activated and (II.) the virus is endocytosed; (III.) owing to the decreased pH, cathepsin L can be activated and cleaves S1, promoting fusion of SARS-CoV-2 with the endosome membrane and (IV.) the release of viral genetic material; (V.) RNA transcription and replication occur in the cytosol while biosynthesis of the structural proteins occurs in the endoplasmic reticulum and Golgi apparatus. In this representation, there is no furin to prime S at S1/S2; and (VI.) the genetic material is encapsulated by envelope and structural proteins, generating new virions (VII.) that will be released from the host cell. ACE2, angiotensin-converting enzyme 2; S, spike protein; SARS-CoV-2, severe acute respiratory syndrome coronavirus 2; and TMPRSS2, transmembrane serine protease 2.
The first reports described that AKI affects only 5-6% of all hospitalized COVID-19 patients (Cheng et al., 2020); however, in more recent studies, the reported incidence of AKI has ranged from 19 to 57% (Chan et al., 2020; Hirsch et al., 2020; Xiao et al., 2020; Nugent et al., 2021). These differences may be related to the main target population and to the different SARS-CoV-2 variants. At the beginning of the spread of COVID-19, the disease was primarily concentrated in Asia, whereas Western countries are currently the epicenter of COVID-19. This shift may impact the severity of COVID-19 and the incidence of AKI, which has increased mainly among patients admitted to the ICU, where COVID-19-associated AKI affects over 60% of patients (Diao et al., 2020; Dudoignon et al., 2020).
The incidence of AKI as an outcome of COVID-19 is relatively high; 19% of patients require kidney replacement therapy (KRT), and the presence of AKI is a risk factor for mortality (Chawla et al., 2017; Chan et al., 2020). Reported mortality rates among patients who develop AKI are conflicting. A prospective cohort study of 701 COVID-19 patients from a hospital in Wuhan, China reported that the mortality rate reached 91.7% among those who developed AKI (Cheng et al., 2020). In a more recent study of 5,449 patients from hospitals within the metropolitan region of New York, AKI was present in 1,993 patients. Among these patients with AKI, 519 (26%) were discharged, 698 (35%) died, and 777 (39%) remained hospitalized at the time of publication (Hirsch et al., 2020). There are limited reports on renal recovery in survivors of COVID-19-associated AKI. The reported rate of full recovery at discharge and during post-hospital follow-up ranges from 65 to 82.4% (Chan et al., 2020; Nugent et al., 2021; Stockmann et al., 2021). However, a comparison between COVID-19-related AKI and general AKI, after adjustments, shows that GFR declines faster in patients with COVID-19-related AKI; these patients are also more likely to require KRT than patients with AKI who tested negative for COVID-19. Further, recovery is slower, and the full recovery rate is lower, in patients with COVID-19-associated AKI (Nugent et al., 2021).
DUAL ROLE OF ACE2 IN SARS-COV-2 INFECTIVITY AND PATHOGENESIS
Angiotensin-converting enzyme 2 is recognized principally for its actions counterbalancing the deleterious effects of the ACE/Ang II/AT1 axis of the RAS. However, the enzyme also participates in the KKS, in which it is responsible for inactivating desArg 9 BK, and it has functions beyond its catalytic actions. Additionally, ACE2 plays a role in the transport of amino acids in the kidneys and intestine and participates in pancreatic insulin secretion (Guang et al., 2012; Hashimoto et al., 2012; Santos et al., 2019).
The discovery of ACE2 as the receptor of SARS-CoV and now SARS-CoV-2 has led to a debate regarding the role of ACE2 in COVID-19: (i) whether ACE2 upregulation enhances SARS-CoV-2 infectivity and can be related to more severe cases and (ii) whether SARS-CoV-2 binding reduces ACE2 bioavailability, which causes impairment of RAS that is associated with a more severe disease.
Upregulation of ACE2 May Enhance SARS-CoV-2 Infectivity
In SARS-CoV infection, it has been reported that the overexpression of ACE2 enhanced viral entry into cells, and that treating mice with anti-ACE2 antibodies blocked viral entry. In addition, ACE2 knockout mice have milder SARS-CoV outcomes than wild-type animals (Verdecchia et al., 2020).
A systematic review of lung transcriptome analysis comparing healthy non-smokers with smokers, chronic obstructive pulmonary disease (COPD), and pulmonary arterial hypertension volunteers, revealed an increase in ACE2 expression in patients with lung diseases that are more likely to develop severe COVID-19 (Pinto et al., 2020). It is worth noting that increased expression of the ACE2 gene in lung disease is associated with an increase in ADAM-10 expression, which sheds ACE2 in the pulmonary epithelium (Pinto et al., 2020).
A similar study did not find a difference between COPD patients and healthy patients but showed that smoking causes an acute increase in ACE2 expression and that SARS-CoV promotes ACE2 expression in infected cells (Li et al., 2020a). However, gene and actual protein expression in the tissue are not always correlated. In lipopolysaccharide (LPS)-induced lung injury, ACE2 protein expression and activity are decreased, despite a rapid increase in its mRNA expression. The disparity between protein and mRNA levels can imply a feedback response or post-transcriptional modulation of ACE2 by LPS (Sodhi et al., 2018). Local attenuation of ACE2 functions due to shedding and post-transcriptional internalization after LPS stimulation has been reported (Sodhi et al., 2018).
FIGURE 3 (legend, continued) | Consequently, (IIa.) surfactant production diminishes, leading to an increase in surface tension and alveolar collapse. Additionally, (IIb.) SARS-CoV-2 infection associated with ACE2/Ang 1-7 downregulation and ACE/Ang II and desArg 9 BK/B1 exacerbation promotes the (III.) activation of the innate and adaptive immune response and complement system, leading to recruitment of leukocytes and release of cytokines, chemokines, eicosanoids, and leukotrienes. The complement system and eicosanoids promote (IVa.) coagulation disorders, and the leukotrienes, in association with increased surface tension, contribute to (IVb.) bronchoconstriction. Moreover, the local inflammation culminates in (IVc.) increased vascular permeability, vasodilation, and endothelial dysfunction, thereby enhancing leukocyte recruitment and leading to (Va.) exacerbated local inflammation, which also contributes to (IVb.) bronchoconstriction and edema. Furthermore, bronchoconstriction and edema lead to (Vb.) hypoxia and ROS generation, which contributes to (VIa.) cardiorespiratory alterations that increase metabolic demand. Most importantly, ROS generation feeds the cycle of (Va.) enhanced inflammation. This inflammatory state leads to the (VIb.) cytokine storm and, due to (IVc.) enhanced vascular permeability, viral particles, leukocytes, ROS, and cytokines can reach the blood stream, ultimately causing (VII.) systemic inflammation, (VIII.) sepsis, and consequently (IX.) hypotension. Depending on the patient's health status, the steps in the pathophysiological cascade that are activated, and the intensity of the immune response, the clinical manifestations can vary from asymptomatic and mild symptoms, such as fever, cough, and myalgia, to severe symptoms, including acute respiratory distress and multi-organ damage. In this scenario, exacerbated inflammation, coagulation disorders, hypoxemia, and hypotension contribute to acute kidney injury (AKI). ACE, angiotensin-converting enzyme; ACE2, angiotensin-converting enzyme 2; Ang 1-7, angiotensin 1-7; Ang II, angiotensin II; B1, kinin receptor type 1; COVID-19, coronavirus disease 2019; desArg 9 BK, desArg 9 bradykinin; PM1, pneumocytes type I; PM2, pneumocytes type II; ROS, reactive oxygen species; and SARS-CoV-2, severe acute respiratory syndrome coronavirus 2.
SARS-CoV-2 May Reduce ACE2 Bioavailability and Downregulate Its Biological Functions
In the lungs, ACE2 is mainly expressed in pneumocyte type 2 (PM2) and macrophages. The PM2 are crucial cells for lung function as they are responsible for producing alveolar surfactant and are the progenitor cells for type 1 pneumocytes that perform gas exchanges and comprise 95% of the pneumocyte cells (Verdecchia et al., 2020). The binding and fusion of SARS-CoV and SARS-CoV-2 to cells induce a reduction in bioavailability of the ACE2 receptor, which is internalized with the virus (Figure 3; Kuba et al., 2005;Verdecchia et al., 2020). Depletion of ACE2 promotes the imbalance of local RAS with an increase in the Ang II/Ang 1-7 ratio, which promotes proinflammatory responses.
Although most of the biological functions of ACE2 are related to Ang II and Ang 1-7 balance, desArg 9 BK plays an important role in inflammatory processes and is modulated by ACE2 (Sodhi et al., 2018). Interestingly, IL-1B and TNF-α can induce B1 expression. B1 activation promotes the release of chemokines, increases the expression of IL-1B and monocyte chemoattractant protein 1 (MCP-1), and enhances the recruitment and infiltration of neutrophils (Figure 3; Mahmudpour et al., 2020).
Several experimental and clinical models of lung inflammation have reported beneficial roles of the ACE2/Ang 1-7/Mas axis, including reduced infiltration of lymphocytes and neutrophils, reduction of perivascular and peri-bronchiolar inflammation, and decreased production of IL-6 and TNF-α (Verdecchia et al., 2020). In an acid aspiration experimental model of acute lung injury, ACE2 knockout animals had more severe inflammatory lesions, which were attenuated by recombinant ACE2 and administration of an angiotensin II receptor blocker (ARB; Sodhi et al., 2018; Verdecchia et al., 2020). The isolated S protein of SARS-CoV could induce ACE2 downregulation with a concomitant increase in Ang II levels. Further, ARBs were found to reduce severe inflammatory pulmonary lesions (Kuba et al., 2005). In LPS-induced lung injury, ACE2 activity was reduced; this reduction promoted the accumulation of desArg 9 BK, overexpression of B1, and enhanced neutrophil recruitment and infiltration (Sodhi et al., 2018). In addition, higher circulatory levels of Ang II were observed in COVID-19 patients compared with control subjects, and the levels of Ang II were found to correlate with lung injury (Mahmudpour et al., 2020).
Additionally, the groups at risk of developing more severe COVID-19 and of high mortality include older adults, males, and people with chronic diseases, including diabetes, hypertension, and cardiovascular and kidney diseases (Williamson et al., 2020; Yang et al., 2020). Interestingly, in all these groups, impairment of the RAS with reduced ACE2/Ang 1-7/Mas regulation and/or enhanced ACE/Ang II/AT1 actions has been reported. Therefore, the depletion of ACE2 due to SARS-CoV-2 infection can contribute, at least in part, to the triad of hematological, pulmonary, and inflammatory outcomes of COVID-19 (Lanza et al., 2020; Verdecchia et al., 2020).
POSSIBLE MECHANISMS INVOLVED IN KIDNEY INJURY IN COVID-19 PATIENTS
The exact mechanism of kidney involvement in COVID-19 is unknown and might be multifactorial. Indirect injury due to systemic inflammation, hypoxemia, shock, hypotension, and systemic imbalance of RAS associated with SARS-CoV-2 infection is possible (Figure 3). Additionally, the SARS-CoV-2 can infect renal cells causing direct injury and subsequent impairment of intrarenal RAS that may be a major contributor to acute and long-term kidney injury (Figure 4).
SARS-CoV-2 Indirect Effects on Kidneys: How the Inflammatory, Coagulation, and Respiratory Symptoms Can Lead to Kidney Injury
Acute kidney injury is associated with intrarenal and systemic inflammatory responses (Rabb et al., 2016). After an insult, morphological and metabolic alterations occur in tubular epithelium and endothelial cells, inducing the synthesis and release of cytokines, chemokines, and leukocyte infiltration (Akcay et al., 2009). Inflammation plays a major role in the onset and progression of AKI (Akcay et al., 2009;Rabb et al., 2016). In fact, there is a cytokine profile associated with AKI, and IL-18 and IL-6 are considered biomarkers for AKI (Akcay et al., 2009).
After SARS-CoV-2 enters cells and replicates, especially in type II pneumocytes in the lungs, innate and adaptive immune responses are activated (Miyazawa, 2020; Figure 3). Complement activation is an important feature of the innate immune response, which is the primary defense line to be activated after infection (Kenawy et al., 2015). Complement activation is strongly influenced by a pH below 7.1; inflammation decreases pH locally, allowing its activation. In addition, some fluid compartments, such as the lumen of renal tubules, can naturally present a pH lower than 7.1 (Kenawy et al., 2015). In healthy subjects, the leakage of proteins, including complement proteins, to renal tubules is minimal; however, in COVID-19 patients, proteinuria is a common clinical finding (Cheng et al., 2020; Nadim et al., 2020). In fact, complement system activation and deposition of the complement component C5b-9 in the renal tubules of COVID-19 patients have been reported (Batlle et al., 2020; Benedetti et al., 2020). The complement system promotes tubulointerstitial damage and is a major player in renal injury, especially during acidosis (Kenawy et al., 2015), which can occur during SARS-CoV-2 infection due to hypoxia. In fact, 12% of COVID-19 patients present with acidosis, and the reported percentage among those with AKI is 23% (Mohamed et al., 2020; Nadim et al., 2020).
The complement system is tightly cross-linked with the coagulation system (Kenawy et al., 2015; Figure 3). Coagulation disorders are commonly observed in severely ill patients with COVID-19, and disseminated intravascular coagulation, ischemic limbs, strokes, and venous thromboembolism have been consistently reported (Liao et al., 2020; Mohamed et al., 2020). In addition, thrombocytopenia, prolonged prothrombin time, and higher D-dimer levels have been observed and associated with death in COVID-19 patients (Liao et al., 2020).
Notably, increased coagulation factors are correlated with decreased renal function in subjects without cardiovascular disease or chronic kidney disease (CKD; Dekkers et al., 2018).
Lung involvement in COVID-19 can result in hypoxia (Figure 3). The kidneys are particularly sensitive to changes in oxygen delivery. Persistence of renal hypoxia leads to the activation of intrarenal cellular mechanisms involved in renal fibrosis and vasoconstriction, which in turn enhances renal hypoxia (Haase, 2013;Fu et al., 2016). This cycle contributes to the development of AKI and the progression to CKD (Fu et al., 2016).
Hypotension can be an outcome of SARS-CoV-2 infection and can be associated with hemodynamic instability, shock, or sepsis (Shetty et al., 2020; Figure 3). In addition, orotracheal intubation presents a potential risk of hypotension (Smischney et al., 2016). In a recent report, most COVID-19 patients admitted to the ICU had preserved hemodynamics unless heart failure, sepsis, or thrombotic events were present (Corrêa et al., 2020; Hanidziar and Bittner, 2020). In contrast, a case series in the Seattle region reported that the most common cause of ICU admission was hypoxemia or hypotension, and the mortality rate among these patients was extremely high (Bhatraju et al., 2020). The severity and duration of hypotension are closely associated with the risk of AKI development in ICU patients (Lehman et al., 2010).
FIGURE 4 | Direct effects of SARS-CoV-2 on kidneys: infection and disruption of the downstream mechanisms regulated by ACE2. It is likely that SARS-CoV-2 infects the kidney by directly targeting the proximal tubule cells and podocytes. The infection can lead to depletion of ACE2 and its biological functions at the intrarenal level, which may lead to exacerbated actions of the ACE/Ang II/AT1 axis of the RAS and the desArg 9 BK/B1 axis of the KKS. This impairment leads to reduced renal blood flow, GFR, diuresis, and natriuresis, together with increased vasoconstriction. Oxidative stress is enhanced due to a decrease in NO levels, interfering with the balance between prostacyclin and thromboxane A2. Furthermore, fibrosis is enhanced in response to TGF-β1 and endothelin-1. Finally, inflammation is upregulated, along with augmented levels of chemokines, cytokines, and leukocyte recruitment. ACE2, angiotensin-converting enzyme 2; AT1, angiotensin II receptor type 1; BK, bradykinin; desArg 9 BK, desArg 9 bradykinin; B1, kinin receptor type 1; B2, kinin receptor type 2; GFR, glomerular filtration rate; KKS, kallikrein-kinin system; and RAS, renin-angiotensin system.
In conclusion, COVID-19 presents a triad of respiratory, inflammatory, and coagulation symptoms that are frequently present in severely ill patients and that can indirectly promote AKI.
SARS-CoV-2 Direct Effects on Kidneys: Imbalance of Intrarenal RAS and Disturbance of Kidney Homeostasis
In the kidneys, ACE2 is expressed in podocytes, mesangial cells, parietal epithelium of Bowman's capsule, brush border proximal cells, and collecting duct cells (Hamming et al., 2008;Aragão et al., 2011). Single-cell RNA sequencing analysis of different kidney cells enabled the identification of a relatively high co-expression of ACE2 and TMPRSS2 in podocytes and proximal straight tubule cells (Pan et al., 2020). Furthermore, the kidney is one of the organs with the highest ACE2 expression and activity (Hamming et al., 2004).
Post-mortem analysis of the kidneys of COVID-19 patients revealed the accumulation of SARS-CoV-2 antigens in the renal epithelial tubules, suggesting a direct infection of the kidneys by the virus (Diao et al., 2020).
Podocytes and proximal straight tubule cells are strong candidates for SARS-CoV-2 host cells in the kidneys, as they participate actively in urine filtration, excretion, and reabsorption. In fact, the virus was detected in urine samples of patients with severe COVID-19 (Diao et al., 2020). In addition, podocytes are extremely sensitive to bacterial and viral infections and podocyte injury leads to proteinuria, a common laboratory finding in COVID-19 patients even upon admission (Cheng et al., 2020;Pan et al., 2020). Kidney autopsy findings in patients with SARS-CoV-2 confirmed the presence of viral particles in the podocytes accompanied by morphological alterations, foot process effacement, vacuolation, and detachment from the glomerular basement membrane (Su et al., 2020).
Considering the direct infection of the kidneys with SARS-CoV-2, ACE2 depletion may be an important factor driving kidney injury. This can result in an imbalance of intrarenal ACE2/Ang 1-7/Mas and ACE/Ang II/AT1 arms of RAS. Additionally, ACE2 depletion may impact the KKS, downregulate BK/B2, and exacerbate desArg 9 BK/B1 actions. These components are involved in renal hemodynamic homeostasis and the molecular mechanisms involved in kidney diseases, including vasoconstriction, oxidative stress, inflammation, and fibrosis (Figure 4). A recent study reported the super activation of RAS in patients with COVID-19 and AKI; the levels of renin and aldosterone were increased and correlated with reduced sodium excretion (Dudoignon et al., 2020). The presence of AKI contributed to a 10-fold increase in mortality in this study (Dudoignon et al., 2020). Thus, we discuss the impact of ACE2 depletion on the kidneys.
Angiotensin-converting enzyme 2 can directly antagonize ACE/Ang II/AT1 actions by cleavage of Ang II with subsequent formation of Ang 1-7. In addition, ACE2 can modulate ACE/Ang II/AT1 antagonism downstream through Ang 1-7, which can downregulate AT1 in vascular smooth muscle cells, inhibit ACE in internal mammary arteries, activate AT2, and promote vasodilation (Roks et al., 1999; Clark et al., 2001; Figure 4). This counterregulatory effect is important because superactivation of the ACE/Ang II/AT1 axis is related to deleterious effects within the kidneys (Yang and Xu, 2017).
Also, Ang II can mediate profibrotic actions by directly activating the transcription and synthesis of TGF-β, particularly TGF-β1. Ang II has also been proposed to upregulate the TGF-β receptor (Rüster and Wolf, 2011). TGF-β1 is a profibrotic and anti-inflammatory cytokine that enhances fibronectin and collagen type I mRNA expression, subsequently promoting extracellular matrix synthesis and deposition in the interstitial space (Rüster and Wolf, 2011; Macconi et al., 2014). Furthermore, Ang II induces the expression of other profibrotic factors, including endothelin-1, plasminogen activator inhibitor-1, matrix metalloproteinase-2, and its tissue inhibitor (Rüster and Wolf, 2011).
In contrast, the peptide formed by the cleavage of Ang II by ACE2, Ang 1-7, promotes counterregulatory action at the intrarenal level (Figure 4). Ang 1-7 mediates vasodilation indirectly by stimulating the actions of BK on B2 and promotes the release of prostacyclin and NO (Schindler et al., 2007;Schinzari et al., 2018). There are controversial reports on the effects of Ang 1-7 on water balance. A diuretic and natriuretic effect has been demonstrated in several animal models and in vitro studies (Pinheiro and Simões E Silva, 2012). In contrast, in water-loaded animals, the anti-natriuretic and anti-diuretic effects of Ang 1-7 have been reported (Pinheiro and Simões E Silva, 2012).
Ang 1-7 has protective effects in the kidneys, including antioxidant, anti-inflammatory, and anti-fibrotic effects (Figure 4). These effects of Ang 1-7 are partially due to the inhibition of NFκB signaling and reduction of the levels of chemokines and cytokines, such as MCP-1, TNF-α, IL-1β, ICAM-1, and VCAM-1 (Khajah et al., 2016; Choi et al., 2020). Ang 1-7 interferes with TGF-β1 signaling through Smad2/3 and Smad4, a mechanism that otherwise leads to enhanced synthesis of collagen type I, fibronectin, and α-SMA (Macconi et al., 2014; Choi et al., 2020). Ang 1-7 may also exert antioxidant effects through the stimulation of NO. At physiological concentrations, NO abates oxidative stress by neutralizing some species of ROS. In addition, NO can protect cells from death induced by ROS, such as hydrogen peroxide, alkyl hydroperoxides, and xanthine oxide (Wink et al., 2001). Not all ACE2 biological functions are attributed to the antagonism of the ACE/Ang II/AT1 axis, as previously mentioned. ACE2 can modulate BK downstream, stimulating its effects through B2 and increasing BK half-life by inhibiting the ACE/Ang II/AT1 axis (Cyr et al., 2001; Schindler et al., 2007; Schinzari et al., 2018). Furthermore, ACE2 inactivates the B1 agonist, desArg 9 BK, downregulating its actions (Cyr et al., 2001). Thus, ACE2 is an important regulator of the KKS.
Binding of BK to B2 rapidly stimulates the activity of endothelial nitric oxide synthase and prostacyclin synthesis to enhance the release of NO, prostacyclin, and endothelial-derived hyperpolarizing factor, which culminates in a potent and rapid vasodilator response (Figure 4; Hornig and Drexler, 1997; Kakoki and Smithies, 2009). Moreover, intrarenal BK effects on renal hemodynamics are mediated by B2 activation and include augmented renal blood flow, increased GFR, and diuresis (Zhang et al., 2018). Additionally, BK may modulate oxidative stress and senescence, as impairment of the KKS is associated with oxidative damage and mitochondrial dysfunction (Kakoki and Smithies, 2009). BK plays an important role in the inflammatory response, mediating vasodilation and increasing vascular permeability. BK binding to B2 promotes IL-6 expression (Golias et al., 2007).
Under physiological conditions, most of the KKS actions are mediated by B2 activation, whereas B1 is induced during inflammatory processes (Klein et al., 2010). Infusion of low concentrations of desArg 9 BK in anesthetized rats decreased renal blood flow and GFR, which was associated with increased renal vascular resistance, effects opposite to those of BK (Schanstra et al., 2000; Zhang et al., 2018). B1 activation is associated with enhanced transcription of NFκB, which has a positive feedback on B1 expression and promotes the release of cytokines and chemokines, ultimately resulting in the accumulation of leukocytes (Klein et al., 2010).
In conclusion, ACE2 regulates downstream components of the RAS and KKS, as well as prostaglandins and NO, all of which are involved in renal pathophysiology. Thus, ACE2 depletion at the renal level has a significant impact on kidney function and on the molecular mechanisms involved in kidney injury.
CONCLUSION
Currently, AKI is recognized as a frequent complication of COVID-19. Considering the poor prognosis associated with the presence of AKI in COVID-19 patients, elucidating the mechanisms involved in SARS-CoV-2-induced kidney injury is fundamental for developing strategies to better manage these patients. Furthermore, there is no available information on the impact of COVID-19 on long-term kidney function. The possibility of AKI evolving to CKD is a concern, as patients with AKI are more prone to develop CKD and end-stage renal disease, severe conditions linked to high personal, societal, and economic burdens (Chawla et al., 2017).
Currently, there is insufficient information regarding AKI recovery in discharged COVID-19 patients. Most patients can recover from AKI stage 1, while those who progress to stages 2 and 3 have a high mortality rate. COVID-19-related AKI seems to contribute to a faster decline in GFR, a higher rate of KRT requirement, and slower complete recovery of kidney function (Chan et al., 2020; Xiao et al., 2020; Nugent et al., 2021). Such findings highlight the importance of monitoring kidney function in COVID-19 survivors who presented with AKI or kidney abnormalities. Clinical interventions during the time frame between AKI and the possible establishment of CKD are essential to alter the course of the disease (Chawla et al., 2017).
The triad of symptoms of COVID-19, namely inflammation, impaired immune response, and coagulation disorders, can indirectly affect kidney homeostasis and cause kidney injury. Pulmonary impairment with hypoxemia and involvement of the heart also induce renal damage. The kidneys are a potential target for direct SARS-CoV-2 infection. In this scenario, besides the damage caused by viral infection and replication, the depletion of ACE2 can be an important mechanism leading to the imbalance of the RAS and KKS, contributing to a cascade of intrarenal cellular mechanisms involved in kidney injury. The impairment of ACE2, and consequently of the RAS and KKS, may persist after COVID-19, compromising long-term kidney function. Considering this, an extensive study of the modulation of intrarenal RAS, especially ACE2, in the context of COVID-19 may hold a key to establishing therapies to manage COVID-19-induced kidney injury.
AUTHOR CONTRIBUTIONS
NA and LG selected the relevant publications, wrote, and generated the figures for this manuscript. HT and JO contributed to the conception and revision of the article. DC was responsible for the design of this manuscript and critical assessment of the content based on her expertise. All authors contributed to the article and approved the submitted version.
ACKNOWLEDGMENTS
We would like to thank Editage (www.editage.com) for English language editing.
|
v3-fos-license
|
2017-06-18T19:47:33.872Z
|
2011-05-05T00:00:00.000
|
18731777
|
{
"extfieldsofstudy": [
"Medicine",
"Psychology"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://jneurodevdisorders.biomedcentral.com/track/pdf/10.1007/s11689-011-9082-7",
"pdf_hash": "7c0f43fa99c1faf1ac53098c2897923a1b8190c0",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42422",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"sha1": "dc8830aaa4e4764876b2c8085307b2e6989e6cc7",
"year": 2011
}
|
pes2o/s2orc
|
Is theory of mind related to social dysfunction and emotional problems in 22q11.2 deletion syndrome (velo-cardio-facial syndrome)?
Social dysfunction is intrinsically involved in severe psychiatric disorders such as depression and psychosis and linked with poor theory of mind. Children with 22q11.2 deletion syndrome (22q11DS, or velo-cardio-facial syndrome) have poor social competence and are also at a particularly high risk of developing mood (40%) and psychotic (up to 30%) disorders in adolescence and young adulthood. However, it is unknown if these problems are associated with theory of mind skills, including underlying social-cognitive and social-perceptual mechanisms. The present cross-sectional study included classic social-cognitive false-belief and mentalising tasks and social-perceptual face processing tasks. The performance of 50 children with 22q11DS was compared with 31 age-matched typically developing sibling controls. Key findings indicated that, while younger children with 22q11DS showed impaired acquisition of social-cognitive skills, older children with 22q11DS were not significantly impaired compared with sibling controls. However, children with 22q11DS were found to have social-perceptual deficits, as demonstrated by difficulties in matching faces on the basis of identity, emotion, facial speech and gaze compared with sibling controls. Furthermore, performance on the tasks was associated with age, language ability and parentally rated social competence and emotional problems. These results are discussed in relation to the importance of a better delineation of social competence in this population.
Keywords 22q11.2 deletion syndrome . Velo-cardio-facial syndrome . Social functioning . Theory of mind . Social-perception . Social-cognition
Background
Social competence is a multidimensional construct in which social, emotional, cognitive and behavioural skills are involved in a dynamic interplay with the environment. This, in turn, facilitates successful social adaptation, including the ability to initiate and maintain satisfactory relationships with, for example, peers (Iarocci et al. 2007). One way to explore the complex genetic and environmental interactions modulating social competence is to study individuals with a known genetic disorder who also have differences in social behaviour. The 22q11.2 deletion syndrome (22q11DS), also known as velo-cardio-facial syndrome, is the most common known microdeletion disorder and occurs in one in every 2,000 to 4,000 live births (Shprintzen 2005; Vorstman et al. 2006). The syndrome has a large phenotypic spectrum but is most commonly associated with developmental anomalies such as cardiac and palatal abnormalities, a syndrome-specific typical face, intellectual disabilities and specific social and cognitive impairments. Children with 22q11DS are frequently described as being shy and withdrawn, socially immature and as having difficulties with initiating and maintaining positive peer relationships (Golding-Kushner et al. 1985; Heineman-de Boer et al. 1999; Swillen et al. 1997, 1999; Shprintzen 2000). Children with the syndrome also present with a high rate of psychiatric disorders including autism spectrum disorder, attention-deficit disorder, separation anxiety and affective disorders (Vorstman et al. 2006; Fine et al. 2005; Gothelf et al. 2004; Swillen et al. 2000). The syndrome is further believed to be the third highest known risk factor for developing schizophrenia-like psychotic disorders in late adolescence or early adulthood (Murphy 2002). Many of the psychiatric disorders experienced by people with 22q11DS are associated with a lack of appropriate social competence, and it has been suggested that individual differences in social competence among people with 22q11DS may be associated with the subsequent development of psychiatric disorders such as anxiety and depression (Murphy 2005). Research over the last decade has shown that social functioning in psychiatric disorders such as depression and psychosis (Wang et al. 2008) is linked with deficits in theory of mind, that is, the ability to judge one's own and other people's mental states (Premack and Woodruff 1978).
The ability to accurately understand, reason and predict other people's behaviour requires the integration of complex skills. It has been argued that theory of mind skills are dependent on two dissociable components, namely a social-perceptual and a social-cognitive component (Tager-Flusberg and Sullivan 2000). The social-perceptual component includes the ability to recognise people and to interpret people's mental state from facial emotions or body expressions. In healthy individuals, the processing of human faces and, in particular, the ability to accurately recognise facial emotions is vital for social competence and is thought to depend on specialised neural systems including the occipitotemporal cortex (for a review, see Posamentier and Abdi 2003). More specifically, it has been suggested that while invariant aspects of faces are dependent on the lateral fusiform gyrus, more changeable aspects such as expression, eye gaze and lip movement are processed in the superior temporal sulcus and associated networks (Haxby et al. 2000). Few studies have specifically examined face processing in people with 22q11DS; however, recently, two functional magnetic resonance imaging (fMRI) studies identified atypical neural activations among children and adults with 22q11DS compared with healthy (Andersson et al. 2008) and learning-disabled comparison subjects (van Amelsvoort et al. 2006) when processing facial expressions. The differences in neural activations were argued to be face-specific and not due to a general visual perceptive deficit, since the neural activation pattern in people with 22q11DS was similar to that of controls when presented with non-face stimuli such as houses (Andersson et al. 2008). Likewise, it appears as if young adults with 22q11DS use atypical strategies while viewing photographs of faces displaying emotions (as measured using visual scanpath technology), and this is associated with poorer accuracy when labelling the displayed emotions (Campbell et al. 2010a). In particular, it was reported that people with 22q11DS spent more time looking at peripheral (off-the-face) rather than internal (eye, nose, mouth) features of the face. The young people with 22q11DS also spent less time looking at the eye region of the face and significantly more time looking at the mouth compared with controls. These studies indicate that people with 22q11DS do not process faces in a typical manner; it also appears as if these atypical processes are associated with poorer skills of encoding and interpreting facial information. Indeed, it has been suggested that short-term memory of unknown faces may be impaired in 22q11DS. In particular, when children with 22q11DS were asked to recognise faces that had been learned immediately before, they performed more poorly not only compared with controls but also compared with their performance on other memory tasks (e.g., visual-spatial; Campbell 2006; Lajiness-O'Neill et al. 2005). Furthermore, we recently reported that children with 22q11DS (a subgroup of those reported in the current paper) had a significantly reduced performance on face processing tests of gaze direction, identity and emotion recognition compared with intellectually, age- and gender-matched children with Williams syndrome (WS; Campbell et al. 2009). However, no significant group differences were identified on a facial speech-recognition task (Campbell et al. 2009). These findings have led to the assumption that face processing is atypical in people with 22q11DS.
However, since people with WS usually perform in the normal range and significantly better than mental-age-matched controls on such tasks (Tager-Flusberg et al. 2003), we still need to evaluate the face processing skills of people with 22q11DS compared with typically developing controls in order to determine how people with 22q11DS process faces.
A second key component of theory of mind is described by Tager-Flusberg and Sullivan (2000) as social-cognitive. The social-cognitive component underlies the ability to understand that other people have mental states that are independent from one's own, including independent thoughts, beliefs and intentions, and to make attributions about these (Castelli et al. 2002). Social-cognitive skills are crucial to understand and correctly predict people's actions and are often measured using classical false-belief tasks such as the Sally-Anne scenario (Baron-Cohen et al. 1985) and mentalising tasks such as the Strange Stories (Happé 1994; Jolliffe and Baron-Cohen 1999). The neurobiological substrate most strongly linked with the social-cognitive component is the medial frontal region of the brain (Siegal and Varley 2002). Recently, it was reported that a sample of children (Niklasson et al. 2002) and adults (Bassett et al. 2007; Chow et al. 2006) with 22q11DS had theory of mind deficits. These studies were valuable first steps. However, one study (Niklasson et al. 2002) did not include a control group, and the other study focussed solely on adults with 22q11DS and schizophrenia (Bassett et al. 2007; Chow et al. 2006). In contrast, we recently reported that children with 22q11DS, compared with matched children with WS, did not perform more poorly on false-belief tasks (Campbell et al. 2009). There were some group differences (with the 22q11DS group performing more poorly) on the Strange Stories task, although the result may have been confounded by low comprehension or restricted to stories requiring mentalising skills (Campbell et al. 2009). However, the study was limited by a small sample size and did not examine how children with 22q11DS performed compared with typically developing children.
The objectives of the current study were to examine the two proposed components of theory of mind, social-perception (face processing) and social-cognition (false beliefs), in a cohort of children with 22q11DS compared with typically developing sibling controls. Furthermore, we aimed to investigate whether performance on these tasks was related to everyday social competence and emotional problems, as rated by the parent(s). We tested the hypotheses that, compared with age- and gender-matched sibling controls: (1) children with 22q11DS have specific deficits in the face processing tasks most related to social competence, i.e., emotion recognition and gaze direction; (2) children with 22q11DS show a deficit on the false-belief and mentalising tasks; and (3) parent-rated social competence is correlated with performance on these theory of mind tests (face processing tasks, false-belief and mentalising tasks).
Participants
There were 50 participants in the 22q11DS group (22 males, 28 females; age range, 6 to 16.75 years (M=10.99, SD=2.90)). The presence of a 22q11.2 deletion was confirmed through the use of fluorescence in situ hybridisation. Thirty-one unaffected sibling controls, matched for age (18 males, 13 females; age range, 6 to 14.75 years (M=10.62, SD=2.59)), were also included in the study. The majority of participants were of white Caucasian descent (79 out of 81). None of the participants in this study presented with the clinical phenotype of 22q11DS without the large 3 Mb 22q11.2 deletion; as such, all participants were included in the analysis. Furthermore, those with a clinically detectable medical disorder known to affect brain structure (e.g. epilepsy or hypertension) or a history of head injury or stroke were excluded. We recruited children with 22q11DS and their typically developing siblings through the VCFS-UK support group. We chose to compare the 22q11DS cohort with sibling controls at a group level for several reasons. First, 22q11DS is a random de novo gene deletion; as such, attenuated forms of the condition are not present in siblings. Second, the sample was selected in order to control for socio-economic status as well as home environment and to facilitate recruitment.
According to independent sample t tests, there was no significant difference in age (t=0.59, df=79, p=0.56) or gender (χ²=1.23, df=79, p=0.22) between the 22q11DS and the control group (see Table 1). Prior to commencing the study, the participant's parents/guardians and, in cases where the participant was 16 years or older, the participant, gave written informed consent after the procedure was fully explained. A subgroup of the current participants has been included in a study of brain structure (N=39; Campbell et al. 2006) and in a comparison of children with 22q11DS to children with Williams syndrome (N=15; Campbell et al. 2009). The study was approved by the local ethics committee at the Institute of Psychiatry, King's College, London, UK.
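As an illustration of the group comparisons reported above, the following Python sketch shows how an independent-samples t test (for age) and a chi-square test (for gender) could be computed; the age vectors are simulated from the reported means and standard deviations, so the outputs are not the study's actual statistics.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated ages drawn from the reported group means/SDs (not the real data)
ages_22q11ds = rng.normal(10.99, 2.90, 50)
ages_controls = rng.normal(10.62, 2.59, 31)
t_stat, p_age = stats.ttest_ind(ages_22q11ds, ages_controls)

# Gender counts taken from the participant description: 22q11DS 22 M / 28 F,
# sibling controls 18 M / 13 F
chi2, p_gender, dof, _ = stats.chi2_contingency([[22, 28], [18, 13]])

print(f"Age: t = {t_stat:.2f}, p = {p_age:.2f}")
print(f"Gender: chi2 = {chi2:.2f}, df = {dof}, p = {p_gender:.2f}")
```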
Materials
Intellectual function was measured using the Wechsler Intelligence Scale for Children version III UK edition (Wechsler 1991), an intelligence test for children aged 6 to 16 years old, consisting of 13 subtests which can be used to generate the participant's Full-Scale IQ, Performance IQ and Verbal IQ. Furthermore, to investigate the influence of language ability on the experimental tasks, the British Picture Vocabulary Scale (BPVS; Dunn, Whetton, and Burley, 1982) and the Test for Reception of Grammar (TROG; Bishop 1983) were included. The BPVS is a test of receptive (hearing) language, in which the individual matches a word presented orally with one out of four pictures by pointing. The TROG measures understanding of grammatical contrasts. The tests are designed to remove cues (such as contextual cues) that would aid the understanding of the sentence, leaving the child with only the grammatical structure to aid them in interpreting the sentence accurately. Each sentence presented to the child has four options presented pictorially, and the items contain both grammatical and non-grammatical (lexical) distractors in order to determine whether the child has a specific problem with grammatical understanding or a poor performance due to other factors such as poor attention or memory.
Face processing
This was measured using the MRC Face Processing Skills Battery, a procedure that has previously been shown to be an effective tool for research with children with developmental disorders (Bruce et al. 2000). It consists of 14 tests which examine four different aspects of face processing: Identity, Emotion, Eye gaze and Facial Speech (Sound). In the current study, each test included images of children's faces (unless otherwise stated) on a uniform grey background, approximately 5.5×4 cm in size, printed on A4 paper. The tests require pointing responses and increase in difficulty across trials.
There are five Identity tests with 16 trials each, in which the participant is required to indicate which face out of two choices belongs to the same child as another face presented above them. For Idmatch.dis, the two options are dissimilar in appearance in terms of age, gender or general appearance whilst, for Idmatch.sim, the two options are similar in this regard; Idno.dis has the same faces as Idmatch.dis, but the hair and ears are removed; Idno.sim has the same faces as Idmatch.sim, but the hair and ears are removed; finally, Idmask has the same faces as Idno.sim but with grey circles painted over the eyes.
There are three Emotion tests consisting of 12 trials each, in which images of happy, sad, angry or surprised facial expressions are presented in equal proportions. Expair is used to determine whether the participant can identify an emotional expression given the verbal label. Pairs of faces are shown, and the participant indicates which face is 'happy', 'sad', 'angry' or 'surprised'. For Exmatch.child and Exmatch.adult, the participant indicates which of two presented faces 'feels the same way' as a facial image presented above. Exmatch.adult differs from Exmatch.child in that it uses adults' faces.
There are three tests of Gaze directionality. In Gazepair, two faces are presented and the task is to decide which face is looking at the participant, with the position of the head being either full face or 3/4 view. Gazematch.45 and Gazematch.10 require the participant to indicate which of two faces is looking in the same direction as a face presented above. Gazepair and Gazematch.45 use 12 trials with children's faces while Gazematch.10 uses trials with one adult male's face.
There are three Facial Speech tests. Facial speech refers to the lip movements associated with the expression of basic speech sounds required for verbal communication (e.g. "ee", "oo", etc.). In these tasks, the mouths on the images were saying "aa", "ee", "ff" or "oo" in equal proportions. In Soupair, 12 pairs of faces are presented, and the participant is required to indicate which face is saying either "aa", "ee", "ff" or "oo". The Soumatch.ff and Soumatch.44 tests require the participant to indicate which of two faces is making the same sound as an image presented above. Soumatch.ff has 12 trials with the faces shown in full-face view, while Soumatch.44 has 24 trials in which the top face is shown in 3/4 view and the bottom faces are shown in full-face view.
The Idmatch.sim, Expair, Exmatch.child, Gazepair, Gazematch.45, Soupair and Soumatch.ff tests were administered to the participant first. Participants were required to score greater than 80% accuracy on the tasks in this grouping. If they met this cutoff, they progressed to the second level of face processing tasks. The second level of tasks is considered more complex, and administration continued until the participant was no longer able to maintain the predefined (80%) level of accuracy.
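For clarity, the administration rule and the scoring convention used later in the analysis can be summarised in a short sketch. The Python snippet below is illustrative only and not the battery's official scoring code; the test names are taken from the descriptions above, and applying the 80% cutoff to each first-level test individually is an assumption.

```python
# Illustrative sketch of the two-level administration rule and the convention
# of scoring 0 for subtests a participant did not reach (assumptions noted above).
FIRST_LEVEL = ["Idmatch.sim", "Expair", "Exmatch.child", "Gazepair",
               "Gazematch.45", "Soupair", "Soumatch.ff"]

def passes_first_level(accuracy_by_test, cutoff=0.80):
    """True if the participant exceeds the cutoff on every first-level test."""
    return all(accuracy_by_test.get(test, 0.0) > cutoff for test in FIRST_LEVEL)

def aspect_total(scores_by_test, subtests):
    """Sum subtest scores for one aspect, counting unreached subtests as 0."""
    return sum(scores_by_test.get(test, 0) for test in subtests)
```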
False-belief and mentalising
The tasks were selected on the basis that they have good validity and fair to moderate reliability for children with varying intellectual abilities, including those with developmental disorders (Hughes et al. 2000). The Sally-Anne task is a first-order false-belief task: it examines the ability to understand that others can have beliefs that differ from one's own and requires the child to predict another person's behaviour accordingly (Baron-Cohen et al. 1985). Two dolls named "Sally" and "Anne" are shown to act out a false-belief scenario in which a marble is displaced whilst Sally is not looking. In order to pass this task, the participant has to correctly respond to the false-belief question "Where will Sally look for her marble?" The participant is also asked "Where is the marble really?" (reality question) and "Where was the marble in the beginning?" (memory question). The child could score a pass or fail on the false-belief question.
In the Smarties task, another first-order test, the participants themselves experience having a false belief (Gopnik and Astington 1988). Specifically, the participant is shown a Smarties box (which usually contains chocolates) and is asked what they think is in the box. The experimenter then opens the box and shows the participant that there is a pencil inside. The box is then re-closed with the pencil inside, and the participant is asked what they think their parent (who has not seen what is inside the box) would say is inside the box. The participant passes the task if they respond that their parent will say there are Smarties in the box but fails if they respond that the parent will think the box contained a pencil.
The Chocolate task is a second-order false-belief task, which examines the ability to think about what a person falsely believes another person believes (Perner and Wimmer 1985). This was attempted only if both the Sally-Anne and Smarties tasks had been passed. It involves reading the participant a false-belief story concerning two fictitious children named "Mary" and "John"; the task is supplemented by pictures also portraying the story. It is similar to the Smarties displacement scenario except that in this case the question refers to a second-order false belief by asking "Where does John think Mary will look…?". The task is scored as a pass or fail.
The Strange Stories task (Happé 1994; Jolliffe and Baron-Cohen 1999) was used to assess mentalising abilities when listening to stories, by examining the ability to interpret non-literal statements. Four stories involve everyday situations where the characters say things they do not literally mean (i.e. lies, false beliefs, double bluff or manipulation). These are contrasted with four physical stories which act as a control against comprehension deficits. They differ from the mentalising stories in that they do not involve mental states and are not social in nature, involving situations in which there is an unforeseen outcome with a mechanical-physical cause. The stories are presented in an alternating manner. To minimise memory requirements, printed forms of the stories are placed in front of the participant during reading and decision making. For mentalising stories, the questions are "Why did X say that?" and "Was it true what X said?" For each physical story, the participants are asked why something happened or why a particular action had taken place. For each story, two points are awarded for an accurate full description of the story, one point for a partial description, or no points if the participant refers to irrelevant information. The scores on each story type are then combined to produce single mentalising and physical stories scores for each participant.
Social competence and emotional well-being
The Strengths and Difficulties Questionnaire (SDQ; Goodman et al. 2000) was completed by a parent. The SDQ is a brief behavioural screening questionnaire for 3- to 16-year-olds, consisting of 25 items which form five clusters: Emotional Symptoms, Conduct Problems, Hyperactivity/Inattention, Peer Relationship Problems and Prosocial Behaviour. The occurrence of particular attributes is rated using a three-point Likert scale ("not true", "somewhat true" or "certainly true"). The reliability and validity of the SDQ make it a useful measure of adjustment and psychopathology in children and adolescents (Goodman et al. 2000). For the purpose of the current study, only the emotional problems and peer relationship problem scores were analysed.
Data analysis
Statistical analyses were conducted using the Statistical Package for the Social Sciences (SPSS) version 14. Independent samples t tests were used to compare the groups on the SDQ peer and emotional problems variables. For the face processing tasks, a total score for each aspect was computed by summing the scores for each participant across the subtests, and when a participant did not progress to a higher level, a score of 0 was given for that particular subtest. For the Strange Stories and the face processing tasks, group differences were examined separately by simple linear regression analyses. The effect of age on the participants' performance was examined using an interaction term based on the product of standardised age scores × group score (22q11DS=1; controls=−1), enabling both group membership and age to be examined within one variable. The beta coefficients are reported in the results section. A paired-samples t test was used to examine within-group differences on the Strange Stories task. For the Sally-Anne task, Smarties task and Chocolate task, between-group comparisons were conducted using Fisher's exact tests due to small participant numbers in each category. Consistent with previous findings (Campbell et al. 2010b), IQ score (WISC-III; Wechsler 1991) was omitted as a covariate in the repeated-measures design because the IQ difference between the groups was greater than 2 SD and IQ was therefore considered a group-defining characteristic.
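To make the analysis strategy concrete, the sketch below shows how these comparisons could be reproduced in Python with SciPy and statsmodels. It is not the authors' original SPSS syntax: the toy data frame, the column names (group, age, sdq_peer, identity_total) and the pass/fail counts in the Fisher's exact example are placeholders.

```python
# Hedged sketch of the group comparisons described above (toy placeholder data,
# not the original SPSS analysis).
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 81
df = pd.DataFrame({
    "group": ["22q11DS"] * 50 + ["control"] * 31,
    "age": rng.uniform(6, 17, n),
    "sdq_peer": rng.normal(3, 2, n),
    "identity_total": rng.normal(50, 10, n),
})

# Independent-samples t test (e.g. SDQ peer problems)
t, p = stats.ttest_ind(df.loc[df.group == "22q11DS", "sdq_peer"],
                       df.loc[df.group == "control", "sdq_peer"])

# Simple linear regression with group coded +1/-1 and an age x group
# interaction term built from standardised age
df["group_code"] = df["group"].map({"22q11DS": 1, "control": -1})
df["age_z"] = (df["age"] - df["age"].mean()) / df["age"].std()
df["age_x_group"] = df["age_z"] * df["group_code"]
model = smf.ols("identity_total ~ group_code + age_z + age_x_group", data=df).fit()
print(model.params, model.pvalues)   # beta coefficients and p values

# Fisher's exact test on pass/fail counts (counts are illustrative only)
odds_ratio, p_fisher = stats.fisher_exact([[45, 5], [31, 0]])
```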
Finally, an aggregate standardised score for each of the two theory of mind components (social-perceptual = face processing, social-cognitive = false belief and mentalising) was computed. This was used to examine within-group correlations (using Pearson's r correlations) with the measures of social competence (as measured by the mean standardised score for peer problems) as well as chronological age, full-scale IQ and standardised scores from the BPVS, the WISC-III subtest of digit span (to measure working memory) and the TROG.
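A matching sketch for the composites: the aggregation into social-perceptual and social-cognitive z-score composites and a within-group Pearson correlation might look roughly as follows. The arrays are random placeholders, and which subtest totals enter each composite is inferred from the description above.

```python
# Hedged sketch of the aggregate standardised scores and Pearson correlations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 50                                           # e.g. the 22q11DS subgroup
identity, emotion, gaze, speech = rng.normal(size=(4, n))   # placeholder totals
false_belief, mentalising = rng.normal(size=(2, n))
peer_problems = rng.normal(size=n)

def z(x):
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std(ddof=1)

# Social-perceptual composite: mean of standardised face-processing totals
social_perceptual = np.mean([z(identity), z(emotion), z(gaze), z(speech)], axis=0)
# Social-cognitive composite: mean of standardised false-belief/mentalising scores
social_cognitive = np.mean([z(false_belief), z(mentalising)], axis=0)

# Within-group Pearson correlation with the peer problems measure
r, p = stats.pearsonr(social_perceptual, z(peer_problems))
```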
Face processing
The data are shown in Table 2 and Fig. 1. Simple linear regression analyses revealed a significant effect of Group on the face processing tasks; Identity (beta coefficient=−0.725, p<0.0005), Emotion (beta coefficient=−0.498, p<0.0005), Gaze (beta coefficient=−0.586, p<0.0005), Facial speech (beta coefficient=−0.360, p<0.0005) with the sibling control group performing significantly better than the 22q11DS group. There was also a significant effect of age on Identity (beta coefficient=0.235, p<0.005), Gaze (beta coefficient=0.226, p<0.02) and Facial speech (beta coefficient=0.242, p<0.03), with the older participants performing better than younger participants, but no age × group interactions were identified.
Further repeated-measures analyses were conducted with Group as the between-subjects factor and each of the face processing tasks entered as the within-subjects factor. Analysis revealed a significant group×task interaction (F(1,78)=4.68, p<0.03). Post hoc within-group paired-sample t tests found that both the 22q11DS and the control group had significantly more difficulties with the Gaze task compared with the other tasks (p <0.02). Meanwhile, the control group had significantly higher scores on the Identity task compared with the other tasks (p<0.0005) whilst no such pattern was identified in the 22q11DS group (p>0.05).
With the exception of Soumatch.ff (4%), all control participants advanced to the second level after achieving greater than 80% accuracy on the seven tasks used in the first level of testing. In the 22q11DS group, the proportions of participants who failed to complete the first level of testing were 10% Idmatch.sim, 2% Expair, 16% Exmatch.child, 26% Gazepair, 42% Gazematch.45, 4% Soupair and 14% Soumatch.ff.
False-belief and mentalising
The entire control group passed the Sally-Anne Task compared with 90% of the 22q11DS group (n=45), although this difference was not statistically significant (Fisher's exact test, p=0.15). All five 22q11DS participants who failed the Sally-Anne task were in the 6-9-year age group. A higher percentage of the control participants (pass=100%, n=31) passed the Smarties Task compared with the 22q11DS participants (pass=95.8%, n=46). However, there was no significant between-group difference in accuracy on the Smarties task (Fisher's exact test, p=0.52).
As mentioned previously, only participants who passed both the Sally-Anne and the Smarties task participated in the Chocolate Task (100% of the control group and 90% of the 22q11DS group). Significantly more control participants (100%, n=31) passed the Chocolate Task compared with the 22q11DS participants (82.2%, n=37) (Fisher's exact test, p=0.02). The majority of participants who failed the task were between 6 and 9 years of age (n=7 out of 8).
The 22q11DS group scored lower than controls on both the mentalising and the physical Strange Stories (see Table 2). However, linear regression analyses on the difference scores revealed a significant effect of Group on story type (beta coefficient=0.437, p<0.0005): the 22q11DS group performed better on the physical stories than on the mentalising stories, whilst the control group performed at a comparable level on both. This effect was not moderated by age (p=0.02).
Furthermore, to explore the potentially confounding influences of using siblings as a control group, we recalculated the analyses using a repeated-measures approach linking each 22q11DS participant with their related sibling control. Overall, the pattern of significant and non-significant findings did not differ under this approach, with the exception of one face processing task (facial speech), which was no longer significantly different between the 22q11DS and sibling groups when using this type of analysis.
Discussion
The current study is the first to investigate social-cognitive and social-perceptual mechanisms and their relationship to social competence and emotional problems among children with 22q11DS and typically developing sibling controls.
We predicted that the 22q11DS participants would have specific deficits in the social-perceptual tasks of face processing. In particular, we predicted that children with 22q11DS would have problems with tasks of high salience to social functioning, such as emotion and gaze identification. We did indeed identify deficits in these aspects of face processing compared with sibling controls, although we also identified deficits in identity recognition and facial speech, indicating a general deficit in face processing. The greatest deficit, however, was identified in the Gaze direction task, and while this may have been at least partly due to an increased level of difficulty, we do not believe that this fully explains the finding. An independent study of visual scan path strategies in young people with 22q11DS indicates that people with the syndrome spend less time looking at the eyes compared with the mouth when judging facial emotions (Campbell et al. 2010a). This may be indicative of inefficient facial perceptual strategies, which might have influenced performance on the gaze identification tasks carried out in the current study (see Fig. 1 for task performance on the face processing battery). Our findings are in agreement with current neurobiological knowledge of specific brain anomalies among people with 22q11DS. In particular, it has been reported that social-perceptual ability is dependent on the occipito-temporal cortex, including the lateral fusiform gyrus, and also the superior temporal sulcus and associated networks (Haxby et al. 2000). It has been well established that the occipital and temporal regions of the brain are affected by a deletion at chromosome 22q11.2 (Campbell 2006; Henry et al. 2002). Findings from magnetic resonance imaging (MRI) studies support the idea that social-perceptual impairments in the 22q11DS group may be due to atypical neural structures and brain functioning, and have identified less insular and frontal cortical activation and relatively more activation in bilateral occipital cortex when viewing emotional faces (Andersson et al. 2008; van Amelsvoort et al. 2006). In addition, atypical scanning patterns of photographs of emotional human faces have been revealed (Campbell et al. 2010a). However, it is still unknown what strategies children with 22q11DS use to judge the identity of a person or when determining eye gaze direction or facial speech.
It is also unknown whether the observed problems in facial perception are due to a general visual perceptual impairment or a face-specific social-perceptual impairment.
Although not directly comparable, we have previously revealed impairments in some aspects of object perception, such as identifying objects from unusual viewpoints, in children (Campbell 2006) and adults (Henry et al. 2002) with 22q11DS, indicating that there may be generalised problems with visual perception in this population. However, the fMRI data reported by Andersson and colleagues did not reveal any group differences in neural activation when the groups were presented with objects such as houses, which the authors interpreted as evidence for a face-specific neural anomaly (Andersson et al. 2008). It is also unknown whether the origins of the observed social-perceptual impairments in face processing lie in early face processing problems, which may have resulted in worse social competence and hence fewer social interactions (and less practice), or whether problems in another related area could have resulted in the observed deficits. Unfortunately, no study of infant social-perception/cognition has yet been undertaken in 22q11DS, so the developmental trajectory of these skills is not clear. One could also argue that the group differences are simply due to the lower intellectual functioning of the clinical group. However, we have previously compared a subgroup of the current sample with a group of age-, gender- and intellectually-matched children with Williams syndrome and identified a specific impairment in the 22q11DS group when performing the identity, emotion and gaze tasks, while no significant group differences were identified in the facial speech task or the object perceptual tasks. Hence, we do not believe that the failure on these tasks is simply due to lower intellectual functioning in these children but rather reflects a combination of lower intellectual functioning and syndrome-specific differences.
However, it does seem as if language skills (in particular grammatical skills) and working memory ability are important to take into consideration when evaluating social-perceptual skills in this group of people. In the present context, consideration of semantic memory skills may elucidate the association reported between measures of face processing and grammatical skills (measured by the TROG), for one who is proficient at storing and retrieving verbal concepts is liable to be proficient on tasks of verbal ability, as both require efficient access to words/concepts. Whilst the processes involved in identifying an individual are considered distinct from those required to perceive emotion and speech-related actions of the mouth (Bruce and Young 1986), these processes have in common a reliance on semantic memory. As such, despite our best efforts to minimise the influence of both language and working memory in the design of the face processing tasks by using simple forced-choice matching tasks without requiring verbal responses, the findings from this study would suggest that face processing, by nature, requires these abilities.
For future studies, it will be important to compare the social-perceptual skills of people with 22q11DS with those of people with other developmental disorders characterised by lower intellectual functioning, and also to investigate whether the observed face processing deficits are specific or whether similar deficits exist in other visual perceptual tasks. In addition, it would be valuable to use more naturalistic stimuli and tasks in order to determine the exact nature of the face processing deficits in 22q11DS. To conclude, considering the importance of social-perception in communication and social competence, the face processing deficits observed in the 22q11DS group need further investigation.
We also predicted that the 22q11DS group would perform more poorly on the social-cognitive tasks compared with the sibling control group. Our data suggest that social-cognitive deficits only occurred among the younger 22q11DS participants and only for the more advanced second-order false-belief task and Strange Stories, signifying that the acquisition of more complex social-cognitive false-belief skills is suppressed in 22q11DS, but that this reflects a delay rather than a deficit. The influence of age on social-cognitive skills in children with 22q11DS may be related to a delay in the maturation of the frontal cortex among children with 22q11DS (Jablensky 2000). Van Amelsvoort and colleagues suggested that volumetric differences in the frontal lobes normalise somewhat in adults with 22q11DS (van Amelsvoort et al. 2001). Other factors such as gender and COMT genotype have also been found to moderate frontal lobe morphology in 22q11DS and could potentially have an effect on our findings (Sands and Harrow 1995). However, our study did not identify any significant gender differences in the 22q11DS group. Future prospective studies will be designed to test this hypothesis further. In particular, it is important to include tasks that are largely independent of language to exclude the possibility that task performance is simply attributable to language impairments. The tasks included in the current study involved detailed narratives, and both these and the test questions were grammatically complex (with the exception of the Smarties task). Indeed, our data indicate that performance on the false-belief tasks was related to grammatical competence as measured by the TROG. In addition, the pass-or-fail nature of the first- and second-order false-belief tasks produced ceiling effects amongst the older participants and the sibling controls, limiting our ability to detect the range of false-belief skills present in the two groups and possibly concealing significant between-group differences. In order to take this into account, the Strange Stories were included in the study; however, due to the complex narratives, the performance of the participants with 22q11DS may not truly reflect their mentalising ability. It will also be important to control for other cognitive processes such as inhibition and working memory.
Finally, our data suggest that social competence in the 22q11DS group is strongly associated with emotional problems, reflecting anxious and depressive traits. It has been reported that poor premorbid social functioning is related to worse outcomes among people with both depression and psychotic disorders (Jablensky 2000; Sands and Harrow 1995) and is associated with poor theory of mind skills (Wang et al. 2008). Hence, children with 22q11DS with poor social competence due to underlying problems with, for example, the social-perceptual components of theory of mind may be at particularly high risk of later psychopathology. This highlights the need to properly assess the mechanisms underlying social competence among children with 22q11DS in order to design evidence-based interventions aimed at increasing resilience in this group of children at high risk of developing mood and psychotic disorders. Taken together, a lack of social competence and associated emotional problems are likely to have a very significant negative impact on the quality of life and long-term functioning of young people with 22q11DS (Kiley-Brabeck and Sobin 2006).
To conclude, the current study provides an important first step in identifying the social-perceptual and social-cognitive mechanisms associated with social competence in 22q11DS. We found that theory of mind skills are related to parent-rated social competence and emotional problems among children and adolescents with 22q11DS. Whilst people with 22q11DS may have general impairments in face processing, our data suggest that young children with 22q11DS may have a developmental delay in acquiring false-belief and mentalising skills, although this may be related to lower intellectual functioning and/or language ability. Finally, studies of the mechanisms underlying social dysfunction among children with 22q11DS will be useful in order to produce targeted management and remediation of social skills in 22q11DS.
Macroautophagy in Endogenous Processing of Self- and Pathogen-Derived Antigens for MHC Class II Presentation
Although autophagy is a process that has been studied for several years, its link with antigen presentation and T cell immunity has only recently emerged. Autophagy, which means "self-eating," is important to maintain cell homeostasis and refers to a collection of mechanisms that deliver intracellular material for degradation into lysosomes. Among them, the macroautophagy pathway has many implications in different biological processes, including innate and adaptive immunity. In particular, macroautophagy can provide a substantial source of intracellular antigens for loading onto MHC class II molecules using the alternative MHC class II pathway. Through autophagosomes, endogenous self-antigens as well as antigens derived from intracellular pathogens can be delivered to the MHC class II compartment and presented to CD4+ T cells. The pathway will, therefore, impact both peripheral T cell tolerance and the pathogen-specific immune response. This review will describe the contribution of autophagy to intracellular presentation of endogenous self- or pathogen-derived antigens via MHC class II and its consequences on CD4+ T cell responses.
Today, autophagy [from Greek: auto (self), phagos (to eat), meaning "self-eating"] refers to the breakdown mechanism that enables cells to recycle cytoplasmic constituents by degrading defective organelles and long-lived proteins in lysosomes. Initially considered to be an important alternative energy source in response to starvation, autophagy has now been implicated in multiple biological processes, including development, aging, and regeneration (5). Aberrant regulation of autophagy induces cancer, neurodegenerative diseases, and many other disorders (6). Autophagy also has diverse functions in innate immunity: pathogen recognition, elimination of microorganisms, control of inflammation, and secretion of immune mediators (7). In addition, autophagy contributes to adaptive immunity through diverse mechanisms: endogenous antigen presentation via MHC class II molecules (8,9), control of B and T cell function, and control of thymic T cell selection (7).
Currently, three different pathways of autophagy have been described: macroautophagy, microautophagy, and chaperone-mediated autophagy (CMA) (10). They differ mainly in the molecular route by which the products (cargo) are delivered into lysosomes.
Substrates of CMA carry a KFERQ-like signal peptide and are recognized by the chaperone HSC70 (heat shock cognate 70), forming a substrate/chaperone complex. This complex is imported into the lysosome via LAMP2a (lysosome-associated membrane protein 2a) transporter, assisted by another HSC70 member in the lysosomal lumen. This is a unique selective pathway for the delivery of proteins into lysosomes (11,12) (Figure 1).
During microautophagy, cytoplasmic components directly gain access to the lysosome lumen via invagination and budding of its membrane. The cargo is enclosed through the formation of autophagic bodies, which are then degraded by lysosomal hydrolysis (13) (Figure 1).
Figure 1 | Pathways of autophagy. Autophagy can deliver cytosolic components to lysosomes for degradation via three different pathways. In chaperone-mediated autophagy (CMA), proteins having a KFERQ-like motif are translocated into the lysosome via the LAMP-2A transporter, with the help of Hsp70 chaperones. Microautophagy involves the sequestration of substrates via the invagination of the lysosomal membrane, while in macroautophagy, the substrates are engulfed in a double membrane vesicle, called autophagosome, which subsequently fuses with the lysosome to deliver its content for degradation.
Macroautophagy is the best-characterized route for lysosomal degradation of cytoplasmic constituents. During this process, cytoplasmic contents or organelles are delivered to lysosomes for degradation. The hallmark of macroautophagy is the de novo formation of a cytosolic double membrane vesicle. Different membrane sources can contribute to the formation of the autophagosomal membrane, including the plasma membrane, the endoplasmic reticulum (ER), and the outer mitochondrial membrane (14). The autophagosome will then fuse with late endosomes and lysosomes to deliver its contents for enzymatic degradation. The resulting macromolecules are recycled back into the cytosol, where they can be reused for anabolic or catabolic reactions (15,16) (Figure 1).
Autophagosome formation is a complex multi-step event that is controlled by different autophagy-related genes (ATGs). At least 30 ATGs contribute to autophagy in yeast and are highly conserved among eukaryotes (17). Initial nucleation and assembly of the phagophore membrane (isolation membrane in mammals) require the action of the class III phosphatidylinositol 3-kinase (PtdIns3K) complex, which recruits multiple Atg proteins. In this process, the ubiquitin-like conjugation system Atg12-Atg5-Atg16 and Atg8 (known as LC3 in mammals) regulates autophagosome membrane elongation (18). Upon completion, all Atgs from the outer membrane are recycled. Importantly, Atg8, which is incorporated into both the inner and outer membrane of the forming autophagosome, remains associated in the inner membrane after fusion with lysosomes. Given its unique association with autophagosomes and autolysosomes, Atg8 is widely used as a marker of autophagosome formation and autophagy induction (19).
Autophagy has been described to substantially impact several aspects of innate and adaptive immunity (20). Autophagy has an intrinsic role in different cell types of the adaptive immune system. Autophagy abrogation in B cells (21), T cells (22-24), and NKT cells (25) results in decreased differentiation, effector function, and maturation. In parallel, Atg16-deficient dendritic cells (DCs) exhibit a more activated phenotype, including overexpression of co-stimulatory molecules and increased NF-kappaB activation (26). In addition to this cell-intrinsic role, autophagy can impact different aspects of the adaptive immune response through its direct or indirect role in antigen presentation. Indeed, autophagy can, for example, indirectly contribute to antigen presentation through its implication in the activation of various pattern-recognition receptors (PRRs) and damage-associated molecular patterns (DAMPs). In parallel, the pathway can control the secretion of different cytokines, mainly IL-1 beta, and therefore contribute to the amplification or skewing of the T cell response. The direct role of autophagy in antigen presentation has been described either in the donor cells or in the professional antigen-presenting cells (APCs).
This review will focus on the direct role of autophagy in APCs and its implication in delivering endogenous self- and pathogen-derived ligands for presentation via major histocompatibility complex (MHC) class II molecules. We will not discuss the implication of unusual pathways of autophagy in antigen processing; indeed, a mini-review in the same Frontiers topic specifically focuses on that point (27). Rather, we will discuss the implication of macroautophagy in MHC II-mediated antigen presentation of intracellular proteins and its effects on peripheral CD4+ T cell responses in inflammatory and infectious diseases.
Autophagy in the Immune System: Endogenous MHC Class II Antigen Processing and Presentation
MHC Class I and Class II Classical Antigen Processing Pathways
Antigen presentation refers to pathways involved in the effective delivery of antigens to MHC molecules. Relatively small peptides of 8-10 or 15-20 amino acids are generated by proteolytic cleavage of protein substrates and displayed in the peptide-binding groove of surface-expressed MHC class I or class II molecules, respectively. T cells, with their specific T-cell receptor (TCR), scan for the presence of cognate peptide-MHC complexes displayed at the cell surface of APCs. Recognition of antigenic fragments by CD4+ or CD8+ T cells is crucial to T cell activation and effector function (28).
Classically, MHC class I bound peptides are generated in the cytosol from various intracellular sources, such as cytosolic or nuclear self-proteins, proteins from intracellular pathogens or endogenous tumor antigens (29). Ubiquitinylation often targets these antigens for proteasomal degradation (30). Proteasomal products are then imported into the lumen of the ER by the transporter associated with antigen processing (TAP) (31), where they are loaded on MHC class I heterodimers. Within the ER, peptide binding is required for the correct folding of MHC class I molecules and its release from the ER. Stable peptide-MHCI complexes are exported to the cell surface via the golgi apparatus for presentation to CD8 + T cells (Figure 2).
In contrast, MHC class II bound epitopes classically originate from extracellular antigens (derived from foreign-or self-origin) phagocytosed by APCs and degraded by lysosomal proteolysis. These antigenic fragments are loaded onto MHC class II molecules in the so-called MHC class II compartments (MIICs) or late endosomes. MHC class II molecules are synthesized in the ER and associate with a chaperone known as the invariant chain (Ii; also known as CD74). Ii prevents premature peptide loading onto MHC class II molecules in the ER and guides newly assembled MHC class II molecules to late MIIC. Ii is then degraded in MIIC by lysosomal hydrolysis leaving the class II-associated invariant chain peptide (CLIP) in the peptide-binding groove. CLIP is replaced by high-affinity peptides with the help of the non-classical MHC class II molecule HLA-DM. Following peptide loading, peptide-MHC class II complexes are delivered to the cell surface for CD4 + T cell presentation (32) (Figure 2).
According to this classical view, MHC class I and class II molecules are specialized in presenting peptides derived from different origins. Through this division of labor, cytotoxic CD8 + or CD4 + helper T cells monitor the intracellular and the extracellular niches, respectively, for the presence of pathogens or for the maintenance of peripheral tolerance. However, this segregated origin of peptides can be bypassed by unconventional pathways (33). For instance, "cross-presentation" is a pathway allowing DCs to present extracellular antigens through MHC class I molecules (34,35). Consequently, cross-presentation is an important pathway for the initiation of anti-viral cytotoxic CD8 + T cell responses and for the maintenance of CD8 + T cell tolerance (36,37). Similarly, peptides of intracellular origin can be loaded onto MHC class II molecules.
Indeed, sequencing of peptides eluted from MHC class II molecules revealed that 20-30% of natural MHC class II ligands originate from intracellular cytosolic and nuclear proteins (38-40). These ligands can be generated either after cleavage by the proteasomal machinery (41) or via a group of processes, including CMA (reviewed elsewhere in this topic) and macroautophagy (Figure 2). In agreement, characterization of the MHC class II peptide repertoire expressed at the cell surface either under steady-state or after starvation-induced autophagy suggests that autophagy might influence CD4+ T cell-mediated responses to intracellular antigenic sources (42).
Endogenous Processing of Intracellular Antigens via Autophagy for MHC Class II Presentation to CD4+ T Cells: Model Antigens
Pharmacological inhibitors provided the first evidence of the involvement of autophagy in endogenous MHC class II presentation to CD4+ T cells. Stockinger's group compared the antigen presentation capacity of different cells transfected with the C5 protein (fifth component of mouse complement). They found that B cells and fibroblasts were able to present epitopes derived from the intracellular C5 protein to CD4+ T cells. Interestingly, in the presence of a non-specific inhibitor of autophagy, 3-MA (3-methyladenine), known to inactivate the class III PI3 kinase, MHC class II presentation of endogenous C5 was abrogated (43).
Subsequent studies took advantage of the same inhibitory mechanism to show that autophagy was involved in the presentation of epitopes derived from cytosolic antigens. Transfection of a model antigen, the neomycin phosphotransferase II (NeoR), into two different cell lines showed that MHC class II-dependent presentation of NeoR was abrogated by 3-MA inhibition and was therefore likely to be mediated via autophagy. In parallel, antigen degradation was inhibited upon 3-MA treatment (44). In another study, using DCs transfected with in vitro-transcribed RNA coding for a tumor-associated cytoplasmic antigen (MUC1), the authors demonstrated that the presentation of MUC1 on MHC class II molecules required lysosomal/endosomal processing (45). Furthermore, antigen presentation of MUC1 to CD4+ T cells was abrogated in the presence of 3-MA, suggesting an involvement of autophagy in MUC1 processing and delivery to the class II compartment.
More recently, autophagy has been shown to play a role in the presentation of citrullinated peptides from hen-egg-white lysozyme (HEL) to CD4+ T cells (46). This model antigen was overexpressed at the membrane of APCs, resulting in strong presentation of an immunodominant CD4 epitope (47). Blocking autophagy in DCs, using either 3-MA treatment or Atg5 siRNA silencing, specifically inhibited the presentation of citrullinated but not native HEL peptides. In parallel, presentation of citrullinated HEL peptides by B cells required the engagement of the B cell receptor, which was also inhibited by 3-MA treatment (46). As the presentation of citrullinated proteins plays a key role in the pathogenesis of rheumatoid arthritis (48), such findings highlight the potential contribution of autophagy to the pathogenesis of a common autoimmune disease. Nevertheless, the physiological relevance of this finding needs to be extended to more relevant autoantigens in rheumatoid arthritis.
Figure 2 | MHC class I and class II processing pathways and autophagy. Classically, MHC class I bound antigens originate from intracellular proteins through proteasomal proteolysis and are transferred to the cell surface, where the resulting peptides are presented to CD8+ T cells. On the other hand, MHC class II products originate from extracellular antigens, which are endocytosed and delivered to MHC class II containing compartments (MIIC), where they meet newly generated MHC class II molecules. Alternatively, autophagy can deliver cytosolic antigens for MHC class II presentation, via the fusion of autophagosomes and MIIC, for the presentation of antigens to CD4+ T cells.
The limitation of these studies is that they relied on artificial overexpression of model antigens, and therefore, they can only suggest an implication of autophagy pathway in the endogenous MHC class II antigen processing of physiologically expressed proteins. In addition, another major drawback, which may impede a proper assessment of how autophagy influences physiological CD4 + T cell responses, is the use of the pharmacological inhibitor 3-MA, which not only blocks autophagy but also affects additional biological processes (19).
The generation of labeled markers for autophagosome formation provided a better demonstration of how autophagy is involved in MHC class II presentation. To further support a broader relevance of autophagy under normal basal conditions and not only under starvation, Schmid et al. showed that low constitutive autophagosome formation occurs in a variety of human APCs, such as DCs, macrophages, and B cells (9). In this study, autophagosome formation was monitored by the accumulation of Atg8/LC3 into vesicles upon treatment with chloroquine, a blocking agent of lysosomal proteolysis. Since LC3 (the human ortholog of yeast ATG8) is specifically incorporated into the autophagosomal membrane upon its formation, LC3 turnover can therefore be used to measure autophagic activity. Autophagosomes were shown to fuse with MIIC, as evidenced by immunofluorescence co-localization of LC3-GFP, MHC class II, and HLA-DM, in both DCs and human epithelial cell lines. Importantly, silencing of Atg12 inhibited autophagosome formation and fusion with MIIC (9). In addition, a proof-of-concept experiment demonstrated that autophagosomes could efficiently deliver antigens to MIIC. The influenza viral protein MP1 was expressed in a fusion construct by coupling Atg8/LC3 to the C-terminus of MP1. This strategy efficiently targeted MP1 to autophagosomes and significantly enhanced its antigen presentation to specific CD4+ T cell clones (9).
Endogenous MHC Class II Processing of Pathogen-Derived Antigens via Autophagy
The main contribution of autophagy to antigen processing of endogenous proteins and their delivery to MIIC has been described in the context of viral or bacterial infection. Indeed, autophagy is required for efficient presentation of endogenous pathogen-derived antigens on MHC class II molecules to enhance specific CD4 + T cell activation.
The first viral antigen shown to be delivered to MIIC by autophagy was the Epstein-Barr virus (EBV) nuclear antigen 1 (EBNA-1) (8). In this study, the authors used EBV-transformed lymphoblastoid cells (LCLs) and EBNA-1-specific CD4+ T cell clones. Immunofluorescence analysis of LCLs showed that upon inhibition of lysosomal acidification, and therefore autophagosome maturation, EBNA-1 accumulated in cytoplasmic vesicles expressing the lysosomal marker LAMP1. In parallel, EBNA-1 was visualized in autophagosomes by electron microscopy. Furthermore, blocking autophagy, by treatment with 3-MA or by siRNA-mediated silencing of Atg12, resulted in reduced MHC class II-restricted CD4+ T cell recognition of EBNA-1 (8). In line with this pioneering study, Leung et al. have shown that autophagy can play a role in the processing of specific CD4+ T cell epitopes of the EBNA-1 antigen along with other endogenous pathways (49). Interestingly, the location of native EBNA-1 within the nucleus leads to less processing and presentation on MIIC, due to the absence of autophagy within the nucleus. Indeed, by mutating the nuclear localization signal of EBNA-1, the range of CD4+ T cell epitopes processed through autophagy became broader since the protein was more accessible for cytoplasmic autophagic degradation (49).
Another pathogen-derived antigen processed through autophagy is the immunodominant Ag85B antigen from Mycobacterium tuberculosis (Mtb) (50). Mtb, amongst other pathogens, can survive in phagosomes as part of an evasion mechanism to avoid degradation. In this context, stimulation of phagosomal maturation and lysosomal degradation via the induction of autophagy enhances Mtb clearance (51,52) and may be required for optimal immune responses against Mtb. Indeed, in vivo, activation of autophagy in DCs significantly increased the presentation of Ag85B to specific CD4+ T cells. Mice vaccinated with Mtb-infected and rapamycin-treated DCs exhibit a stronger specific CD4+ T cell response after Mtb challenge. In parallel, blocking autophagy in DCs prior to vaccination leads to a reduced Mtb-specific CD4+ T cell response (50).
A further in vivo study focusing on the role of autophagy during respiratory syncytial virus (RSV) infection in mice has also shown that autophagy plays a role in anti-viral CD4+ T cell responses. Mice with a defect in Beclin-1 (Beclin-1 +/−), resulting in reduced autophagosome formation, exhibit exacerbated lung inflammation upon RSV infection, with increased Th2 responses and decreased IL-17 and IFN-γ responses. Furthermore, in vitro analysis of pulmonary DCs from Beclin-1 +/− mice showed a reduction in MHC class II levels and co-stimulatory molecule expression. Finally, adoptive transfer of RSV-infected Beclin-1 +/− DCs into wild-type mice prior to virus challenge confirmed that the absence of autophagy within DCs leads to reduced Th1 responses and increased lung pathology (53). Recently, the same authors further dissected the contribution of autophagy to initiating and maintaining aberrant Th17 responses during RSV infection. Using mice deficient in the autophagy-associated protein Map1-LC3b (LC3b −/−), they observed increased Th17 cells in the lungs upon infection. In addition, the airway epithelium appeared to be the primary source of IL-1β during RSV infection, whereas blockade of IL-1 receptor signaling in infected LC3b −/− mice abolished IL-17-dependent lung pathology (54). Such findings highlight the role of autophagy in antigen presentation of RSV and how it can shape the adaptive anti-viral immune response.
Autophagy is also involved in antigen presentation of proteins derived from extracellular pathogens, such as the bacterium Yersinia. Through the type III secretion system, Yersinia utilizes carrier proteins, the Yersinia outer proteins (Yop) for the delivery of bacterial proteins into the cytosol of host cells. Interestingly by constructing a fusion antigen with the cytoplasmic translocated YopE protein, Russman et al. could demonstrate that chimeric fusion proteins are processed by autophagy, in macrophages, and presented via MHC class II to induce CD4 + T cell activation (55). Nevertheless, the relevance of this mechanism for Yersinia epitopes was not demonstrated.
Together, these studies suggest that autophagy induction in DCs and macrophages can enhance antigen presentation of MHC class II epitopes from intracellular pathogens in order to induce efficient CD4 + T cell responses. However, this scenario might not happen in all instances. Indeed, despite the fact that influenza A virus manipulates autophagy, no significant contribution of this pathway to the anti-viral CD4 + T cell response was demonstrated (56).
In parallel, many bacteria and viruses have developed escape mechanisms to inhibit autophagy, resulting in increased intracellular pathogen load (57-59). Whether this will negatively influence pathogen-specific CD4+ T cell responses remains to be further investigated.
Autophagy in Positive and Negative Selection of T Cell Repertoire
Autophagy plays a major role in thymic selection of a diverse T cell repertoire, and therefore, has important consequences for central tolerance induction (60).
During T cell development, T cell precursors undergo positive selection in the thymic cortex and negative selection in the thymic medulla. Positive selection allows the establishment of a functional and diverse T cell repertoire, whereas negative selection eliminates potentially auto-reactive T cells, in order to establish central tolerance toward self-antigens (61). Central tolerance is based on the presentation of self-peptides at the surface of thymic APCs, especially in thymic epithelial cells (TECs) and thymic DCs, either via MHC class I or MHC class II molecules for CD8 + or CD4 + T cell development, respectively.
The generation of a functional and self-tolerant CD4 + T-cell repertoire relies on the availability of a full range of self-peptides displayed by thymic APCs. The peptides presented should cover most of, if not all tissue antigens, which T cells might encounter in the periphery. Thymic APCs utilize different mechanisms in order to present a broad range of self-peptides.
Significant progress has been made in clarifying how TECs, which have low endocytic activity, can obtain self-peptides for MHC class II presentation and induce a diverse CD4+ T cell repertoire devoid of auto-reactive cells (62). Recently, autophagy has been implicated in the unconventional MHC class II self-peptide loading and presentation in the thymus.
Indeed, TECs exhibit high constitutive autophagosome formation in a starvation-independent fashion (63). Neonatal lethality of mice lacking autophagy, such as ATG5−/− or ATG7−/− mice (64,65), impedes the direct assessment of T-cell development in these conditions. Nevertheless, by transplanting embryonic Atg5−/− thymi under the renal capsule of normal adult recipients, it was demonstrated that autophagy in the thymic epithelium is essential for the establishment of a broad T-cell repertoire and for tolerance induction (63). In comparison to controls, transplanted thymi from knockout mice were smaller but exhibited normal epithelial differentiation and organization. In this setting, positive selection of some MHC class II-restricted TCR specificities was impaired in Atg5-deficient thymi. In contrast, absence of autophagy in TECs did not affect the CD8 T cell repertoire (63). Importantly, self-tolerance was compromised when thymi from Atg5−/− embryos were grafted into athymic nude mice. In this system, because of the complete deficiency of an endogenous thymus, the development of T cells relies entirely on the transplanted TECs. Between 4 and 6 weeks after grafting, transplanted mice with autophagy-deficient thymi exhibited clear signs of autoimmunity, such as progressive weight loss and inflammatory cell infiltrates in different organs (63). These results should, however, be taken with caution since the experimental system could be geared toward autoimmunity due to the lymphopenic recipients.
In addition, autophagosomes were shown to co-localize with MIIC in both cTECs and mTECs (66), emphasizing the potential role of the pathway in thymic selection. However, more recently, the importance of autophagy in TECs for T cell development and the establishment of self-tolerance has been re-challenged, with data suggesting that the lack of autophagy in TECs has only a minor impact on T cell repertoire development. Transgenic mice bearing a specific suppression of Atg7 or Atg5 in epithelial cells (ATG7 f/f K14-Cre or ATG5 f/f K5-Cre mice) exhibit unaltered thymic structure, a normal T cell repertoire, and no evidence of autoimmunity (67,68). Even though endogenous autophagy was efficiently deleted in epithelial cells of both the thymic medulla and cortex, no activation of CD4+ T cells, enhanced tissue inflammation, or autoimmune manifestations were observed in these models.
The difference between these models and the study by Klein et al. could be explained first by the different approaches used to abrogate autophagy in the thymus. In the first study, completely autophagy-deficient thymi were transplanted, whereas autophagy was specifically deleted in epithelial cells in the second study. A second possible explanation for this difference could be that the two studies were carried out on different mouse backgrounds. Finally, the lymphopenic hosts used in Klein's study are known to be more permissive to autoimmunity development (69,70).
Recently, a more refined model addressing the role of autophagy in the thymic epithelium for central tolerance was developed. A model antigen was expressed associated either with the mitochondria or with the plasma membrane. Both the intracellular and the membrane-bound forms of the antigen were directly presented by TECs when transgenic thymi were transplanted under the kidney capsule of MHC class II-deficient mice. Using this scenario, a role for hematopoietic APCs in negative selection was excluded. Importantly, expression of both neo-antigen forms resulted in clonal deletion of TCR-specific CD4+ thymocytes (71). Additionally, when autophagy was abrogated using Atg5−/− thymi transplanted into transgenic mice, negative selection of T cells recognizing the membrane-associated form of the protein was not affected. However, negative selection of T cells recognizing the intracellular antigen was dependent on autophagy, since it was abrogated in Atg5−/− mice, firmly establishing a role for autophagy in central tolerance toward some endogenously expressed intracellular antigens.
The direct implication of efficient endogenous antigen loading onto MHC class II by autophagy in mTECs was further characterized. By coupling an antigen to LC3 molecules, an elegant new model was designed to directly target the antigen to autophagosomes. In addition, expression of the fusion protein was placed under the transcriptional control of the Aire promoter. Despite the fact that both mTECs and DCs express Aire, only mTECs were able to induce effective cognate CD4+ T cell responses in ex vivo cultures, in an autophagy-dependent fashion. Moreover, using the same model, clonal CD4+ thymocyte deletion was also observed in vivo. Interestingly, mice expressing a mutated version of the fusion protein, unlinked to autophagosomes, exhibited similar negative selection of CD4+ thymocytes. Under these conditions, indirect presentation of this particular antigen by DCs compensated for the impaired direct presentation by mTECs. In addition, autophagy requirements in TECs for efficient negative selection could rely on the amount and the distribution of a given antigen (71).
Finally, a recent study has also reported an important role of autophagy in TECs for T cell selection. Using Clec16a knockdown mice in the non-obese diabetic (NOD) mouse model for type 1 diabetes, the authors unexpectedly found that these mice were protected from diabetes (72). The phenotype was related to a decrease in autophagosome formation in TECs from mice in which Clec16a was silenced. Interestingly, a general reduction of CD4+ T cell activation was observed. The precise mechanism of how Clec16a affects autophagy levels in TECs and, consequently, CD4+ T cell selection remains unclear. In addition, it is difficult to link a reduction in autophagosome formation in TECs with an overall hyporesponsiveness of CD4+ T cells. The authors speculate that the quality of the selected repertoire is different, but no particular auto-antigen specificity was addressed to explain why autoimmunity is dampened. Instead, a globally increased negative selection was hypothesized, as shown by a general decrease in CD4SP maturation. How this deficiency would exclusively affect self-reactive T cell function without impairing pathogen-specific T cell responses is difficult to understand. Although the precise mechanism needs further investigation, the novelty of the study resides in the fact that this is the first demonstration of how CLEC16A can affect autoimmune responses. Indeed, the genetic association of CLEC16A with multiple autoimmune diseases is finally linked to a molecular mechanism impacting autophagy and central tolerance.
Therefore, using non-redundant mechanisms, thymic APCs contribute to efficient CD4 + thymocyte differentiation and establishment of CD4 + T cell repertoire. Intrinsic features of each subset determine the pathways by which they obtain and process antigens for MHC class II loading. TECs constitute a unique non-hematopoietic cell subset expressing constitutively high levels of MHC class II but exhibiting a poor efficacy in capturing extracellular antigens. With disparities between cTECs and mTECs, macroautophagy has been convincingly demonstrated to participate in the effective loading of intracellular antigens onto MHC class II molecules for the essential process of central tolerance (Figure 3).
Conclusion
With the advance of the molecular era of autophagy and the identification of ATG genes and pathways, increasing research has demonstrated a prominent role for autophagy in previously unknown biological functions, including adaptive immunity (73). In this regard, autophagy plays an important new role in endogenous antigen processing and presentation of intracellular antigens through MHC class II molecules, with an important effect on CD4 + T cell responses. Indeed, the presentation of self-antigens in the thymus via autophagic pathways significantly contributes to shaping the T cell repertoire and to establishing central T cell tolerance.
In addition through enhancing MHC class II presentation of intracellular pathogen-derived antigens, autophagy contributes to efficient CD4 + T cell priming and actively shapes adaptive immune responses. Therefore, a better understanding of autophagic functions could be explored to increase the efficiency of vaccines. Moreover, it still remains to be elucidated whether autophagy is also involved in the presentation of self-antigens outside the thymus and if it would, then, play a role in peripheral CD4 + T cell tolerance induction and maintenance. Whether activation or suppression of autophagy could have therapeutic benefits in autoimmunity as well as inflammatory disorders requires further clarification.
K-space polarimetry of bullseye plasmon antennas
Surface plasmon resonators can drastically redistribute incident light over different output wave vectors and polarizations. This leads, for instance, to sub-diffraction-sized nanoapertures in metal films that beam light and to nanoparticle antennas that enable efficient conversion of photons between spatial modes or helicity channels. We present a polarimetric Fourier microscope as a new experimental tool to completely characterize the angle-dependent, polarization-resolved scattering of single nanostructures. Polarimetry allows determining the full Stokes parameters from just six Fourier images. The degree of polarization and the polarization ellipse are measured for each scattering direction collected by a high-NA objective. We showcase the method on plasmonic bullseye antennas in a metal film, which are known to beam light efficiently. We find rich results for the polarization state of the beamed light, including complete conversion of the input polarization from linear to circular and from one helicity to another. In addition to uncovering new physics for plasmonic groove antennas, the described technique is expected to have a large impact in nanophotonics, in particular for the investigation of a broad range of phenomena ranging from photon spin Hall effects and polarization-to-orbital-angular-momentum transfer to the design of plasmon antennas.
An ultimate goal of nanophotonics is to engineer single nanostructures, or clusters of them, capable of precisely manipulating the propagation, emission and absorption of light. A large interest in this capability stems on one hand from projected applications of plasmonics 1 and metamaterials 2,3 in domains ranging from improved photovoltaics 4 , efficient solid-state lighting 5 and on-chip optical components 1 to the improvement of research tools in spectroscopy and microscopy at the single molecule level 6 . On the other hand, the fields of plasmonics, metamaterials, and metasurfaces 1-3 continue to surprise both with new insights in the peculiar solutions of Maxwell's equations when exotic material responses are introduced, and in the parallels with solid-state phenomena such as the spin Hall effect [7][8][9][10][11][12] or topological insulator physics 13 .
The behavior of any single nanostructure in response to an incident optical wave is most generally described by a so-called t-matrix [14][15][16] , also known as scattering matrix or generalized transmission function. In the case of scattering, the t-matrix completely specifies the far-field distribution for any incident field, including polarization, amplitude, phase, and k-distribution of both fields [14][15][16] . Experimental techniques for the characterization of nanostructures, such as bright and dark field microscopy or NSOM, measure different subsets of the transmission function. However, there is no single technique that can map the complete transformation of incident light into the far field. This paper introduces high-NA k-space polarimetry 17-20 as a technique to measure the response of single scatterers to incident fields with different polarizations. This technique combines a Fourier microscope 21,22 , capable of mapping the k-vector distribution of scattered radiation, with a polarimeter 23-25 that measures the full polarization state for each wave vector. For a given incident k-vector distribution, a k-space polarimeter measures all information encoded in the transmission function of a scatterer across an entire microscope back aperture, up to an over-all phase.
In order to demonstrate k-space polarimetry we consider bullseye antenna scatterers (BEs), consisting of periodic grooves concentric to a circular hole in a plasmonic metal film, Fig. 1 (a). These antennas are among the simplest, most widely used, and best understood plasmonic structures that scatter light directionally [26][27][28][29] . Furthermore, bullseye antennas are widely studied for their ability to impart directionality to the fluorescence of fluorophores residing in the central aperture 30 . While the role of wavelength and antenna design on field enhancement and directionality is well understood 31 , the polarization state of the light scattered by BEs has been only partially characterized [32][33][34][35] . Here we use k-space polarimetry to measure the angle-resolved polarization state of the scattering of bullseyes under different illuminations. Our results show strong linear-to-circular polarization conversion at off-normal scattered wave vectors, showing that even a structure as ubiquitous as a bullseye antenna still contains surprising physics, relating to the emerging field of controlling spin-orbit coupling for photons [7][8][9][10][11][12] .
K-space polarimetry
At the basis of our k-space polarimeter is a conventional microscope with a "Bertrand" or "Fourier" lens. A Fourier microscope exploits the fact that the back focal plane of a microscope objective provides access to the entire distribution of k-vectors collected by it, which can be directly mapped onto a CCD camera chip. Fourier imaging has been applied, for instance, to image the radiation pattern of single emitters to determine their dipole moment 21,22 and to map the directivity optical antennas impart to emitters 30,[36][37][38] . Fourier microscopes have also been used in scattering experiments on single nanostructures 39 and nanostructure arrays [40][41][42][43] . The back focal plane in a Fourier microscope retains full information not only on momentum but also on other degrees of freedom such as energy (frequency) and polarization. Accessing this information requires additional analyzers. For instance, energy-resolved radiation patterns have been measured using spectrometers 44 or gratings 45 to disperse Fourier images.
Regarding polarization, Fourier microscopy measurements have been reported with a single linear polarizer as analyzer 39,46 , which only partially interrogates the polarization state of the scattered field. Given the nature of polarization, it is fundamentally impossible to retrieve the full state from linear-polarization measurements. Moreover, the very strong refraction of rays in high-NA aplanatic lenses means that imaging through linear analyzers behind the microscope objective does not correspond to a natural polarization basis. Indeed, in the beam behind the objective, the basis of s- and p-polarization applicable to the spherical wave emanating into the far field from a scatterer in the object plane converts to radial and azimuthal polarization, and not into orthogonal Cartesian polarizations. A simple linear-polarization analysis is therefore both incomplete and impractical.
Polarimeters perform complete polarization measurements, that is, measurements that allow retrieving the Stokes parameters S0, S1, S2 and S3. In this work we place a polarimeter in a Fourier microscope to determine the polarization state for each scattered k-vector. We use a rotating-plate polarimeter composed of a quarter wave plate (QWP) followed by a linear polarizer (LP) [23][24][25] . These two elements act as a linear polarizer when their optical axes are aligned and as a circular polarizer when the angle between their optical axes is π/4. If I a,b is the intensity measured after rotating the QWP by an angle a and the LP by an angle b (subscript labels expressed in degrees) with respect to the x axis, the Stokes parameters are given by sums and differences of these intensities; thus a total of six measurements is used to retrieve the four Stokes parameters. The first Stokes parameter, S0, corresponds to the total intensity. The other three parameters are given by the difference between intensities transmitted by orthogonally oriented polarizers: horizontal and vertical for S1, diagonal plus and minus for S2, and right- and left-handed circular for S3.
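As a concrete illustration of this six-measurement recipe, the following sketch assembles Stokes images from the recorded frames and computes the derived degrees of polarization; it is a minimal example, and the (QWP, LP) angle pair assumed here for the two circular-analyzer settings (QWP at 45° and 135° with the LP at 0°) may differ from the settings used in the actual instrument.

    import numpy as np

    def stokes_from_frames(I):
        # I is a dict of 2D intensity arrays keyed by (QWP angle, LP angle) in degrees.
        S0 = I[(0, 0)] + I[(90, 90)]          # total intensity
        S1 = I[(0, 0)] - I[(90, 90)]          # horizontal minus vertical
        S2 = I[(45, 45)] - I[(135, 135)]      # plus-diagonal minus minus-diagonal
        S3 = I[(45, 0)] - I[(135, 0)]         # right minus left circular (assumed settings)
        return S0, S1, S2, S3

    def degrees_of_polarization(S0, S1, S2, S3):
        DP = np.sqrt(S1**2 + S2**2 + S3**2) / S0   # total degree of polarization
        DLP = np.sqrt(S1**2 + S2**2) / S0          # degree of linear polarization
        DCP = np.abs(S3) / S0                      # degree of circular polarization
        return DP, DLP, DCP

    # Example with synthetic frames standing in for the six measured Fourier images:
    frames = {key: np.random.rand(128, 128) + 0.5 for key in
              [(0, 0), (90, 90), (45, 45), (135, 135), (45, 0), (135, 0)]}
    DP, DLP, DCP = degrees_of_polarization(*stokes_from_frames(frames))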
Since the Stokes parameters fully describe polarization, any other figure of merit for polarization can be retrieved from them. For instance, the total degree of polarization DP and the degrees of linear (DLP) and circular (DCP) polarization are given by DP = √(S1² + S2² + S3²)/S0, DLP = √(S1² + S2²)/S0 and DCP = |S3|/S0. This capability of a polarimeter to determine how much of a beam is actually polarized would be especially useful in plasmon-enhanced fluorescence experiments, where coupling to a plasmon resonance may impart polarization even to randomly oriented dipole emitters. Other intuitive, and commonly used, figures of merit for polarization that are easily obtained from the Stokes parameters include the amount of s- and p-polarized light, or the ellipticity and orientation of the polarization ellipse.
Experimental setup
Figure 2 shows our setup with its two main components: a homebuilt Fourier microscope (b) and a rotating-plate polarimeter (c). As a light source, we use a supercontinuum laser (Fianium) filtered by an acousto-optical tunable filter (AOTF) and a 20 nm band pass filter centered at 750 nm. A linear polarizer and a quarter wave plate set the input beam polarization, Fig. 2(a). A 10× objective weakly focuses the light on the sample and a 60× objective (NA = 0.7) collects the resulting radiation. The presented technique is not limited to these choices: both illumination and detection sides can encompass any available objective NA. Moreover, a full polarimetric k-vector mapping on the incident side is possible by scanning a mildly focused input beam in the back focal plane of the input objective 47 . On the collection side, light passes through a spatial filter in an intermediate image plane to isolate light scattered only by a single nanostructure. The spatial filter is composed of a 1:1 telescope (2f_telescope = 100 mm) and a 300 μm pinhole, equivalent to about 20 μm on the sample. To obtain Fourier images, i.e., to image the back focal plane of the objective onto a CCD camera (Photometrics CoolSnap EZ), we use an f_Fourier = 200 mm lens placed at a distance 4f_telescope + f_Fourier from the back focal plane, followed by an f_tube = 200 mm tube lens. The rotating-plate polarimeter consists of a broadband quarter wave plate (Thorlabs AQWP-600) and a linear polarizer (Thorlabs LPVIS100), and it is placed just before the tube lens.
Our sample was fabricated in a 200 nm thick gold film evaporated on a glass cover slip that was first coated with a 5 nm chromium adhesion layer. The bullseye antennas were imprinted on the film with a focused ion beam (FIB) by milling isolated circular holes with a diameter of 210 nm through the metal and engraving 8 grooves of about 60 nm depth that are concentric with the holes. We studied structures with different distances between consecutive grooves (pitch p) and between the central hole and the first groove (d). We present results for two structures: BE 440, with p = 440 nm and d = 330 nm, shown in Fig. 1(a), and BE 600, with p = 600 nm and d = 600 nm. In all cases the width of each groove was half the pitch. In our experiments the structure is immersed in water, so as to provide a scatterometry dataset that directly compares to the conditions also used in experiments on fluorescence enhancements in Ref. 30.
Measurements and results
In order to retrieve the Stokes parameters of the light scattered by each structure, we measured its k-space distribution with the six settings of the polarimeter described above. Figure 2(e) shows the set of measurements for BE 440 illuminated with vertically polarized light. The k-space distribution is a circular disk on the CCD chip because of the Abbe sine condition for microscope objective design. The center of this disk corresponds to the microscope optical axis, i.e., |k∥| = (2π/λ) sinθ = 0. The outer rim corresponds to the objective NA, and the distance d from image center to edge relates to angle as d ∝ sinθ (angle in air) 39,48,49 . As mentioned before, the back focal plane images directly correspond to the k∥ distribution of scattered light.
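Since distance from the image center scales with sinθ, converting pixel coordinates of a Fourier image into scattering angles only requires the calibrated center and NA-rim radius; a minimal sketch (with hypothetical calibration values) is:

    import numpy as np

    def pixel_to_sin_theta(x_px, y_px, x0, y0, rim_radius_px, NA=0.7):
        # Distance from the optical axis in pixels maps linearly onto sin(theta),
        # with the rim of the back-aperture image corresponding to sin(theta) = NA.
        d = np.hypot(x_px - x0, y_px - y0)
        return NA * d / rim_radius_px

    # A pixel halfway between center and rim corresponds to sin(theta) = 0.35 for NA = 0.7.
    print(pixel_to_sin_theta(150, 100, x0=100, y0=100, rim_radius_px=100))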
The raw data in Fig. 2(e) reveal the characteristic pattern for the light scattered by bullseye antennas first reported by Lezec et al. 27 . The intensity is strongly peaked in a narrow lobe around the forward direction, which is surrounded by a fringe at sinθ < 0.2. Similar behavior has been previously shown using conventional rotation-stage set-ups 35 and is usually explained by diffraction. At the measurement wavelength, the groove period matches a 2nd order Bragg condition for the surface plasmon supported at the grooved Au interface. Coherent addition of plasmons scattered out as light into the far field by the grooves thereby leads to a directional beam. The highest intensity is observed when analyzing the scattered light in the polarization-conserving channel (I 90,90 ), with an almost 10:1 ratio to the cross-polarized channel (I 0,0 ). However, even if most of the scattering retains the polarization of the incident light, the intensity of horizontally polarized light I 0,0 is non-negligible. In part this is expected, since plasmons launched at the hole are subsequently radiated by the grooves. Hence the p-polarized nature of surface plasmons must appear in the scattered light. Indeed, the presence of the grooves strongly raises the cross-polarized intensity to a level at least 3 orders of magnitude higher than obtained for a single hole (measurement not shown). The raw data also reveal more subtle and surprising details. For instance, the raw measurements using circular polarization analysis show that the scattered light is handed for particular wave vectors, even though the sample and the illumination are mirror symmetric. In particular, at off-normal angles lobes with left- and right-handed polarization appear at mirrored scattered wave vectors. The slight asymmetry in shape and intensity of different lobes in the same measurement is due to small misalignments of the angular position of the polarimeter elements, which we can determine only with a precision of ±2°. Other sources of error in this kind of measurements include mirrors and birefringent optical elements such as some microscope objectives.
We now discuss the Stokes parameters retrieved from the raw data, which provide complete (and non-redundant) information on the angular-dependent features of the polarization state of the scattered light. Figures 3 (a) and (b) show the Stokes parameters of the light scattered by the two antennas, BE 440 and BE 600 respectively, for four distinct input polarizations. In these figures, each row corresponds to a different incident polarization (vertical, horizontal, right- and left-handed circular), while each column represents different Stokes parameters. The first column shows the total intensity, S0, while the last three columns show the parameters S1, S2 and S3 normalized to the total intensity S0. The angle-dependent intensity summed over all polarization contributions, S0, is directly comparable to literature reports on the physics of bullseye antennas 30,35 . Since we consider a cylindrically symmetric structure excited along the symmetry axis, solely the incident polarization should determine the symmetry of scattering patterns in S0, as confirmed by Fig. 3 (a) and (b). While circularly polarized incident light results in rotationally invariant radiation patterns, this invariance is lost when illuminating with linearly polarized light since it breaks the symmetry of the system. The resulting elongated pattern rotates with the incident linear polarization, as was shown earlier in ref. 35 when studying the polarization of a single line (k_y = 0) in a Fourier image.
Regarding the polarization properties of the scattered light, our measurements show that at |k∥| = 0 the scattering always retains the polarization of the incident field, as required by the cylindrical symmetry of the system (structure plus incident and scattered wave vectors). At scattering vectors |k∥| ≠ 0 the incident polarization is not trivially translated into the polarization of the scattering and, for instance, there are well defined regions where input polarization and scattering are orthogonally polarized. In the case of linear incident polarization, these regions appear as four elongated lobes in S1/S0 in the first two rows of Fig. 3 (a) and (b). For incident circular polarization, annular regions appear where the scattering is oppositely handed to the incident field, as shown by S3/S0 in the last two rows of each figure. In both cases, at scattering angles surrounding these regions, the structures convert linear polarization into circular/elliptical polarization and vice versa. Thus, even these simple, cylindrically symmetric structures show strong polarization conversion at specific wave vectors, a phenomenon of recent interest in the field of controlling spin-orbit coupling of light by nanophotonic structures 7-12 . There are many other bases into which the retrieved polarization information may be cast, for instance to best bring out the geometry of a particular scattering problem or to best suit a researcher's intuition. As an example, we demonstrate the retrieval of different figures of merit for BE 440 illuminated with vertically and right-handed circularly polarized light. Figure 4 shows the retrieved degree of polarization DP, degree of linear polarization DLP and degree of circular polarization DCP. As would be expected for a completely coherent scattering process, even though the scattered field presents a complex polarization, the structure does not decrease the degree of polarization of the incident field. The scattering conserves the incident total degree of polarization DP = 1 for every k-vector, independently of the incident polarization. As shown before, most of the scattered light has a degree of linear (circular) polarization that closely matches the incident light. Conversion from linear to circular polarization and vice versa occurs in well defined regions where there is a quarter-wave phase difference between the light emanating directly from the hole (polarized as the incident field over the entire back aperture) and that radiated by the grooves (radially polarized over the back aperture owing to the p-polarized nature of plasmons). Figure 4 also shows the parameters of the ellipse described by the electric field vector as a function of time, which are a frequently used representation of the polarization state of fully polarized light 25 .
The ellipticity e, defined as the signed ratio of the semi-minor to the semi-major axis, takes values from 0 for linearly polarized light to ±1 for right- and left-handed circularly polarized light, while ψ denotes the orientation of the ellipse. The angle ψ runs from −π/2 to π/2, where 0 means that the major axis points along x. This representation not only highlights the strong polarization conversion, already evident in the degree of linear and circular polarization, but further allows a detailed tracking of the polarization ellipse orientation. In particular, it unveils the presence of so-called C-points, which are polarization singularities where ψ is undefined, corresponding to nodes of purely circularly polarized scattering [50][51][52][53][54][55] .
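For reference, the textbook relations linking the polarization-ellipse parameters of fully polarized light to the Stokes parameters are quoted below; they are standard results rather than expressions reproduced from this work:

    \psi = \tfrac{1}{2}\arctan\!\left(\frac{S_2}{S_1}\right), \qquad
    \chi = \tfrac{1}{2}\arcsin\!\left(\frac{S_3}{S_0}\right), \qquad
    e = \tan\chi,

where χ is the ellipticity angle, so that e runs from 0 (linear) to ±1 (circular) and ψ is undefined when S1 = S2 = 0, i.e., at the C-points discussed above.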
Comparison to a theoretical model
The polarization information obtained with k-space polarimetry can serve as a benchmark to test models currently used to describe the behavior of bullseye antennas. In particular, here we extend a common simplified, scalar model developed in Ref. 29,35,56 to describe the intensity distribution of scattering by BEs, to also predict all angular features in its polarization content. In brief, the accepted model for scattering by structures in gold consisting of a nanoaperture surrounded by corrugations is that the radiation pattern is composed of two contributions. First, the nanoaperture itself is assumed to radiate into the far field as a point source. Second, plasmons launched at the hole propagate into the film as a circular wave, approximated as e^{ik_SPP r}/√r, where k_SPP is the complex plasmon wave vector that accounts for phase accumulation and loss. The surface plasmons subsequently excite the grooves, which act as secondary sources radiating out into the far field. In this model, the radiated scalar field observed at a distance R from the scatterer, at a viewing angle set by the parallel wave vector |k∥|, is proportional to the coherent sum of the direct hole contribution and the fields re-radiated by the grooves. With the complex ratio A between the central hole contribution and that of the grooves, and the effective coherence length l_c, as free parameters, this scalar model has proven remarkably successful for explaining beaming (i.e., S0) despite the abstraction of each groove as an infinitely thin radiating circle 29,56,35 and the neglect of multiple scattering effects.
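To make the structure of this hole-plus-grooves interference model concrete, the sketch below evaluates a purely scalar version of it numerically. It is only one plausible reading of the description (free parameters A and l_c, grooves abstracted as uniform rings of secondary sources driven by the damped circular plasmon wave); the exact prefactors, the form of the coherence cutoff, and the vectorial ingredients used in the actual analysis may differ.

    import numpy as np
    from scipy.special import j0

    def bullseye_intensity(sin_theta, wavelength_nm, k_spp, A, l_c, groove_radii):
        # Scalar interference of the direct hole emission with the fields re-radiated
        # by the grooves, each treated as a thin ring excited by the plasmon wave.
        k0 = 2.0 * np.pi / wavelength_nm
        k_par = k0 * np.asarray(sin_theta)
        field = A * np.ones_like(k_par, dtype=complex)     # direct hole contribution
        for r in groove_radii:
            plasmon = np.exp(1j * k_spp * r) / np.sqrt(r)  # plasmon amplitude at radius r
            damping = np.exp(-r / l_c)                     # finite effective coherence length
            ring = r * j0(k_par * r)                       # far field of a uniform ring source
            field += plasmon * damping * ring
        return np.abs(field) ** 2

    # Illustrative BE 440-like groove radii (nm); k_spp and A are placeholder values.
    radii = 330.0 + 440.0 * np.arange(8)
    pattern = bullseye_intensity(np.linspace(0.0, 0.7, 200), wavelength_nm=750.0,
                                 k_spp=2 * np.pi / 720.0 + 1e-3j, A=0.1, l_c=4400.0,
                                 groove_radii=radii)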
To include polarization in this model, we make the following two key modifications. First, we consider the central hole as an in-plane electric dipole that both radiates into the far field and launches surface plasmon polaritons (SPPs) at the metal-air interface according to a cos φ in-plane angular amplitude distribution (φ being the angle between the in-plane dipole and the in-plane wave vector). Second, following Ref. 57, the grooves are modeled as lines of in-plane magnetic dipoles tangential to the grooves. We take the in-plane dipole moment induced in the central hole to be directly inherited from the input polarization of the driving field. Finally, as the radiation pattern for each elementary radiator in the system we use the full dipolar radiation pattern for dipoles above a substrate as derived by Lukosz et al. 58 . As parametric input to the model we use the surface plasmon polariton wave vector calculated from tabulated optical data 59 and include the groove periodicity as taken from the SEM characterization. The model then depends on several 'free parameters'. These are the complex ratio (A, written as |A|e^{iQ}) of the scattering amplitudes of the central hole and the grooves, the effective coherence length l_c, and the effective location of the magnetic dipoles relative to the center of the groove they represent. We parametrize these values through a, the effective radius at which the first groove occurs. Since we consider grooves with a 50% duty cycle, the relation between a and the actual distance between hole and first groove in the sample is by no means trivial. However, we naturally require that for a given structure the scattering patterns for any incidence condition are explained by the same parameter set. Figure 5 shows the comparison between the measured Stokes parameters, (a)-(c), and those calculated using the model, (b)-(d).
Here we focus on two systems, BE 440 illuminated with horizontally polarized light, (a)-(b), and BE 600 illuminated with right-handed circularly polarized light, (c)-(d). As free parameters, for both structures the ratio between the intensity scattered by the central hole and the grooves is taken as |A| = 0.1 mm −1 with Q = π/2, and the effective coherence length as a function of the pitch p is l_c = 10p. The differences in actual geometry result in a = 149 nm for BE 440 in (b) and a = 266 nm for BE 600 in (d). Figure S1 in the supplementary information shows the calculated patterns for all measurements in Fig. 3. Inspection of the calculation shows that the simple model reproduces all salient features in the data for all input polarizations, and for all output polarizations. These notably include the beaming in S0, the occurrence of pockets of output polarization orthogonal to the input, as well as the angular regions in which linearly polarized light is converted to circular polarization and vice versa.
It is important to notice that, while the scalar model 29,56,35 and its vectorial form are very robust for predicting the total intensity S0, the angle-dependent polarization features encoded in S1, S2 and S3 are much more sensitive than the total intensity to the choice of the free parameters. For instance, there is a large range of dipole amplitude choices where the intensity pattern hardly changes, whereas the polarization features vary dramatically, as shown in the supplementary information (Fig. S2). Also, we found that while a good match to the data is obtained for a particular combination of a, |A| and Q, this combination is not unique. This observation allows us to draw two conclusions. On one hand, the fact that the simple, commonly used model is so successful for predicting intensity patterns should in retrospect not be read as a validation per se of its input parameters, or of the involved approximations of abstracting the hole to a point and the grooves to infinitely thin circles of secondary radiators. Indeed, we find that good matching of the model to just the overall intensity, i.e., S0, is possible for a very wide range of input parameters. In contrast, matching the full set of Stokes parameters is much more demanding. We therefore draw, as a second conclusion, that Fourier polarimetry provides experimental signatures that are excellently suited to discriminate between different, improved models for the response of plasmonic bullseyes, and by extension also for spirals, plasmonic crystals, and array antennas.
Conclusions and perspectives
We have reported a new measurement technique to resolve the polarization state of light scattered by a single nanostructure as a function of wave vector across an entire microscope back aperture. The technique combines back focal plane imaging, also known as Fourier microscopy, with a polarimeter consisting of a linear polarizer and a quarter wave plate. From just six camera images, all polarization content can be retrieved, as we demonstrate for a simple bullseye antenna. Our results evidence some remarkable features of the scattering of BEs. The scattering pattern of BEs strongly depends on the incident polarization. Circular incident polarization results in rotationally symmetric patterns, while linear polarization results in patterns elongated in the direction of the polarization, as a consequence of the p-polarized nature of the involved plasmon excitation. While the scattered light dominantly retains the polarization of the incident field, there are well defined regions at |k| ≠ 0 where the polarizations of the incident and scattered fields are different, even orthogonal to the incident field, or completely converted in helicity from −1 to 0 or +1.
The reported measurement technique is equally applicable to fluorescent nanostructures, in which the total degree of polarization is not unity and is in fact an important parameter. For instance, while randomly oriented emitters should result in DP = 0, once they are strongly coupled to the resonance of a plasmonic nanorod, their emission is expected to inherit the orientation of the rod as the dominant polarization. Thus mapping the degree of polarization could be an important quantifier to measure the efficiency with which a nanostructure controls emission polarization. We argue that Fourier polarimetry is an easily implemented, yet extremely sensitive tool to test our understanding of a plethora of dielectric and metallic nanophotonic structures. Besides plasmonics, this technique could also be a useful tool to study molecular orientation in biological samples 60 or in spin populations 61 , optically induced magnetic order 62 , or as a complement to other imaging techniques such as orientation imaging microscopy 63 .
|
Characterization of in-situ zirconia/mullite composites prepared by sol-gel technique
ABSTRACT The main objective of this study was to investigate the role of zirconia addition to mullite through an in-situ reaction aimed at improving both the mechanical properties and the sinterability behavior. In this work, mullite–zirconia composites were produced using a sol-gel technique. Different amounts of zirconia (0, 10, 15, and 20 wt.%) were added to the mullite, and the calcined gels were sintered at 1550–1700°C for 1 h. The apparent porosity and bulk density of the blank and zirconia/mullite composites were estimated in accordance with ASTM C-20. The phase composition and sample morphology were evaluated via X-ray diffraction (XRD) and scanning electron microscopy analysis (SEM), respectively. Furthermore, the mechanical properties and thermal expansion coefficient (TEC) were also evaluated. The results revealed that the apparent porosity decreased and the density of the zirconia/mullite composites increased when the sintering temperature was increased from 1550 to 1700°C. However, the mechanical properties improved with increasing zirconia content and MZ20 sintered at 1700°C exhibited the maximum bending strength. The TEC results reflected the influence of the composition on the sample TEC. Samples with higher ZrO2 content yielded higher TEC figures than those with lower content.
Introduction
Mullite (3Al 2 O 3 .2SiO 2 ) exhibits excellent properties such as chemical and thermal stability, efficient utilization at high temperature, a high melting point, and high creep resistance [1][2][3]. However, this material suffers from drawbacks (e.g. low fracture toughness and poor sinterability) that have limited its application as a functional material [4,5]. Forming composites with this material has provided a solution for such problems. Many materials have been added to mullite to form composites. Addition of zirconia (ZrO 2 ) to mullite for composite formation enhances both the mechanical properties and the sinterability behavior and has hence gained significant attention [6]. The dispersion of fine ZrO 2 in the ceramic matrix improves the sinterability and the mechanical properties of the obtained composites [7]. In addition, improving the toughness of mullite by dispersing tetragonal zirconia (t-ZrO 2 ) in the mullite matrix has been extensively investigated [8,9]. The toughening mechanism is based on the t-ZrO 2 to m-ZrO 2 phase transformation [8,10,11], and it is well known that t-ZrO 2 is the most effective zirconia phase for toughening such composites. Zirconia/mullite composites have been manufactured using diverse production techniques and starting materials. A low-cost traditional sintering method has been widely used; in such a method, zircon, kaolinite, and alumina are used as starting materials [12,13]. Some authors have used halloysite, boehmite, and zirconia to prepare mullite/zirconia composites [14].
A study considering the effect of different starting materials on the properties of mullite/zirconia composites indicated that under high temperature sintering conditions, changing mullite and ZrO 2 sources had a slight influence on the sintering process and the produced phases. Moreover, the results revealed that the reaction sintered and in-situ formed composites exhibited low mechanical figures compared with the composites formed from the reaction involving ZrO 2 , SiO 2 , and Al 2 O 3 [15].
Many authors have evaluated the effect of mechanical activation on the mullite/zirconia formation temperature [16][17][18]. Some of these studies indicated that 60 h of activation enhanced the formation of a composite that was sintered at 1400°C for 4 h [16,17]. Another study [18] claimed that 100 h of activation reduced the sintering temperature of 40-h milled composites to 1420°C. Nevertheless, Sistani et al. [19] stated that increasing the milling time to 72 h enhanced uniformity of the phases and restrained the t to m zirconia transformation at 1550°C. Both the tensile strength and Vickers micro-hardness of the sintered samples were enhanced with increasing milling time. The same authors [20] studied the coinciding impact of the mechanical activation procedure and the addition of TiO 2 and zinc oxide (ZnO) to the starting materials. The impact on the diametral tensile strength, Weibull modulus, and surface roughness of the produced sintered samples was considered. They found that prolonging the mechanical activation to 72 h enhanced the formation of the t-ZrO 2 phase. However, calcination at 1550°C diminished the effect of the activation on the produced phases. They also reported that high sintering temperatures (up to 1550°C) are required to produce mullite/zirconia composites from batches containing ZnO. The results indicated that, despite the high temperatures required, the added ZnO had the most positive effect on the mechanical properties.
Various advanced sintering techniques such as spark plasma sintering, hot-pressing sintering, and microwave-assisted sintering [21][22][23], have recently been used to improve the sinterability of mullite composites. However, chemical methods such as hydrolytic precipitation and sol-gel techniques have been suggested as a more effective route than mechanical mixing methods [24]. Compared with the mechanical methods, the sol-gel technique shows higher homogeneity, finer grain size accompanied by high particle efficiency, and a reduction in the sintering temperature.
To improve the mechanical strength, specimens with a modified grain orientation have been fabricated. Grain orientation modification is achieved by means of templated grain growth. The introduction of templated grains in ceramic bodies has led to the formation of a textured microstructure on sintering at high temperatures [25]. Tape casting is one method that can be used to generate oriented templates, and accordingly, it may be employed for controlling the growth orientation of the needle-like mullite grains in the template. Tür et al. [26] found that a textured mullite/zirconia composite fabricated by mixing alumina, zircon, and aluminum borate contained textured mullite. They reported that the mechanical properties of the produced composite improved significantly due to the development of mullite grains in directions perpendicular and parallel to the longitudinal direction.
The current study focuses on the production of zirconia/mullite ceramic composites using a sol-gel technique. To improve the sinterability and homogeneity, various concentrations (0, 10, 15, and 20 mass%) of ZrO 2 were added in-situ to the mullite sol. The influence of ZrO 2 addition to the mullite matrix on the sinterability, phase composition, and microstructure was investigated. In addition, mechanical and thermal properties for the produced pure and the ZrO 2 /mullite composites were evaluated.
Sample preparation
As illustrated in Figure 1, pure mullite (MZ0) was prepared using a stoichiometric composition of Al 2 O 3 /SiO 2 with a molar ratio of (3:2). Firstly, 0.6 M Al(NO 3 ) 3 .9H 2 O was dissolved in a double-distilled water in a ratio of ANN:H 2 O ≈ 1:28. 0.4 M Si(OC 2 H 5 ) 4 was separately hydrolyzed with double-distilled water and a few drops of nitric acid to accelerate the hydrolysis process under continuous stirring at 80°C for nearly 2 h to obtain a stable sol. Subsequently, Si(OH) 4 sol was added in a dropwise manner to the Al(NO 3 ) 3 .9H 2 O solution. For fabrication of the zirconia/ mullite composite samples, the molar ratio of Al 2 O 3 and SiO 2 was kept constant at (3:2), and various amounts (10, 15, and 20 mass%) of ZrO 2 were added during the sol formation stage of the mullite. Mixing with a gentle stirring at 80°C was continued until gelation occurred. The formed gel was dried at 110°C for 24 h, and then calcined at 800°, 1000°, and 1200°C for formation of the pure mullite phase, with 1 h of soaking time in static air in an electrically heated furnace (heating and cooling rate is 5°C/min) (see Figure 1). The composite batches with 10, 15, and 20 mass% of ZrO 2 were denoted as MZ10, MZ15, and MZ20, respectively.
The calcined powders were subsequently ground (speed: 300 rpm) for 1 h in a planetary ball mill (PM100, Retch Gmbh), using a ZrO 2 jar and ZrO 2 balls as a grinding medium (2 cm diameter). To determine the physical properties and chemical composition, the batches were dry pressed (force: 30 kN) into disks (diameter: 10 mm and thickness: ≈3 mm) using a stainless mold. In addition, the mechanical properties were estimated from 5 × 5 × 60 mm rectangular bars. The pellets and rods were sintered at temperatures ranging from 1550°C to 1700°C with 1 h of soaking time in static air in an electrically heated furnace (heating and cooling rate: 5°C/min).
Mullite powder
Mullite phase formation at different calcination temperatures (800, 1000, and 1200°C) was evaluated via X-ray diffraction (XRD) analysis using a Philips X-ray diffractometer with a Cu target and Ni filter. The measurements were performed at a scanning speed of 0.02°/s over a 2θ range of 5-60°. The mullite grain size and morphology were determined by means of transmission electron microscopy (TEM; JEOL JEM-2100 Electron Microscope, HRTEM, Japan) analysis, operated at 300 kV. The sample was prepared by dispersing a small amount of the powder in acetone using ultrasonic bath for 30 min. A drop of the well-dispersed suspension was deposited on a carbon coated copper grid followed by drying the grid for evaporating the solvent before TEM examination.
Zirconia/mullite composite bodies
The apparent porosity and bulk density of both the mullite and the zirconia/mullite sintered samples were estimated using the Archimedes method performed in accordance with ASTM C-20 [27]. Moreover, the phase composition of the samples sintered at the optimum sintering temperature (1700°C) was determined by means of XRD analysis with a Philips X-ray diffractometer (model PW1730). In preparation for microstructural characterization, each sintered sample was polished and thermally etched for 15 min in air at a temperature 100°C lower than the respective sintering temperature. The samples were then coated with gold via sputter coating, thereby ensuring the electrical conductivity of each sample. Scanning electron microscopy (Quanta FEG 250, Holland) coupled with energy-dispersive X-ray spectroscopy (EDS) was employed for microstructural and elemental evaluations. Furthermore, the thermal expansion coefficient (TEC) was measured using an automatic Netzsch DIL402 PC instrument (Germany). The bending strength was investigated by means of a three-point bending test on a universal testing machine (Model LLOYD LRX5 K) and was calculated as follows: Bending strength σ (MPa) = 3FL/(2BD 2 ) (1), where F is the maximum force before failure (N), L is the span width between the two supports (mm), B is the width of the sample (mm), and D is the height of the sample (mm).
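For orientation, the sketch below evaluates the bending-strength expression of Eq. (1) together with the usual Archimedes relations underlying ASTM C-20 (dry weight, suspended weight, and water-saturated weight); the Archimedes formulas are quoted in their standard form as an assumption rather than taken from the standard's text.

    def bending_strength_mpa(F_newton, span_mm, width_mm, height_mm):
        # Three-point bending strength, sigma = 3 F L / (2 B D^2), in MPa.
        return 3.0 * F_newton * span_mm / (2.0 * width_mm * height_mm ** 2)

    def archimedes_c20(dry_g, suspended_g, saturated_g, water_density_g_cm3=1.0):
        # Standard Archimedes relations (assumed ASTM C-20 form): exterior volume from
        # buoyancy, apparent porosity in %, bulk density in g/cm3.
        exterior_volume_cm3 = (saturated_g - suspended_g) / water_density_g_cm3
        apparent_porosity = 100.0 * (saturated_g - dry_g) / (saturated_g - suspended_g)
        bulk_density = dry_g / exterior_volume_cm3
        return apparent_porosity, bulk_density

    print(bending_strength_mpa(F_newton=250.0, span_mm=40.0, width_mm=5.0, height_mm=5.0))  # ~120 MPa
    print(archimedes_c20(dry_g=3.10, suspended_g=2.05, saturated_g=3.20))  # ~8.7 %, ~2.70 g/cm3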
Characterization of the prepared mullite powder
The phase composition of the mullite gel calcined at various temperatures, (800°, 1000°, 1200°C) is given in Figure 2(a). As shown in the figure, the mullite (JCPDS 83-1881) was present as an amorphous phase at 800°C and started to crystallize at 1000°C. At 1200°C, mullite is present as a well crystalline phase. Some corundum (Al 2 O 3 ) phase peaks (JCPDS 46-1212) were also observed, but silica peaks were absent, owing possibly to the occurrence of silica as an amorphous phase. The particle size of the mullite powder sintered at 1200°C ranged from 6.91 to 20 nm, as shown in the TEM images presented in Figure 2(b). Furthermore, the image showed that the fine mullite particles are present as agglomerated clusters.
Characterization of mullite and zirconia/ mullite composite bodies
The effect of sintering temperature and ZrO 2 content on the bulk density and apparent porosity of the zirconia/mullite composites sintered at temperatures ranging from 1550° to 1700°C for 1 h is demonstrated in Figure 3a and b. As indicated in the figure, increasing both the sintering temperature and the ZrO 2 content led to an enhancement of the densification parameters. Pure mullite bodies were characterized by low bulk density and high porosity levels even for sintering at 1700°C. In contrast, the bodies composed of 20 wt.% ZrO 2 and 80 wt.% mullite sintered at 1700°C were characterized by high bulk density and the lowest apparent porosity. That is, the increase in the ZrO 2 content enhanced the densification process and increased the bulk density figures due to the high theoretical density of the ZrO 2 phase. The increase in density was accompanied by a decrease in the apparent porosity, that is, the density is inversely proportional to the porosity. The studied composites underwent only partial densification and were characterized by low-density figures. This resulted from the fact that on sintering at high temperatures the mullite grains grew into elongated shapes, which led to the formation of closed pores that diminished the density figures. Figure 4 presents the phase evolution of the zirconia/mullite composites sintered at the optimum sintering temperature. The results revealed that the MZ0 bodies are composed of a pure mullite phase (JCPDS no. 83-1881), whereas the MZ10, MZ15, and MZ20 bodies are composed of mullite and m-ZrO 2 (JCPDS no. 07-0343). No ZrO 2 transformation from m-ZrO 2 to t-ZrO 2 was observed at any of the studied compositions. Figure 5a shows scanning electron microscopy (SEM) micrographs of the pure mullite bodies densified at 1700°C for 1 h. As shown in the figure, each body contained many pores and mullite particles with poorly defined boundaries and sizes ranging from 0.42 to 1.30 µm. In addition, a few needle-like particles and the agglomeration of very fine mullite grains were observed.
An energy-dispersive X-ray analysis (EDS) pattern of the mullite grains is shown in Figure 5b. The pattern shows that mullite phase formation occurred at an Al 2 O 3 content of 62.15 wt.%. Data reported for the incongruent formation of mullite [28,29] indicates that mullite with 77.2 wt.% Al 2 O 3 (2Al 2 O 3 . SiO 2 , 2/1 mullite) is formed at 1890°C. Furthermore, the formation of 2/1 mullite is affected by many factors, such as the materials used for synthesis and the formation conditions. The decrease in the formation temperature shifts the composition of mullite toward low alumina content and high silica content (3Al 2 O 3 . 2SiO 2 , 3/2 mullite). The abovementioned data suggest that the formation of the silica-rich mullite phase resulted from the low sintering temperature employed, that is, 1700°C.
The MZ10 microstructure, Figure 6 (a), consists of randomly distributed acicular secondary mullite grains embedded in a matrix of primary mullite. The porosity increase and the density decrease of the MZ10 sintered bodies resulted from the grain size heterogeneity of the mullite matrix [30]. However, compared with the pure mullite, the sintered MZ10 bodies showed a microstructure characterized by the coalescence and arrangement of the ZrO 2 grains (white) in the matrix. Very few round ZrO 2 grains occurred in the intragranular regions, and most of the equiaxed, agglomerated ZrO 2 grains occurred at the triple junctions of the mullite grains. Previous studies based on specialized XRD techniques have reported that t-ZrO 2 is always present as intragranular particles, whereas m-ZrO 2 occurs mainly as intergranular particles [31].
When the ZrO 2 content was increased to 15 and 20 wt.% (see Figure 6 (b and c)), the microstructural homogeneity of the sintered bodies increased relative to that of MZ10. The ZrO 2 grains were quite homogeneously distributed in the mullite matrix, which is composed of stick-like, randomly oriented secondary mullite grains with almost the same grain size. It was observed that increasing the zirconia content enhanced both the densification and the growth and formation of well-crystallized secondary mullite. This could be due to the formation of a zircon (ZrSiO 4 ) phase resulting from the interaction between the silica of the mullite and the added zirconia. Pena and De Aza [32] have reported that a system containing zirconia and mullite formed a transient liquid phase at temperatures as low as 1450 ± 10°C, which contributes to the enhancement of the reaction rate and the sintering process. Also, at high temperature, about 1675°C, zircon (ZrSiO 4 ) decomposes to zirconia (ZrO 2 ) and silica (SiO 2 ) [33]. Such decomposition is mostly associated with the formation of a slight quantity of glassy phase. With the appearance of the glassy phase, the mullite grains start to lose their poorly defined boundaries ( Figure 5) and manifest as rectangular prisms, as shown in the SEM micrographs ( Figure 6) [7]. The quantity of the formed glassy phase is too small to be detected by either the XRD analysis or the SEM observations. Several studies have focused on modulating and enhancing the mechanical behavior of ceramic composites. The three-point bending strength results of the zirconia/mullite composites sintered at their optimum sintering temperature are given in Table 1. The XRD patterns of the composites were quite similar. However, Table 1 shows that the bending strength increased significantly from 34.01 ± 1.93 MPa for the pure mullite samples to 83.30 ± 2.46 MPa for the composites containing 20 wt.% ZrO 2 . This noticeable ZrO 2 -induced enhancement in the strength is attributed to the reduction in the apparent porosity and the corresponding increase in the bulk density observed for the bodies with high ZrO 2 content. In addition, the morphologies of both the mullite and the ZrO 2 , and their distribution and interaction, played an important role in improving the bending strength [34].
The thermal expansion coefficient TEC (α) of the sintered samples was measured (see Table 1 for the results). The heating and cooling rate of the samples up to 1000°C was 5°C/min. As indicated in Table 1, the lowest α (3.91 × 10 −6 /°C) at 1000°C was obtained for the pure mullite samples. The α value of each sample increased with the addition of ZrO 2 to mullite. Furthermore, the α value of the sample containing 20 mass% ZrO 2 was 5.94 × 10 −6 /°C, which is similar to that of monoclinic ZrO 2 (6 × 10 −6 /°C) [35,36]. The abovementioned results concur with the XRD findings, which indicated that the sintered samples are composed of mullite and monoclinic ZrO 2 .
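The mean linear thermal expansion coefficient reported by such a dilatometer over a temperature interval is defined in the usual way as

    \alpha = \frac{1}{L_0}\,\frac{\Delta L}{\Delta T} = \frac{L(T) - L_0}{L_0\,(T - T_0)},

so that, for example, α = 5.94 × 10 −6 /°C corresponds to an elongation of roughly 0.6% of the original length on heating from room temperature to 1000°C.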
It is clear from the obtained results that the combination of mullite and zirconia gave the produced composites unique properties, such as excellent thermal shock resistance, improved mechanical stability, and high chemical stability. All of these properties encourage the employment of these composites in many industrial applications, such as glass-industry kilns, where high chemical and corrosion stability are required, and thermal protection materials for combustors, aircraft, and gas turbine engines. In the steel industry, they are used in slide gate valves, which are an integral part of the continuous casting process for steel, and for the production of nozzles and plugs that need materials having both good thermal shock and erosion resistance.
Conclusion
In the present study, a zirconia/mullite nanocomposite was fabricated by means of an in-situ sol-gel preparation technique and sintering of the fabricated bodies at 1700°C. The most important findings of this study are summarized as follows: 1-Increasing both the sintering temperature and the ZrO 2 content led to an improvement in the densification parameters.
2-For all the compositions considered, no ZrO 2 transformation from m-ZrO 2 to t-ZrO 2 was observed.
3-The microstructural homogeneity of the sintered bodies increased with increasing ZrO 2 content. 4-The mechanical strength was enhanced by the addition of ZrO 2 . This enhancement is attributed to the improvement in the densification parameters observed for the bodies with a high ZrO 2 content. Furthermore, the morphologies of both the mullite and ZrO 2 and their distribution and interaction played an important role in improving the bending strength.
5-The thermal expansion coefficient increased with the addition of ZrO 2 and its value indicated the presence of m-ZrO 2 .
|
Investigating Transferability in Pretrained Language Models
How does language model pretraining help transfer learning? We consider a simple ablation technique for determining the impact of each pretrained layer on transfer task performance. This method, partial reinitialization, involves replacing different layers of a pretrained model with random weights, then finetuning the entire model on the transfer task and observing the change in performance. This technique reveals that in BERT, layers with high probing performance on downstream GLUE tasks are neither necessary nor sufficient for high accuracy on those tasks. Furthermore, the benefit of using pretrained parameters for a layer varies dramatically with finetuning dataset size: parameters that provide tremendous performance improvement when data is plentiful may provide negligible benefits in data-scarce settings. These results reveal the complexity of the transfer learning process, highlighting the limitations of methods that operate on frozen models or single data samples.
Introduction
Despite the striking success of transfer learning in NLP, remarkably little is understood about how these pretrained models improve downstream task performance. Recent work on understanding deep NLP models has centered on probing, a methodology that involves training classifiers for different tasks on model representations (Alain and Bengio, 2016; Conneau et al., 2018; Hupkes et al., 2018; Liu et al., 2019; Tenney et al., 2019a,b; Goldberg, 2019; Hewitt and Manning, 2019). While probing aims to uncover what a network has already learned, a major goal of machine learning is transfer: systems that build upon what they have learned to expand what they can learn. Given that most recent models are updated end-to-end during finetuning (e.g. Devlin et al., 2019; Howard and Ruder, 2018; Radford et al., 2019), it is unclear how, or even whether, the knowledge uncovered by probing contributes to these models' transfer learning success.
Figure 1: The three experiments we explore. Lighter shades indicate randomly reinitialized layers, while darker shades indicate layers with BERT parameters. For layer permutations, all layers hold BERT parameters; what changes between trials is their order. In all three experiments, the entire model is finetuned end-to-end on the GLUE task.
In a sense, probing can be seen as quantifying the transferability of representations from one task to another, as it measures how well a simple model (e.g., a softmax classifier) can perform the second task using only features from a model trained on the first. However, when pretrained models are finetuned end-to-end on a downstream task, what is transferred is not the features from each layer of the pretrained model, but its parameters, which define a sequence of functions for processing representations. Critically, these functions and their interactions may shift considerably during training, potentially enabling higher performance despite not initially extracting features correlated with this task. We refer to this phenomenon of how layer parameters from one task can help transfer learning on another task as transferability of parameters.
Figure 2: The benefit of using BERT parameters instead of random parameters at a particular layer varies dramatically depending on the size of the finetuning dataset. However, as finetuning dataset size decreases, the curves align more closely with probing performance at each layer. Solid lines show finetuning results after reinitializing all layers past layer k in BERT-Base. 12 shows the full BERT model, while 0 shows a model with all layers reinitialized. Line darkness indicates subsampled dataset size. The dashed lines show probing performance at each layer. Error bars are 95% CIs.
In this work, we investigate a methodology for measuring the transferability of different layer parameters in a pretrained language model to different transfer tasks, using BERT (Devlin et al., 2019) as our subject of analysis. Our methods, described more fully in Section 2 and Figure 1, involve partially reinitializing BERT: replacing different layers with random weights and then observing the change in task performance after finetuning the entire model end-to-end. Compared to possible alternatives like freezing parts of the network or removing layers, partial reinitialization enables fairer comparisons by keeping the network's architecture and capacity constant between trials, changing only the parameters at initialization. Through experiments across different layers, tasks, and dataset sizes, this approach enables us to shed light on multiple dimensions of the transfer learning process: Are the early layers of the network more important than later ones for transfer learning? Do individual layers become more or less critical depending on the task or amount of finetuning data? Does the position of a particular layer within the network matter, or do its parameters aid optimization regardless of where they are in the network?
We find that when finetuning on a new task:
1. Transferability of BERT layers varies dramatically depending on the amount of finetuning data available. Thus, claims that certain layers are universally responsible or important for learning certain linguistic tasks should be treated with caution. (Figure 2)
2. Transferability of BERT layers is not in general predicted by the layer's probing performance for that task. However, as finetuning dataset size decreases, the two quantities exhibit a greater correspondence. (Figure 2, dashed lines)
3. Even holding dataset size constant, the most transferable BERT layers differ by task: for some tasks, only the early layers are important, while for others the benefits are more distributed across layers. (Figure 3)
4. Reordering the pretrained BERT layers before finetuning decreases downstream accuracy significantly, confirming that pretraining does not simply provide better-initialized individual layers; instead, transferability through learned interactions across layers is crucial to the success of finetuning. (Figure 4)
How many pretrained layers are necessary for finetuning?
Our first set of experiments aims to uncover how many pretrained layers are sufficient for accurate learning of a downstream task. To do this, we perform a series of incremental reinitialization experiments, where we reinitialize all layers after the kth layer of BERT-Base, for values k ∈ {0, 1, ..., 12}, replacing them with random weights. We then finetune the entire model end-to-end on the target task. Note that k = 0 corresponds to a BERT model with all layers reinitialized, while k = 12 is the original BERT model. We do not reinitialize the BERT word embeddings. As BERT uses residual connections (He et al., 2016) around layers, the model can simply learn to ignore any of the reinitialized layers if they are not helpful during finetuning.
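The partial reinitialization described above can be sketched with the PyTorch and Transformers libraries as follows. This is a minimal illustration under the setup described in this section and the Appendix (truncated-normal reinitialization, layer norm reset to β = 0, γ = 1, Adam with learning rate 2e-5); the helper name and training details are illustrative assumptions, not the authors' exact code.

```python
# Minimal sketch: reinitialize all BERT-Base encoder layers above layer k,
# then finetune end-to-end. Helper names are illustrative, not the authors' code.
import torch
from transformers import BertForSequenceClassification

def reinit_layers_above(model, k):
    """Replace parameters of encoder layers k..11 with fresh random values.

    Layers 0..k-1 (and the word embeddings) keep their pretrained weights,
    matching the incremental-reinitialization setup described above.
    """
    def _init(module):
        if isinstance(module, torch.nn.Linear):
            torch.nn.init.trunc_normal_(module.weight, mean=0.0, std=0.02, a=-0.04, b=0.04)
            if module.bias is not None:
                torch.nn.init.zeros_(module.bias)
        elif isinstance(module, torch.nn.LayerNorm):
            torch.nn.init.ones_(module.weight)   # gamma = 1
            torch.nn.init.zeros_(module.bias)    # beta = 0
    for layer in model.bert.encoder.layer[k:]:
        layer.apply(_init)
    return model

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model = reinit_layers_above(model, k=6)                      # keep the first 6 pretrained layers
optimizer = torch.optim.Adam(model.parameters(), lr=2e-5)    # batch size 8, defaults otherwise
```

After this step, the whole model (pretrained and reinitialized layers alike) is trained end-to-end on the target task, so the reinitialized layers are free to learn or to be ignored via the residual connections.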
We use the BERT-Base uncased model, implemented in PyTorch (Paszke et al., 2019) via the Transformers library (Wolf et al., 2019). We finetune the network using Adam (Kingma and Ba, 2015), with a batch size of 8, a learning rate of 2e-5, and default parameters otherwise. More details about reinitialization, training, statistical significance, and other methodological choices can be found in the Appendix. We conduct our experiments on three English language tasks from the GLUE benchmark, spanning the domains of sentiment, reasoning, and syntax (Wang et al., 2018):

SST-2 The Stanford Sentiment Treebank involves binary classification of a single sentence from a movie review as positive or negative (Socher et al., 2013).
QNLI Question Natural Language Inference is a binary classification task derived from SQuAD (Rajpurkar et al., 2016; Wang et al., 2018). The task requires determining whether, for a given (QUESTION, ANSWER) pair, the QUESTION is answered by the ANSWER.
CoLA The Corpus of Linguistic Acceptability is a binary classification task that requires determining whether a single sentence is linguistically acceptable (Warstadt et al., 2019).
Because pretraining appears to be especially helpful in the small-data regime (Peters et al., 2018), it is crucial to isolate task-specific effects from data quantity effects by controlling for finetuning dataset size. To do this, we perform our incremental reinitializations on randomly-sampled subsets of the data: 500, 5k, and 50k examples (excluding 50k for CoLA, which contains only 8.5k examples). The 5k subset size is then used as the default for our other experiments. To ensure that an unrepresentative sample is not chosen by chance, we run multiple trials with different subsamples. Confidence intervals produced through multiple trials also demonstrate that trends hold regardless of intrinsic task variability.
While similar reinitialization schemes have been explored in prior work, e.g., by Yosinski et al. in a computer vision context and more recently in an NLP context, none investigate these data quantity- and task-specific effects. Figure 2 shows the results of our incremental reinitialization experiments. These results show that the transferability of a BERT layer varies dramatically based on the finetuning dataset size. Across all but the 500-example trials of SST-2, a more specific trend holds: earlier layers provide more of an improvement in finetuning performance when the finetuning dataset is large. This trend suggests that larger finetuning datasets may enable the network to learn a substitute for the parameters in the middle and later layers. In contrast, smaller datasets may leave the network reliant on existing feature processing in those layers. However, across all tasks and dataset sizes, it is clear that the pretrained parameters by themselves do not determine the impact they will have on finetuning performance: instead, a more complex interaction occurs between the parameters, the optimizer, and the available data.
3 Does probing predict layer transferability?
What is the relationship between transferability of representations, measured by probing, and transferability of parameters, measured by partial reinitialization? To compare, we conduct probing experiments for our finetuning tasks on each layer of the pretrained BERT model. Our probing model averages each layer's hidden states, then passes the pooled representation through a linear layer and softmax to produce probabilities for each class. These task-specific components are identical to those in our reinitialization experiments; however, we keep the BERT model's parameters frozen when training our probes. Our results, presented in Figure 2 (dashed lines), show a significant difference between the layers with the highest probing performance and reinitialization curves for the data-rich settings (darkest solid lines). For example, the probing accuracy on all tasks is near chance for the first six layers. Despite this, these early layer parameters exhibit significant transferability to the finetuning tasks: preserving them while reinitializing all other layers enables large gains in finetuning accuracy across tasks. Interestingly, however, we observe that the smallest-data regime's curves are much more similar to the probing curves across all tasks than the larger-data regimes. Smaller finetuning datasets enable fewer updates to the network before overfitting occurs; thus, it may be that finetuning interpolates between the extremes of probing (no data) and fully-supervised learning (enough data to completely overwrite the pretrained parameters). We leave a more in-depth exploration of this connection to future work.
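The probing setup described above can be sketched as a frozen BERT encoder with a trainable linear head on the mean-pooled hidden states of a single layer. The class name and pooling details below are illustrative assumptions; only the overall structure (frozen parameters, per-layer average pooling, linear classifier) follows the text.

```python
# Minimal sketch of the probing model: BERT is frozen, and a linear classifier
# is trained on the mean-pooled hidden states of one chosen layer.
import torch
from transformers import BertModel

class LayerProbe(torch.nn.Module):
    def __init__(self, layer_index, num_classes=2):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
        for p in self.bert.parameters():
            p.requires_grad = False                 # keep pretrained parameters frozen
        self.layer_index = layer_index              # hidden_states[0] is the embedding output;
        self.classifier = torch.nn.Linear(          # index k gives the output of encoder layer k
            self.bert.config.hidden_size, num_classes)

    def forward(self, input_ids, attention_mask):
        with torch.no_grad():
            outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        hidden = outputs.hidden_states[self.layer_index]          # (batch, seq, hidden)
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)     # average over real tokens
        return self.classifier(pooled)                            # softmax applied inside the loss
```

Training only the classifier on these pooled features gives the per-layer probing curves shown as dashed lines in Figure 2.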
4 Which layers are most useful for finetuning?
While the incremental reinitializations measure each BERT layer's incremental effect on transfer learning, they do not assess each layer's contribution in isolation, relative to either the full BERT model or an entirely reinitialized model. Measuring this requires eliminating the number of pretrained layers as a possible confounder. To do so, we conduct a series of localized reinitialization experiments, where we take all blocks of three consecutive layers and either 1) reinitialize those layers or 2) preserve those layers while reinitializing the others in the network [1]. These localized reinitializations help determine the extent to which BERT's different layers are either necessary (performance decreases when they are removed) or sufficient (performance is higher than random initialization when they are kept) for a specific level of performance. Again, BERT's residual connections permit the model to ignore reinitialized layers' outputs if they harm finetuning performance. These results, shown in Figure 3, demonstrate that the earlier layers appear to be generally more helpful for finetuning relative to the later layers, even when controlling for the amount of finetuning data. However, there are strong task-specific effects: SST-2 appears to be particularly damaged by removing middle layers, while the effects on CoLA are distributed more uniformly. The effects on QNLI appear to be concentrated almost entirely in the first four layers of BERT, suggesting opportunities for future work on whether sparsity of this sort indicates the presence of easy-to-extract features correlated with the task label. These results support the hypothesis that different kinds of feature processing learned during BERT pretraining are helpful for different finetuning tasks, and provide a new way to gauge similarity between different tasks.

Figure 3: Early layers provide the most QNLI gains, but middle ones yield an added boost for CoLA and SST-2. Finetuning results for 1) reinitializing a consecutive three-layer block ("block reinitialized") and 2) reinitializing all other layers ("block preserved").

[1] See the Appendix for more discussion and experiments where only one layer is reinitialized.
5 How vital is the ordering of pretrained layers?
We also investigate whether the success of BERT depends mostly on learned inter-layer phenomena, such as learned feature processing pipelines (Tenney et al., 2019a), or intra-layer phenomena, such as a learned feature-agnostic initialization scheme which aids optimization (e.g., Glorot and Bengio, 2010). To approach this question, we perform several layer permutation experiments, where we randomly shuffle the order of BERT's layers before finetuning. The degree to which finetuning performance is degraded in these runs indicates the extent to which BERT's finetuning success is dependent on a learned composition of feature processors, as opposed to providing better-initialized individual layers which would help optimization anywhere in the network. These results, plotted in Figure 4, show that scrambling BERT's layers reduces their finetuning ability to not much above that of a randomly-initialized network, on average. This decrease suggests that BERT's transfer abilities are highly dependent on the inter-layer interactions learned during pretraining.
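A layer permutation of this kind can be sketched as follows; the function name and seed handling are illustrative assumptions rather than the authors' code.

```python
# Minimal sketch of the layer-permutation experiment: the twelve pretrained
# encoder layers are shuffled in place before finetuning.
import random
import torch
from transformers import BertForSequenceClassification

def permute_encoder_layers(model, seed=0):
    """Randomly reorder BERT's encoder layers and return the permutation used."""
    rng = random.Random(seed)
    order = list(range(len(model.bert.encoder.layer)))
    rng.shuffle(order)
    model.bert.encoder.layer = torch.nn.ModuleList(
        [model.bert.encoder.layer[i] for i in order]
    )
    return order

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
permutation = permute_encoder_layers(model, seed=0)
```

Reusing the same seed (and hence the same permutation) for the nth run of each task makes it possible to compare performance across tasks for identical orderings, as done in the correlation analysis below.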
We also test for correlation of performance between tasks. We do this by comparing task-pairs for each permutation, as we use the same permutation for the nth run of each task. The high correlation coefficients for most pairs shown in Table 1 suggest that BERT finetuning relies on similar inter-layer structures across tasks.
Conclusion
We present a set of experiments to better understand how the different pretrained layers in BERT influence its transfer learning ability. Our results reveal the unique importance of transferability of parameters to successful transfer learning, distinct from the transferability of fixed representations assessed by probing. We also disentangle important factors affecting the role of layers in transfer learning: task vs. quantity of finetuning data, number vs. location of pretrained layers, and presence vs. order of layers. While probing continues to advance our understanding of linguistic structures in pretrained models, these results indicate that new techniques are needed to connect these findings to their potential impacts on finetuning. The insights and methods presented here are one contribution toward this goal, and we hope they enable more work on understanding why and how these models work.
B Reinitialization
We reinitialize all parameters in each layer, except those for layer normalization (Ba et al., 2016), by sampling from a truncated normal distribution with µ = 0, σ = 0.02 and truncation range (−0.04, 0.04). For the layer norm parameters, we set β = 0, γ = 1. This matches how BERT was initialized (see the original BERT code on GitHub and the corresponding TensorFlow documentation).
C Subsampling, number of trials, and error bars
The particular datapoints subsampled can have a large impact on downstream performance, especially when data is scarce. To capture the full range of outcomes due to subsampling, we randomly sample a different dataset for each trial index. Due to this larger variation when data is scarce, we perform 50 trials for the experiments with 500 examples, while we perform three trials for the other incremental reinitialization experiments. A scatterplot of the 500-example trials is shown in Figure 5.
For the localized reinitialization experiments, we perform ten trials each. Error bars shown on all graphs in the main text are 95% confidence intervals calculated with a t-distribution.
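The error-bar computation described above can be sketched as follows; the helper name and the example scores are illustrative, not values from the paper.

```python
# Minimal sketch: 95% confidence interval for the mean of n trial scores,
# using the t-distribution as described above.
import numpy as np
from scipy import stats

def mean_and_ci(scores, confidence=0.95):
    scores = np.asarray(scores, dtype=float)
    n = len(scores)
    mean = scores.mean()
    sem = scores.std(ddof=1) / np.sqrt(n)                      # standard error of the mean
    half_width = stats.t.ppf((1 + confidence) / 2, df=n - 1) * sem
    return mean, half_width

# e.g., three finetuning trials with different data subsamples (illustrative numbers)
print(mean_and_ci([0.901, 0.894, 0.907]))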
D Localized reinitializations of single layers
We also experiment with performing our localized reinitialization experiments at the level of a single layer. To do so, we perform three trials of reinitializing each layer k ∈ {1 . . . 12} and then finetuning on each of the three GLUE tasks. Our results are plotted in Figure 6. Interestingly, we observe little effect on finetuning performance from reinitializing each layer (except for reinitializing the first layer on CoLA performance). This lack of effect suggests either redundant information between layers or that the "interface" exposed by the two neighboring layers somehow beneficially constrains optimization.
E Number of finetuning epochs
He et al. (2019) found that much or all of the performance gap between an ImageNet-pretrained model and a model trained from random initialization could be closed when the latter model was trained for longer. To evaluate this, we track validation losses for up to ten epochs in our incremental experiments, for k ∈ {0, 6, 12}, across all tasks and for 500 and 5k examples. We find minimal effects of training longer than three epochs for the subsamples of 5k, but find improvements of several percentage points from training for five epochs in the trials with 500 examples. Thus, for the 500-example trials in Figure 2, we train for five epochs, while training for three epochs for all other trials. We train our probing experiments (8 trials per layer) with early stopping for a maximum of 40 epochs on the full dataset.
F Higher learning rate for reinitialized layers
In their reinitialization experiments on a convolutional neural network for medical images, Raghu et al. (2019) found that a 5× larger learning rate on the reinitialized layers enabled their model to achieve higher finetuning accuracy. To evaluate this possibility in our setting, we increase the learning rate by a factor of five for the reinitialized layers. The results for our incremental reinitializations are plotted in Figure 7. A higher learning rate appears to increase the variance of the evaluation metrics while not improving performance. Thus, we keep the learning rate the same across layers.
G Layer norm
Because the residual connections around each sublayer in BERT are of the form LayerNorm(x + Sublayer(x)), reinitializing a particular layer neutralizes the effect of the last layer norm application from the previous layer in a way that cannot be circumvented through the residual connections. However, for brevity we simply refer to "reinitializing a layer" in this paper. We also assessed whether preserving the layer norm parameters in each layer might aid optimization. To do so, we preserved these parameters in our incremental trials with 5k examples. These trials are plotted in Figure 8, and demonstrate that preserving layer norm does not aid (and may even harm) finetuning of reinitialized layers.
H Dataset descriptions and statistics
We display more information about the finetuning datasets, including the full size of the datasets, in
I.2 Computing infrastructure
All experiments were run on single Titan XP GPUs.
I.4 Average runtime
Average runtime for each approach:
I.5 Evaluation method
To evaluate the performance of our method, we compute accuracy for SST-2 and QNLI and the Matthews Correlation Coefficient (Matthews, 1975) for CoLA. We always compute these metrics on the official validation sets, which are never seen by the model during training. Accuracy measures the ratio of correctly predicted labels over the size of the test set. Formally:

accuracy = (TP + TN) / (TP + TN + FP + FN)

Since CoLA presents class imbalances, MCC is used, which is better suited for unbalanced binary classifiers (Warstadt et al., 2019). It measures the correlation of two Boolean distributions, giving a value between -1 and 1. A value of 0 means that the two distributions are uncorrelated, regardless of any class imbalance.

MCC = (TP · TN − FP · FN) / sqrt((TP + FP)(TP + FN)(TN + FP)(TN + FN))
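The two metrics above can be computed directly from confusion-matrix counts, as in the short sketch below; equivalent helpers also exist in scikit-learn (accuracy_score, matthews_corrcoef). The example counts are illustrative only.

```python
# Minimal sketch of the evaluation metrics defined above, computed from
# binary confusion-matrix counts.
import math

def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

def mcc(tp, tn, fp, fn):
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

print(accuracy(tp=40, tn=45, fp=5, fn=10))  # 0.85
print(mcc(tp=40, tn=45, fp=5, fn=10))       # ~0.70
```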
I.6 Hyperparameters
We performed one experiment with a 5x learning rate and implemented early stopping to choose the number of epochs for the probing experiments.
For batch size and learning rate, we kept the default parameters for all tasks:
• Learning rate: 2e-5
• Batch size: 8
|
v3-fos-license
|
2022-08-31T15:03:49.613Z
|
2022-08-27T00:00:00.000
|
251948062
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2077-1312/10/9/1201/pdf?version=1661599295",
"pdf_hash": "ee25ba296176929b09ae6070a79bf9231feab9ea",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42429",
"s2fieldsofstudy": [
"Engineering"
],
"sha1": "e063c28d7d9329e80a208d5801bd2558cb075784",
"year": 2022
}
|
pes2o/s2orc
|
Numerical Simulation of an Air-Bubble System for Ice Resistance Reduction
Ships sailing through cold regions frequently encounter floe ice fields. An air-bubble system that reduces friction between the hull and ice floes is thus considered useful for the reduction of ice-induced resistance. In this study, a numerical analysis procedure based on a coupled finite volume method (FVM) and discrete element method (DEM) is proposed to simulate complicated hull-water-gas-ice interactions for ice-going ships installed with air-bubble systems. The simulations reveal that, after turning on the air-bubble system, ice floes in contact with the hull side wall are pushed away from the hull by the gas-water mixture, resulting in an ice-free zone close to the side hull. It is found that the drag reduction rate increases with the increase of ventilation, while the bow ventilation plays a deciding role in the overall ice-resistance reduction. The proposed procedure is expected to facilitate the design of new generations of ice-going ships.
Introduction
As a consequence of global warming, the polar marginal ice zone has been observed to become wider [1]. The polar marginal ice zone is characterized by floe ice fields that cover 15-80% of the sea surface [2]. The existence of ice in the water induces extra resistance for ships sailing in floe ice fields [3][4][5]. During the hull-ice interaction, significant kinetic energy from ship propulsion is dissipated, resulting in speed loss or additional fuel consumption [6,7]. An air-bubble system that reduces the friction between the hull and the ice floes is thus considered useful for the reduction of ice-induced resistance in floe ice fields.
The air-bubble system discussed in this article has something in common with, but differs from, the air lubrication technology that has been utilized for drag reduction. Ship air lubrication technology can be subdivided into two main categories. The first group is termed bubble-induced skin-friction drag reduction (BDR), for which a large number of micro-bubbles are injected into the boundary layer. The second group is termed air layer drag reduction (ALDR), which forms a continuous gas layer on the hull surface [8]. Both groups utilize air as a lubricant, which has been proven to decrease the friction between the ship and the seawater; see, e.g., [9,10]. The air-bubble system of this study is more similar to the former group of air lubrication technology, but aims to reduce hull-ice friction instead. This is achieved by injecting air from a series of nozzles at the bow and bilge. When the air bubbles rise along the hull, the mixed air and water create a strong current, forming a layer between the hull and the ice floes and consequently reducing ice resistance.
The idea of reducing ice resistance by air bubbles originated in the late 1960s [11]. Air-bubble systems were studied mainly through model tests. Up to the early 1990s, several icebreakers and ice-going vessels that mainly operate in the Baltic Sea were installed with air-bubble systems [12]. Little progress in air-bubble systems for ice resistance reduction has been reported in recent decades. Nevertheless, as more ships sail into the polar floe ice fields, an air-bubble system may bring added value to ice-going ships' efficiency and safety, and thus deserves further investigation. Furthermore, the fast development of computational fluid dynamics (CFD) methods makes it possible to investigate complicated hull-water-air-ice interaction processes with numerical simulations, instead of depending on high-cost model tests. In this work, the authors aim to make use of state-of-the-art numerical methods to quantify the ice resistance characteristics of ice-going ships installed with air-bubble systems.
There are two major numerical methods for simulating air-bubbles.The first one is termed the interface tracking method [13].With this approach, the fluid interface can be accurately defined.However, this method requires a complicated process of mesh reconstruction, and it has the disadvantage of mass and energy loss of the bubbles.The second method is called the interface capturing method [14], in which the fluid interface does not have to be accurately defined.The different liquids are instead distinguished through additional fluid variables such as the mass fraction.The interface capturing method requires a large number of grid cells to keep the accuracy.This method is represented by the method of the volume of fluid (VOF) approach, which is often employed in the simulation of large bubble motions and free surface flow in liquids [15].Some recent research making use of the VOF approach are as follows: Zhu et al. [16] used the VOF method to investigate the effects of gas velocity, liquid velocity and other factors on the bubble detachment diameter.Based on the VOF method, Li et al. [17] described the deformation during the ascent of a single bubble in gas-liquid, gas-liquid-solid multiphase flow under high pressure.Tsui et al. combined the VOF method with the Level Set method to simulate rising bubbles in still water and got agreeable results [18].
Numerical simulations of ship resistance characteristics in ice-infested waters have been presented by many researchers.A recent review paper by Li and Huang [19] indicates that the discrete element method (DEM) predicates reasonable resistance induced by broken ice.Hansen and Loset [20] applied a two-dimensional DEM model to simulate the ice force on ships in broken ice.Ji et al. [21,22] used the GPU parallel algorithm to accelerate the DEM calculation, making it possible to use the DEM method to calculate sea ice structure interaction in the large-scale calculation domain.Luo et al. [23] applied combined CFD-DEM to study the coupling characteristics of ship-ice-water in the brash ice channel and analyzed the difference between one-way coupling and two-way coupling.In addition to the DEM approach, other numerical methods were also employed for ship-ice interaction simulations.Kim et al. [24] simulated ice resistance in ice channels through the finite element method (FEM) and found the simulation results were in good agreement with the model test results.Lubbad and Loset [25] simulated ships in ice through the physics engine PhysX, and compared it with the full-scale measurements.Furthermore, there are other emerging numerical methodologies, such as the Peridynamics (PD) method, the Smooth Particle Hydrodynamics (SPH) method, and the Extended Finite Element (XFEM) method.All those numerical methods have the potential to simulate ship-ice interactions with reasonable accuracy [19,26].
In this paper, the authors utilized a combined CFD-DEM approach to simulate ship resistance in floe ice fields with the air-bubble system installed. The air-water interface was simulated using the VOF method. An icebreaker was chosen as the case study vessel. By this means, we aim to find out how effective the air bubbles are for drag reduction in ice-infested waters.
The Numerical Models
In this section, the features of the numerical models of this study are summarized and the key theoretical formulations are presented as follows.
The Governing Equations of the Incompressible Fluid
In this study, the finite volume method (FVM) is used to discretize the fluid domain. The governing equations are the integral conservation equations of mass and momentum, where t is time, V is the volume of the fluid element, a is the area vector, v is the velocity vector of the fluid element, S_u is the source term of the continuity equation, p is the pressure, T is the viscous stress tensor, f_b is the resultant force of the body forces, and s_u is the source term of the momentum conservation equation. The viscous stress tensor is expressed in terms of the dynamic viscosity coefficient µ and the unit tensor I.
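For reference, the governing equations can be written compactly in the standard integral form used by finite-volume solvers. The form below is an assumed reconstruction that matches the symbol definitions in this subsection; the authors' exact expression (for instance, the grouping of the source terms) may differ.

```latex
% Assumed standard integral form, matching the symbols defined above.
\frac{\partial}{\partial t}\int_V \rho \,\mathrm{d}V + \oint_A \rho\,\mathbf{v}\cdot\mathrm{d}\mathbf{a} = \int_V S_u \,\mathrm{d}V

\frac{\partial}{\partial t}\int_V \rho\,\mathbf{v}\,\mathrm{d}V + \oint_A \rho\,\mathbf{v}\otimes\mathbf{v}\cdot\mathrm{d}\mathbf{a}
  = -\oint_A p\,\mathbf{I}\cdot\mathrm{d}\mathbf{a} + \oint_A \mathbf{T}\cdot\mathrm{d}\mathbf{a}
    + \int_V \left(\mathbf{f}_b + \mathbf{s}_u\right)\mathrm{d}V

\mathbf{T} = \mu\left[\nabla\mathbf{v} + (\nabla\mathbf{v})^{\mathsf{T}} - \tfrac{2}{3}(\nabla\cdot\mathbf{v})\,\mathbf{I}\right]
```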
The Governing Equations of the Discrete Phase in Numerical Simulation
The governing equations of the discrete phase follow the Lagrangian framework.The surface force and physical force acting on the particle jointly determine the change of particle momentum, and its momentum conservation equation is: In this equation, m p is particle mass, v p is particle velocity, F s is surface force, F b is body force, F d is the drag force, F p is pressure gradient force, F vm is virtual mass force, F g is the gravity, F con is the contact force.The two most critical items are the calculation of the drag force and the contact force.The former involves the treatment of the gas-liquid mixed phase, and the latter depends on the choice of the contact model.The calculation of both will be introduced in subsequent sub-sections.
The conservation of angular momentum of the particle can be expressed as: where I p is the moment of inertia of the particle, ω p is the angular velocity of the particle, M b is the resistance moment, r c is the vector from the contact point to the center of gravity, F ci is the contact force between particle c and particle i, and M ci is the moment of rolling resistance acting on the particle.
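For reference, the particle momentum balances described above can be written as follows. This is a standard form assembled from the symbol definitions in this subsection and is an assumption, not a verbatim copy of the authors' equations.

```latex
% Assumed form of the translational and angular momentum balances for a DEM particle.
m_p \frac{\mathrm{d}\mathbf{v}_p}{\mathrm{d}t}
  = \underbrace{\mathbf{F}_d + \mathbf{F}_p + \mathbf{F}_{vm}}_{\mathbf{F}_s\ \text{(surface forces)}}
  + \underbrace{\mathbf{F}_g + \mathbf{F}_{con}}_{\mathbf{F}_b\ \text{(body and contact forces)}}

I_p \frac{\mathrm{d}\boldsymbol{\omega}_p}{\mathrm{d}t}
  = \mathbf{M}_b + \sum_i \left(\mathbf{r}_c \times \mathbf{F}_{ci} + \mathbf{M}_{ci}\right)
```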
The Turbulence Model and the Free Surface Treatment
The turbulence model of the RANS equation used in the numerical simulation of this paper is the standard k-ε model.The existence of the turbulence model is to make the Reynolds-averaged Navier-Stokes equation closed.For the RANS equation, the average value can be regarded as the time average of the steady-state situation and the overall average of repeatable transient situations.Inserting the decomposed solution variables into the Navier-Stokes equations yields equations of average quantities.The conservation equations of average mass and average momentum can be expressed as: where ρ is the density, v is the mean velocity, p is the mean pressure, T is the mean viscous stress tensor, and f b is the resultant force of body forces (such as gravity, centrifugal force, etc.).This equation is essentially the same as the original N-S equation, with an extra term T RANS added to the momentum equation, which is the stress tensor.
The standard k-ε model is a two-equation model that determines the turbulent length and time scale by solving two independent transport equations.This model assumes a fully turbulent fluid flow and does not take into account the effects of molecular viscosity.The transport equation corresponding to the turbulent kinetic energy and dissipation rate of the standard k-ε model is of the form: where G k is the term from the turbulent kinetic energy, k due to the average velocity gradient, G b is the term from the turbulent kinetic energy caused by the buoyancy effect, C 1ε , C 2ε and C 3ε are empirical constants.σ k and σ ε are the Prandtl numbers corresponding to the turbulent kinetic energy and dissipation rate, respectively.S k and S ε are user-defined source terms.
The relationship between the turbulent kinetic viscosity and turbulent kinetic energy and the dissipation rate can be expressed as: In this equation, C µ is the empirical constant.The standard k-ε model is a semiempirical formula derived from physical experiments combined with theory.
In order to better capture the water-air interface and thus simulate the effect of the bubble-assist system, this paper adopts the VOF (Volume of Fluid) method and the HRIC (High-Resolution Interface Capturing) scheme. The VOF method is used to capture the immiscible phases and assumes that the mesh resolution is sufficient to resolve the position and shape of the interface between the different phases. Therefore, attention should be paid to the mesh size during the numerical simulation. Figure 1 shows an unsuitable mesh and a suitable mesh. The VOF model describes the phase distribution and the position of the interface through the field of the phase volume fraction α_i, with α_i = V_i/V for phase i, where V_i is the volume of phase i in the grid cell and V is the volume of the grid cell; the sum of the volume fractions of all phases within each grid cell is 1. When the grid cell contains only a single fluid, the material properties of that fluid are used in the calculation. If there are multiple fluid phases in the cell, it is regarded as a mixture, and its material properties use the weighted average of each phase.
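As a small illustration of the mixture rule described above, a cell's material properties can be computed as volume-fraction-weighted averages of the phase properties. This is a minimal sketch with illustrative values, not code from the study (which used STAR-CCM+).

```python
# Minimal sketch of the VOF mixture rule: in cells containing both phases,
# material properties are volume-fraction-weighted averages.
def mixture_property(alphas, values):
    """alphas: phase volume fractions summing to 1; values: per-phase property."""
    assert abs(sum(alphas) - 1.0) < 1e-9
    return sum(a * v for a, v in zip(alphas, values))

# e.g., density of a cell holding 30% air (1.2 kg/m^3) and 70% water (1025 kg/m^3)
rho_cell = mixture_property([0.3, 0.7], [1.2, 1025.0])  # ~717.9 kg/m^3
```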
The distribution of fluid phase i is determined by the mass conservation equation: where a is the surface area vector, v is the velocity of the mixed fluid, v d,i is the diffusion velocity, S αi is the source term of the phase i, and Dρ i /Dt is the Lagrangian derivative of the phase density ρ i .When only two phases, water and air, are present in the simulation, the mass conservation equation is solved for the first term only, and the volume fraction of the second phase is adjusted in each grid cell so that the sum of the volume fractions equals 1.
The momentum equation of the fluid can be expressed as: where p is the pressure, I is the unit tensor, T is the stress tensor, f b is the vector of the body force, S i α is the momentum source term of the phase, and g is the gravity acceleration.The drag force F d provided by the mixed fluid can be calculated as: where C d is the drag coefficient, ρ is the density of the continuous phase (mixing density for multiphase flow), v s = v − v p , v is the instantaneous velocity of the continuous phase, v p is the particle slip velocity, and A p is the projected area of the particle.The drag coefficient C d in this equation is determined by the Schiller-Naumann correlation, which applies to fluids with bubbles, which is set as: Re p (1 + 0.15Re p 0.687 ), Re p ≤ 1000 0.44, Re p > 1000 (14) where Re p is the particle Reynolds number, which is defined as Re p ≡ , where D p is the particle equivalent diameter and µ is the kinematic viscosity.
The Ice Model
Ice is modeled by using DEM, and the governing equation is Newton's law which has been described in detail in Section 2.2.The contact force in the governing equation needs to be calculated by the contact model.The contact model of DEM will be introduced below.
We employ the DEM method in this study to model the ice floes.Two major assumptions are taken to simplify the computation.Firstly, we assume that the ice floes will be pushed away but not broken during the ship-ice interaction process.Secondly, the contacts of ship-ice and ice-ice are assumed as elastic.Based on these assumptions, the Hertz-Mindlin model can be implemented.In this model, the spring simulates the elastic part of the collision process, and the damper reflects the energy dissipation of the collision process.The contact force between two DEM particles is described by the following equations: where F con is the contact force, F n is the normal force, F t is the tangential force.The normal force is expressed as: where K n is the normal spring stiffness, d n is the overlap of the local normal directions of the contact between the two particles, N n is the Normal damping, v n is the normal velocity of the particle.
The normal spring stiffness is: The normal damping is: where E eq is the equivalent Young's modulus, R eq is the equivalent radius, M eq is the equivalent particle mass and The tangential direction is defined by: where K t is the tangential spring stiffness, N t is the tangential damping, d t is the overlap of the local tangential directions of the contact between the two particles, C fs is the coefficient of static friction.The tangential spring stiffness is: The tangential damping is: where The equivalent radius is: The equivalent particle mass is: The equivalent Young's modulus is: The equivalent shear modulus is: where M A and M B are the masses of spheres A and B; R A and R B are the radii of the sphere; E A and E B are Young's modulus of the sphere; ν A and ν B represent Poisson's ratio of A and B, respectively.For collisions between particles and walls, the above formula remains the same but assumes that the wall radius and mass are summed R wall = ∞ and M wall = ∞, so the equivalent radius decreases to R eq = R partical , and the equivalent mass decreases to M wall = M partical .
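As a rough illustration of the normal contact force in the Hertz-Mindlin model described above, the sketch below uses a common textbook form: a Hertzian spring with stiffness K_n = (4/3) E_eq sqrt(R_eq d_n) plus a restitution-based damper, built from the equivalent radius, mass, and Young's modulus defined in this subsection. These stiffness and damping expressions, and the example values, are assumptions for illustration and are not necessarily the authors' exact coefficients.

```python
# Minimal sketch of a Hertzian normal contact force between two elastic spheres.
import math

def equivalent(xa, xb):
    return xa * xb / (xa + xb)

def hertz_normal_force(d_n, v_n, R_a, R_b, M_a, M_b, E_a, E_b, nu_a, nu_b, rest=0.3):
    """Normal contact force magnitude for overlap d_n (m) and normal speed v_n (m/s)."""
    R_eq = equivalent(R_a, R_b)
    M_eq = equivalent(M_a, M_b)
    # Equivalent Young's modulus from the two materials' moduli and Poisson's ratios
    E_eq = 1.0 / ((1 - nu_a ** 2) / E_a + (1 - nu_b ** 2) / E_b)
    K_n = (4.0 / 3.0) * E_eq * math.sqrt(R_eq * d_n)        # Hertzian spring stiffness
    zeta = -math.log(rest) / math.sqrt(math.pi ** 2 + math.log(rest) ** 2)
    N_n = 2.0 * zeta * math.sqrt(K_n * M_eq)                # damping coefficient
    return K_n * d_n + N_n * v_n

# Illustrative values only (two 2 m-radius elastic spheres, E = 1 GPa, nu = 0.3)
print(hertz_normal_force(1e-3, 0.1, 2.0, 2.0, 3.0e4, 3.0e4, 1e9, 1e9, 0.3, 0.3))
```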
The Computational Domain Settings
The main ship particulars of the case study vessel are listed in Table 1. Figure 2 illustrates the three-dimensional hull model as well as the location of the nozzles of the air-bubble system.It is noticeable that the nozzles are placed on the bow and along the side instead of in the bottom and the keel areas.This is because conventional air-bubble systems aim at reducing the water resistance of the ship, which requires the air-bubble system to cover the wet surface of the hull as much as possible.For that purpose, the bottom and the keel areas need to be covered by air-bubbles.The air-bubble system in this study, in contrast, is supposed to reduce ice resistance instead.It would be sufficient to use a smaller volume of air from the bow/sides to push the crushed ice away from the hull.The numerical modelling and simulations are carried out in the commercial CFD software STAR-CCM+.In the calculation domain, the stern bottom is considered the coordinate origin.The positive X-direction is defined as the direction from the stern to the bow; the positive Y-direction is to the port side; the positive Z-direction is upwards.In order to minimize the influence of the boundary on the flow field, the fluid domain should be set as large as possible.In this study, following the experience of a previous study [27], the distances between the boundary from the stern and the bow are set to be twice the ship's length.The water depth is also set to be twice the ship's length.The width of the ice field is set to be three times the ship's breadth.The ice field is modelled with ice floes made of DEM elements.The size of the ice floe is assumed as 4 m × 4 m × 1 m.The ice concentration is controlled by setting the distance of the injecting point as the injecting speed of the DEM element.The computational domain with a concentration of 60% is illustrated in Figure 3.
Table 2 lists the boundary conditions of the computational domain. The boundary to the right is set as the velocity inlet, while the boundary to the left is the outlet. The other surrounding boundary surfaces are set as slip wall conditions. The hull surface and the sides of the ice field are set as no-slip wall conditions. The air inlet is set as a velocity inlet boundary. For the computational domain and ship model, the fluid domain is meshed with the trimmed meshing model, and boundary layer grids are generated around the ship's surface. The y+ value is in the range of 30-60. The meshes on the ship's surface are refined near the waterline, the stem, the stern, and around the nozzles, as shown in Figure 4. The total mesh number is about 3.9 million.
The numerical simulation is carried out by a discrete solution of the N-S equations based on the finite volume method, and the multiphase flow model adopts the VOF method to realize interface tracking [28]. In this paper, there are two kinds of fluids in the computational domain: α_0 represents the volume fraction of the air phase and α_1 represents the volume fraction of the water phase. In the computational domain, the sum of the volume fractions of the two phases is 1 (α_0 + α_1 = 1). The turbulence model adopts the standard k-ε model [29].
The nozzles on the hull surface are set as velocity inlets for the air. In this study, the gas velocity and pressure of the air-bubble system are small enough that compressibility is ignored. The segregated flow model is employed to describe the liquid phase and gas phase; it solves the momentum equation corresponding to each dimension and associates the momentum equation with the continuity equation through a prediction-correction method. Second-order discretization of the convective flux is used, which is deemed particularly suitable for constant-density fluids. Each nozzle contains at least 25 complete meshes to avoid excessive numerical loss when the computational domains are generated. Figure 5 illustrates the numerical losses of the computational domains under different mesh numbers. The color in the figure represents the volume fraction of air: red represents a gas volume ratio of 100%, yellow represents the gas-liquid interface (gas volume ratio of 50%), and green represents the gas attached to the surface of the hull.
The ice floes are modelled using DEM elements, following the theoretical models described in Section 2.4. In this work, the ice density is set as 900 kg/m³, Young's modulus is assumed as 1 GPa, and Poisson's ratio is assumed as 0.3.

Comparison of Simulations
Simulations with the air-bubble system activated are compared with those without the air-bubble system under identical operational conditions. For the air-bubble system, the air inlet velocity is 2.5 m/s, and the air jet direction is perpendicular to the hull surface. The ship speed is 6 knots. Figure 6 illustrates the wave patterns and streamlines around the hull for the cases when the air-bubble system is deactivated and when it is activated, respectively. When the air-bubble system is turned on, the wave pattern around the hull is found to differ obviously from that when the air-bubble system is turned off. After the gas is pumped out from the nozzles, the gas-water mixture rises along the side of the hull and the gas then escapes from the free surface, resulting in a more distorted free surface around the hull. It is also observed that, when the gas-water mixture exists, the streamlines differ from those when there is no gas in the fluid flow. When ice exists in the water, additional resistance is induced by the interaction between the hull and the ice floes. In this study, the ice fields are assumed to be composed of ice floes of identical size and shape of 4 m × 4 m × 1 m. The ice concentration is assumed to be 60%. Figure 7 illustrates a snapshot at 120 s of the simulation in the ice field with a ship speed of 6 knots; the air-bubble system has not been activated. When a ship enters the floe ice field, the speeds of the ice floes around the bow drop rapidly due to wave-making and the collision with the bow. Consequently, the ice floes accumulate around the bow and some of them then slide along the bow area to the ship's sides and the bottom, as shown in Figure 7. It is observed that, during the hull-water-ice interaction, the ice floes are overturned by the wave system rising from the bow/shoulder area. Some of the ice floes collide with the bow and also with other ice floes. Some ice floes then move along the ship's side or the bottom, resulting in friction forces on the hull. An ice-free channel slightly narrower than the width of the ship is formed behind the ship.
In contrast, when the air-bubble system is turned on, the hull-water-ice interaction becomes significantly different, as illustrated in Figure 8. It is observed that when the air-bubble system is on, despite the ice accumulation remaining unchanged around the bow, far fewer ice floes come into contact with the hull or slide past the shoulder, due to the gas-water mixture. Most of the ice floes are overturned before passing the ship's shoulder and drift away from the hull, which greatly reduces the occurrence of hull-ice contact at the sides and the bottom.
Ice Resistance Calculation
In this subsection, we quantify the ice-induced resistance for the cases with and without the air-bubble system. The top subplot of Figure 9 shows the time history of the total ice resistance when the air-bubble system is off, for which t = 11.9 s is the timestep when the ship bow reaches the ice field. It is observed that the ice resistance gradually increases and becomes stable around t = 20 s, which corresponds to the timestep when the entire hull has entered the ice field. The middle and bottom subplots of Figure 9 illustrate the time series of the ice resistance components from the bow and the ship's sides, respectively. The ice resistance values of the stable stage, i.e., 20-120 s, are listed in Table 3. The bow area accounts for a major part of the total ice resistance.
As mentioned previously, when the air-bubble system is turned on with an air injection rate of 2.5 m/s, the hull-ice contact on the ship side is greatly reduced. Figure 10 illustrates the time series of the ice resistances for this case. In comparison with the ice resistances in Figure 9, the total resistance as well as the components from the bow area and the sides are found to be smaller. The resistance values are listed in Table 3, together with the resistance when the air-bubble system is off. The resistance reductions are also included in Table 3. It is seen that when the air-bubble system is turned on, the total ice resistance is reduced by 15.3%. When it comes to the ice resistance components, the bow resistance remains almost unchanged, with a reduction rate of 10.3%. In contrast, the ice resistance from the ship's sides is greatly reduced, with a drag reduction rate of 70.8%.
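The reduction rates quoted above are simply the relative decrease of the time-averaged ice resistance when the system is switched on. A minimal sketch is shown below; the input numbers are placeholders chosen to reproduce the 15.3% figure, not the actual values in Table 3.

```python
# Minimal sketch of the drag (ice-resistance) reduction rate computed from
# time-averaged resistances with the air-bubble system off and on.
def reduction_rate(r_off, r_on):
    """Percentage reduction of the mean ice resistance when the system is on."""
    return 100.0 * (r_off - r_on) / r_off

print(reduction_rate(r_off=500.0, r_on=423.5))  # -> 15.3 (placeholder values)
```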
The effect of the ship speed on the drag reduction rate was also investigated. In addition to the abovementioned speed of 6 knots, ship-ice interactions at four other speeds were simulated, and the ice resistances with and without the air-bubble system were compared. Table 4 lists the resistances as well as the drag reduction rates, which are also plotted in Figure 11. It is observed that the drag reduction rate decreases as the speed increases. This can be explained by the fact that, with the increase of the ship's speed, the location where the bubbles reach the free surface moves backwards due to the drag effect of the fluid. This implies that the area covered by the gas-water mixture moves backwards at a higher speed, resulting in a larger hull surface at the front being in contact with ice. The drag reduction rate is thus reduced. This is, however, a tentative explanation of this interesting phenomenon. The effect of ship speed on the drag reduction rate of an air-bubble system requires systematic investigation, in combination with other factors such as the injected air volume, which is included in the authors' future work.
In addition to the drag force in the longitudinal direction, the hull-ice interaction forces in the transverse direction are also analyzed with respect to the air-bubble system. Figure 12 shows the time series of the ice-induced drift force with and without the air-bubble system. It is seen from the figure that the ice-induced drift forces increase gradually in the first stage of the ice-going voyage for both cases. This is similar to the drag force and can be explained by the fact that the air-bubble system has not yet been utilized when the ship enters the ice field. After the entrance stage (up to about t = 25 s), the ice-induced drift force is found to be smaller in both magnitude and variation. The mean value and the standard deviation of the ice-induced drift force without the air-bubble system are 14.0 kN and 94.3 kN, respectively. For comparison, when the air-bubble system has been activated, the mean and the standard deviation of the ice-induced drift force become 8.4 kN and 64.3 kN, respectively, which indicates that the ice-induced drift force has also been reduced significantly. It is also noticeable that the ice-induced drift force has a large standard deviation. This is because the ice forces on the ship's sides are asymmetric.
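The mean and standard deviation quoted above are computed over the stable part of the force time series, after discarding the entrance stage. A minimal post-processing sketch is given below; the synthetic signal stands in for the simulated drift force and is illustrative only.

```python
# Minimal sketch: mean and standard deviation of a force time series after
# discarding the entrance stage (t < 25 s).
import numpy as np

def stable_stats(t, force, t_start=25.0):
    t, force = np.asarray(t), np.asarray(force)
    mask = t >= t_start
    return force[mask].mean(), force[mask].std(ddof=1)

# e.g., with synthetic data standing in for the simulated drift-force signal
t = np.linspace(0.0, 120.0, 1201)
force = np.random.default_rng(0).normal(10.0, 80.0, t.size)  # kN, illustrative only
print(stable_stats(t, force))
```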
Effects of Ventilation Rate
In this subsection, a sensitivity study on the ventilation rate at the nozzles with regard to the drag reduction rate was carried out. Different air velocities from the nozzles were investigated under the condition of an ice concentration of 60% and a ship speed of 6 knots. The nozzles are divided into two groups: the bow nozzles are the first three pairs of nozzles at the bow area, and the side nozzles are the remaining five pairs at the sides near the bilge; see Figure 2 for the exact locations of the nozzles. The ventilation rate in m/s represents the gas flow rate at the nozzle. Four air velocities, i.e., 1, 2.5, 5 and 10 m/s, were investigated. A reference case with zero air velocity represents the condition when the air-bubble system is turned off. The drag reduction for a total of eight cases with different air velocities was calculated. The ice resistance data and drag reduction rates for these cases are listed in Table 5. In this table, Case D represents the air-bubble system with a ventilation rate of 2.5 m/s, which has been discussed in the previous subsections. It is observed from Table 5 that the ice resistance is reduced as the ventilation rate increases. Let us look closer at the specific cases. Case A is when the bow nozzles are turned off and the side-nozzle ventilation rate is set as 1 m/s. Figure 13 shows the simulation and the ice resistance time series of this case. When the side nozzles are turned on, air bubbles rise along the hull wall to the free surface. The ice floes at the ship's side are thus overturned and move away from the hull. As a result, the occurrence of hull-ice interaction on both sides of the ship is reduced. A further step is to turn on the bow nozzles. This is Case B, for which the simulation and ice resistance time series are illustrated in Figure 14.
For this case, the air velocities are set to 1 m/s for all the nozzles. It is seen from Figure 14 that the ice-free zone moves forward to the vicinity of the ship's shoulder and, at the same time, the width of the ice-free zone increases. However, the ice accumulation at the bow area remains almost unchanged. Comparing Case B with Case A, the additional drag reduction effect is not significant.
To better interpret the drag reduction rates in Table 5, the ice resistance values under the various air velocities are paired for comparison, as shown in Figure 15. The cases in Table 5 are put into two categories: one is characterized by the side-nozzle ventilation rate being kept at 1 m/s; the other is characterized by all nozzles having the same ventilation rate. Comparing the two groups, it is found that the ice resistances of the two groups are quite close under the same bow-nozzle ventilation rate. This implies that the drag reduction effect is more sensitive to the bow ventilation volume. If the bow ventilation volume is sufficiently large, the side ventilation volume makes only a marginal contribution to drag reduction. This is in line with the fact that, for this vessel, the ice resistance on the side of the ship accounts for less than 30% of the total ice resistance.
article is one of the first investigations on numerical simulation of air-bubble systems regarding drag reduction in floe ice fields.The ice conditions were simplified to ice floes of identical size and shape, which are the delimitations of the current work.An icebreaker was employed as the case study vessel for the demonstration of the proposed procedure.Other ship types with different hull forms need to be modelled to verify the robustness of the proposed procedure.Ice model tests are also required for validation of the numerical results, which are included in the authors' ongoing work.Despite the limitations, the proposed procedure is expected to facilitate design of new generations of ice-going ships.
Conclusions
In this paper, the authors made use of a coupling CFD-DEM approach in combination with the VOF method to simulate resistance in floe ice fields, aiming to establish a numerical analysis procedure for ice-going ships installed with air-bubble systems.From the simulations and analyses, a more distorted wave making around the hull is observed after turning on the air-bubble system.Ice floes in contact with the hull side wall are pushed away from the hull by the gas-water mixture, resulting in an ice-free zone close to the side hull.The ventilation rate of the air-bubble system is also studied.It is found that the drag reduction rate increases with the increase of ventilation but decreases somewhat at higher speeds.Side ventilation only contributes to reducing the side friction resistance, and the side friction resistance can be eliminated under low ventilation.In general, the bow ventilation plays a deciding role in the overall drag reduction.The work presented in this article is one of the first investigations on numerical simulation of air-bubble systems regarding drag reduction in floe ice fields.The ice conditions were simplified to ice floes of identical size and shape, which are the delimitations of the current work.An icebreaker was employed as the case study vessel for the demonstration of the proposed procedure.Other ship types with different hull forms need to be modelled to verify the robustness of the proposed procedure.Ice model tests are also required for validation of the numerical results, which are included in the authors' ongoing work.Despite the limitations, the proposed procedure is expected to facilitate design of new generations of ice-going ships.
Conclusions
In this paper, the authors made use of a coupling CFD-DEM approach in combination with the VOF method to simulate resistance in floe ice fields, aiming to establish a numerical analysis procedure for ice-going ships installed with air-bubble systems.From the simulations and analyses, a more distorted wave making around the hull is observed after turning on the air-bubble system.Ice floes in contact with the hull side wall are pushed away from the hull by the gas-water mixture, resulting in an ice-free zone close to the side hull.The ventilation rate of the air-bubble system is also studied.It is found that the drag reduction rate increases with the increase of ventilation but decreases somewhat at higher speeds.Side ventilation only contributes to reducing the side friction resistance, and the side friction resistance can be eliminated under low ventilation.In general, the bow ventilation plays a deciding role in the overall drag reduction.
Figure 1. (a) Unsuitable mesh size; (b) suitable mesh size. The VOF model describes the phase distribution and the position of the interface through the field of the phase volume fraction, αi = Vi/V, where Vi is the volume of phase i in the grid cell and V is the volume of the grid cell; the sum of the volume fractions of all phases within each grid cell is 1. When a grid cell contains only a single fluid, the material properties of that fluid are used in the calculation.
... in contrast, is supposed to reduce ice resistance instead. It would be sufficient to use a smaller volume of air from the bow/sides to push the crushed ice away from the hull.
Figure 2. The three-dimensional hull model and the nozzle locations of the air-bubble system.
Figure 3. Schematic diagram of the computational domain: (a) the computational domain at the initial stage; (b) the layout of the ice field with an ice concentration of 60%.
Figure 4. The meshing of the numerical model: (a) ship hull mesh; (b) front view of the flow domain; (c) side view of the flow domain.
Figure 6. Wave patterns and streamlines around the hull: (a) when the air-bubble system is turned off; (b) when the air-bubble system is activated.
Figure 7. The snapshot at 120 s of the simulation when the air-bubble system has not been activated: (a) isometric view; (b) bottom view.
Figure 8. The snapshot at 120 s of the simulation when the air-bubble system has been activated: (a) isometric view; (b) bottom view.
Figure 9. Time series of ice resistance when the air-bubble system is off: (top) total ice resistance; (middle) ice resistance from the bow area; (bottom) ice resistance from the sides.
Figure 10. Time series of ice resistance when the air-bubble system is on: (top) total ice resistance; (middle) ice resistance from the bow area; (bottom) ice resistance from the sides.
Figure 11. The ice resistance and drag reduction rate at different speeds.
Figure 12. Comparison of time history curves of ice contact force along the ship width.
Figure 13. Simulation and ice resistance of Case A: the side-nozzle ventilation rate is 1 m/s; the bow nozzles are turned off.
Figure 14. Simulation and ice resistance of Case B: the side-nozzle ventilation rate is kept at 1 m/s; the bow nozzles are turned on at 1 m/s.
Figure 15. Ice resistance comparison for the various air velocities.
Table 1. The main particulars of the case study vessel.
Table 2. The boundary conditions of the computational domain.
Table 3. Ice resistance of the stable stage.
Table 4. Ice resistance (kN) at different speeds, with the air-bubble system off and on.
Table 5. Ice resistance under different air velocities.
Author Contributions: Conceptualization, B.-Y.N. and Z.L.; methodology, B.-Y.N. and H.W.; software, Z.L.; validation, H.W.; writing-original draft preparation, H.W.; writing-review and editing, B.-Y.N., Y.X. and Z.L.; project administration, Y.X. and B.F. All authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by the National Natural Science Foundation of China, grant numbers 52192690, 52192693, 51979051, 51979056, and U20A20327, and by the National Key Research and Development Program of China, grant number 2021YFC2803400. The APC was funded by the National Natural Science Foundation of China, grant number 52192690.
|
v3-fos-license
|
2023-08-26T15:24:31.010Z
|
2023-08-23T00:00:00.000
|
261170117
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2077-0472/13/9/1660/pdf?version=1692770553",
"pdf_hash": "f9ddf3e76ea48b591d8fe03dab7ef5dd89019e90",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42430",
"s2fieldsofstudy": [
"Engineering"
],
"sha1": "d49ccd5cc2cb3648f8112340d82543c5f45ebffb",
"year": 2023
}
|
pes2o/s2orc
|
Development of a Control System for Double-Pendulum Active Spray Boom Suspension Based on PSO and Fuzzy PID
During the operation of boom sprayers in the field, it is crucial to ensure that the entire boom is maintained at an optimal height relative to the ground or crop canopy. Active suspension is usually used to adjust the height. A control system for double-pendulum active suspension was developed in this paper. The control system consisted of a main control node, two distance measurement nodes, a vehicle inclination detection node, and an execution node. Communication between nodes was carried out using a CAN bus. The hardware was selected, and the interface circuits of the sensors and the actuator were designed. The transfer functions of the active suspension and electric linear actuator were established. In order to enhance the efficiency of the control system, the particle swarm optimization (PSO) algorithm was employed to optimize the initial parameters of the fuzzy PID controller. The simulation results demonstrated that the PSO-based fuzzy PID controller exhibited improvements in terms of reduced overshoot and decreased settling time when compared to conventional PID and fuzzy PID controllers. The experimental results showed that the active suspension system equipped with the control system could effectively isolate high-frequency disturbances and follow low-frequency ground undulations, meeting the operational requirements.
Introduction
Boom sprayers are commonly used in crop protection to apply chemical materials that prevent diseases, weeds, and other problems due to their wide coverage and high efficiency [1][2][3].Tractor vibrations and uneven field terrain may cause undesired boom motions, such as rolling, yawing, and jolting [4,5].These boom motions have negative effects on the distribution of spray deposits [6,7].Studies have shown that boom rolling in the vertical plane is accountable for variations in the distribution of spray deposits ranging from 0 to 1000%, with the optimum amount being 100% [8,9].Therefore, booms are typically mounted with a vertical suspension system to reduce rolling for the majority of sprayers [10].There have been many types of vertical suspension systems, including pendulum suspension, trapezium suspension, cable suspension, and Douven suspension [11][12][13][14][15]. Double-pendulum suspension systems have been studied over the past few years, both theoretically and experimentally [16][17][18][19][20].In these studies, various control methods, including proportional, proportional-integral, and H ∞ control theory, have been tested to regulate the motion of the boom.Modern large and medium-sized sprayers have achieved electromechanical-hydraulic integration control.They utilize computers to perform the real-time monitoring and adjustment of operational speed, spray volume, spray pressure, and other parameters.The spray boom's position and balance are achieved through a hydraulic system.The UC5 boom height controller produced by NORAC, a Canadian company, consists of ultrasonic sensors, roll sensors, control modules, proportional valves, display terminals, and bus terminators.Using sensor data, it can make responsive height adjustments, allowing the boom to automatically follow the contours of the land.The CONTOUR system developed by the UK company Cheffer can quickly level 18 m and 24 m long booms, enhancing the operational efficiency and spraying accuracy of the sprayer.
Manufacturers and researchers have developed different types of control systems that can automatically adjust the height between the boom and the ground or the crop canopy.Xue et al. developed a control method based on an adaptive fuzzy sliding model control algorithm.The method exhibited superior responsiveness, stability, and accuracy when compared to conventional PID control [21].Cui et al. designed a high-performance boom control system for conventional pendulum active suspension based on a digital signal processor.The speed feed-forward compensation PID control method was used to achieve the high-precision control of the spray boom [22].Aiming at the problem of low accuracy and poor stability caused by parameter uncertainties in the suspension and the electrohydraulic position servo system, Cui et al. studied an adaptive robust controller based on a nonlinear model of the suspension system.The proposed controller demonstrated good asymptotic tracking performance and steady-state tracking accuracy when compared to the feedback linearization controller, robust feedback controller, and PID controller [23].These control systems adopt hydraulic cylinders as actuators and require specific machine configurations to operate effectively.In recent years, attempts have been made to control the motion of spray booms using microcontroller units and programmable logic controllers.However, the control effects were unsatisfactory due to their limitations [24,25].
In this study, an electric linear actuator was employed, thereby simplifying the configuration of the control system.The control system for the inclination angle of the boom was designed based on a CAN bus.The control algorithm was developed based on PSO and fuzzy PID.Finally, the effectiveness of the control system was verified by experiments using double-pendulum active suspension.
Modeling of the Double-Pendulum Suspension System
Description of the Suspension System
A double-pendulum suspension system is comprised of two pendulum rods, height sensors, springs, dampers, and an actuator, as illustrated in Figure 1.The first pendulum rod is articulated to the frame and the second pendulum rod at O 1 and O 2 , respectively.The boom is rigidly fixed to the second pendulum rod at G, the gravity center of the boom.A spring and a damper are connected between the first pendulum rod and the frame attached to the vehicle.An actuator is installed between the first and second pendulum rods.
In the event that the power supply is deactivated and the actuator length remains constant, the suspension becomes passive, causing the boom to pivot only around O1. A special case arises when the angle between the first and second pendulum rods is zero, indicating that the boom's balance position is horizontal, with its gravity center G below the hinge point O1. However, if the actuator length is either increased or decreased, the boom's gravity center will shift to the left or right, respectively, resulting in a non-horizontal equilibrium position. Consequently, by continuously regulating the actuator length, the suspension becomes active, enabling the boom to adapt to variations in ground slope.
Transfer Function of the Active Suspension System
The suspension system's parameters are presented in Figure 1. For the control of the active suspension, the input is the actuator displacement d, and the output is the inclination angle of the boom γ. For the generalized coordinate γ, applying the second Lagrange equation gives the equation of the boom motion as

(d/dt)(∂Ek/∂γ̇) − ∂Ek/∂γ + ∂ψ/∂γ̇ = Qγ (1)

where Ek represents the kinetic energy of the boom (N·m); γ represents the inclination angle of the boom (rad); Qγ represents the generalized force (N); and ψ represents the dissipation function (N·m·rad/s).
The kinetic energy of the boom can be expressed in terms of the boom geometry and the angular rates (Equation (2)), where I represents the moment of inertia of the boom about the axis through G (kg·m²); m represents the mass of the boom (kg); l1 represents the length of the first pendulum (m); l2 represents the length of the second pendulum (m); and θ represents the angle between the first and second pendulum rods (rad). Based on Equation (2), the two derivative terms of Ek appearing in Equation (1) can be inferred (Equation (3)).
The generalized force can be expressed in terms of the spring force and the actuator displacement (Equation (4)), where k represents the stiffness coefficient of the spring (N/m); ls represents the distance from the spring's installation position to O1 (m); d represents the actuator displacement (m); and l3 represents the distance between the installation position of the actuator and the point O2 (m).
The dissipation function can be expressed in terms of the damper (Equation (5)), where c represents the damping coefficient of the damper (N·s/m), and ld represents the distance from the damper's installation position to O1 (m); from it, the derivative ∂ψ/∂γ̇ required in Equation (1) follows (Equation (7)). Assuming that γ and d are small, ignoring the second-order terms, and substituting Equations (3)-(5) and (7) into Equation (1), the equation of the boom motion can be stated in a linearized form (Equation (8)). By performing the Laplace transform on Equation (8) and rearranging the terms, the transfer function linking the inclination angle of the boom, γ, to the actuator displacement, d, can be derived (Equation (9)).
Transfer Function of the Electric Linear Actuator
In this study, a DC electric linear actuator was chosen as the actuator.Compared with pneumatic cylinders and hydraulic cylinders, it has the advantages of lower price, smaller size, lighter weight, easier installation, and no need for air source or oil circuit.It also has good mechanical characteristics, sensitive movement, and is convenient for electrical control.The controller generates PWM (pulse width modulation) pulses with a certain duty cycle, which are sent to the motor driver.The driver regulates the input voltage and generates an output voltage proportional to the duty cycle, which is subsequently transmitted to the electric linear actuator.
The transfer function of the electric linear actuator can be expressed as

d(s)/R(s) = K / [s(Ts + 1)] (10)

where d(s) represents the Laplace transform of the displacement of the actuator piston rod, R(s) represents the Laplace transform of the duty cycle, K represents the proportionality coefficient, and T represents the time constant [26][27][28].
To obtain the parameters of the transfer function, we adopted the time domain identification method to model the electric linear actuator.A step signal was applied, and the response curve of the piston rod displacement over time was measured.
The equipment for determining the parameters is shown in Figure 2. In order to reduce the impact of random disturbances on the measurement error, a single-chip microcomputer was used to emit a pulse with a frequency of 20 kHz and a duty cycle of 100% as the input signal.The driver modulated the input 24 V voltage to form an output voltage, which was supplied to the electric linear actuator.The displacement of the actuator piston rod was measured using a linear displacement sensor, and the voltage signal output by the sensor was input into a data acquisition device and saved on a computer.
In order to obtain the displacement of the actuator piston rod, the linear displacement sensor was calibrated before the measurement.By fitting the experimental data, the fitted equation between the actuator piston rod displacement d and the sensor output voltage u was obtained as d = 21.108u+ 0.3854, with a coefficient of determination R 2 = 0.9815.
According to Equation (10), the step response of the actuator piston rod velocity can be represented by the following equation:

v(t) = K(1 − e^(−t/T))

where v(t) represents the actuator piston rod velocity (mm/s), and t represents time (s).
Based on the measured displacement of the actuator piston rod, the velocity response of the actuator piston rod was obtained by numerical differentiation.Then, K and T were determined by numerical fitting, and the results are shown in Figure 3.
Using the MATLAB fitting toolbox, the fitted equation was obtained as v(t) = 11.49(1 − e^(−t/0.085)), with a coefficient of determination R² = 0.89. Therefore, it could be concluded that K = 11.49 and T = 0.085 in the transfer function.
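As an illustration of this identification step (not the authors' MATLAB workflow), the short script below differentiates a recorded displacement series and fits the first-order velocity step response with SciPy; the sampled data here are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder measurement: piston-rod displacement sampled at 100 Hz.
# In practice this comes from the calibrated linear displacement sensor
# (d = 21.108*u + 0.3854); here it is generated synthetically.
time_s = np.linspace(0.0, 2.0, 201)
disp_mm = 11.49 * (time_s - 0.085 * (1.0 - np.exp(-time_s / 0.085)))

# Velocity by numerical differentiation of the displacement record.
vel_mm_s = np.gradient(disp_mm, time_s)

def step_velocity(t, K, T):
    """First-order velocity step response v(t) = K * (1 - exp(-t/T))."""
    return K * (1.0 - np.exp(-t / T))

# Fit K (steady-state speed, mm/s) and T (time constant, s).
(K, T), _ = curve_fit(step_velocity, time_s, vel_mm_s, p0=(10.0, 0.1))
print(f"K = {K:.2f} mm/s, T = {T:.3f} s")   # ~11.49 and ~0.085 for this synthetic data
```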
Design of the Control System
Design of the Control System Hardware
Overall Structure of the Control System
The control system consisted of a computer, four single-chip microcomputers, and communication lines, as shown in Figure 4. Node 1 was used to measure the inclination angle of the vehicle body, nodes 2 and 3 were used to measure the distance between the spray boom and the crop canopy or ground, and node 4 was used to drive the actuator. The CPU of each node used a highly integrated single-chip microcontroller with strong anti-interference capability. Node 5, the computer node, was mainly responsible for analyzing the data transmitted by each node and issuing commands to the actuator according to the analysis results.
Hardware Selection
(1) Main control chips of the functional nodes
The main control chips need to be installed on the spray boom, which requires a small size and strong anti-interference ability in complex field conditions.Considering the above factors, the STC12C5A60S2 microcomputer produced by Shenzhen Hongjing Company was chosen.It has a faster execution speed than microcomputers with the same frequency.It also integrates an eight-channel 10-bit voltage input successive comparison high-speed analogto-digital converter.Unlike ordinary 51-series microcomputers, it integrates two UART full-duplex serial ports and has an ESD protection function, with strong anti-interference ability, fully meeting the system's requirements.
(2) Ranging sensor
Ultrasonic sensors have a high cost-performance ratio and are easy to install. They can achieve non-contact measurement and are not affected by external factors such as light. According to the measurement requirements of the control system, the KS109 integrated ultrasonic sensor produced by Shenzhen Daoxiang Electrical Technology Co., Ltd., Shenzhen, China was selected. Its beam angle is about 10°, its detection range is 4~1000 cm, and its accuracy can reach 1 mm. It also integrates a DS18B20 temperature sensor with a measurement accuracy of ±0.5 °C and a measurement range of −55~+125 °C.
(3) Angle sensor
Gyroscopes, also called angular motion detectors, have the advantages of a small size and low price, and they can achieve high precision in measuring tilt angles. Currently, most components on the market integrate MEMS gyroscopes and MEMS accelerometers together to improve the accuracy of the system. According to the system measurement range, the WT901C angle sensor produced by Shenzhen WitMotion Intelligent Technology Co., Ltd., Shenzhen, China was chosen, which uses the MPU6050 chip as the inertial measurement unit. The angle measurement range is ±180°, with a static measurement accuracy of 0.05° and a dynamic measurement accuracy of 0.1°, meeting the requirements of the system design.
Hardware Solution for Serial Communication
Common serial communication methods include RS-232, RS-485, and the CAN bus, which are widely used in different fields.The maximum communication distance of RS-232 is only about 15 m.However, in a wide boom sprayer, the distance between the nodes of the control system is usually greater than 15 m.RS-232 and RS-485 use electrical protocols, and the development of communication systems is more challenging.Therefore, considering factors such as the communication transmission distance, transmission speed, bus utilization, network characteristics, transmission mode, and error tolerance, the CAN bus was adopted, owing to its low communication costs, flexible configuration, and high reliability.
As shown in Figure 4, the system adopted a tree structure, which was easy to expand.Any node on the bus could initiate communication without needing to request PC permission, which ensured high communication efficiency and met real-time requirements.We connected 120 Ω resistors at both ends of the CAN bus to consume the signal transmitted to the endpoint, preventing signal reflection and subsequent overlapping signals that could cause transmission errors.
During the communication process, all node inputs and outputs were CAN bus signals. However, there were only USB interfaces on the computer and only serial ports on the STC12C5A60S2 microcomputer, so direct transmission of data over the CAN bus between these devices was not possible. The system therefore used a CAN conversion module to convert USB/TTL signals to CAN bus signals and vice versa: when the communication adapter received CAN bus signals, it converted them to TTL signals and then to USB interface signals, and the process was reversed when converting TTL/USB signals to CAN bus signals. The circuits for distance measurement and actuator drive are shown in Figure 5. The two circuits both included a microcomputer minimum system, a CAN bus conversion module, and a CAN bus connector. When debugging the distance measurement circuit, we disconnected the connection wire between the microcomputer and the ultrasonic sensor SDA pin. At this time, the value collected by the microcontroller was FFH. We started the serial port debugging assistant on the PC to check the received data. If the received data were correct, we connected the sensor SDA pin to the microcomputer. Otherwise, we checked the connection of the CANL and CANH pins of the CAN bus conversion module. When debugging the actuator circuit, we used a serial port debugging assistant to send data from the PC to the microcomputer system in extended frame format. The microcomputer could display the received control information on an LCD display module and drive the electric linear actuator accordingly.
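For readers prototyping the bus traffic on a PC, the snippet below is a hedged illustration using the python-can package over a Linux SocketCAN interface; the channel name, arbitration ID, and payload layout are assumptions, not the identifiers used in the actual nodes.

```python
import can

# Assumed SocketCAN channel "can0" at 250 kbit/s; adjust to the adapter in use.
bus = can.interface.Bus(channel="can0", bustype="socketcan", bitrate=250000)

def send_actuator_command(duty_cycle_pct: int, extend: bool) -> None:
    """Send a hypothetical extended-frame command to the actuator node.
    Byte 0: direction flag, byte 1: PWM duty cycle (0-100)."""
    msg = can.Message(
        arbitration_id=0x18FF0104,               # placeholder 29-bit identifier
        data=[1 if extend else 0, duty_cycle_pct & 0xFF],
        is_extended_id=True,                     # extended frame format, as in the debugging test
    )
    bus.send(msg)

send_actuator_command(duty_cycle_pct=60, extend=True)

# Read back whatever a measurement node broadcasts (blocking, 1 s timeout).
frame = bus.recv(timeout=1.0)
if frame is not None:
    print(hex(frame.arbitration_id), list(frame.data))
```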
Design of the Fuzzy PID Controller
Structure of the Fuzzy PID Controller
A digital PID controller was used to adjust the input voltage of the electric linear actuator. The general expression is

u(k) = Kp·e(k) + Ki·Σ e(j) + Kd·[e(k) − e(k−1)], with the sum taken over j = 0, …, k,

where u(k) represents the output signal, e(k) represents the input signal (the control deviation at sampling instant k), Kp is the proportional coefficient, Ki is the integral coefficient, and Kd is the derivative coefficient.
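A minimal sketch of this positional digital PID law is given below; the gains and output limits are placeholder values, not the tuned parameters of the study.

```python
class DigitalPID:
    """Positional digital PID: u(k) = Kp*e(k) + Ki*sum(e) + Kd*(e(k) - e(k-1))."""

    def __init__(self, kp, ki, kd, u_min=-100.0, u_max=100.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.u_min, self.u_max = u_min, u_max      # e.g. PWM duty-cycle limits
        self.err_sum = 0.0
        self.err_prev = 0.0

    def update(self, error):
        self.err_sum += error
        u = (self.kp * error
             + self.ki * self.err_sum
             + self.kd * (error - self.err_prev))
        self.err_prev = error
        return max(self.u_min, min(self.u_max, u))  # saturate the output

# Example call: target inclination 0.15 rad, measured 0.02 rad (placeholder gains).
pid = DigitalPID(kp=2.0, ki=0.1, kd=0.5)
duty = pid.update(0.15 - 0.02)
```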
The parameter tuning method of conventional PID control is complex, and when the model is not accurate enough, empirical methods are often used, though it is time-consuming and difficult to find the optimal model.Fuzzy control does not require the establishment of an accurate mathematical model and has strong robustness and anti-interference capabilities.Therefore, fuzzy PID control was adopted for the online correction of the PID controller parameters.The structure of the fuzzy PID control system is shown in Figure 6, where α represents the inclination angle of the ground; γ represents the inclination angle of the boom; e represents the deviation between the inclination angle of the ground and the inclination angle of the boom; ec represents the rate of change in the deviation; e and ec are inputs of the fuzzy controller; and ΔKp, ΔKi, and ΔKd are the PID control parameter increments obtained through fuzzy rule operation.[-6, 6], so the quantization factor k e of e was 17.14 and the quantization factor k ec of ec was 8.57.We used seven linguistic values for the fuzzy variables, namely negative big (NB), negative medium (NM), negative small (NS), zero (ZO), positive small (PS), positive medium (PM), and positive big (PB).The triangle membership functions were used because they are suitable for online adjustment, as shown in Figure 7.
Increasing the proportional coefficient Kp can speed up the system response, but if it is too large, the system will become unstable.Likewise, a too large K d may lead to instability.Therefore, the basic domains of ∆Kp, ∆K i , and ∆K d were all set as [−3, 3], and the quantization levels were {−3, −2, −1, 0, 1, 2, 3}.The corresponding membership functions are shown in Figure 8. Increasing the proportional coefficient Kp can speed up the system response, but if it is too large, the system will become unstable.Likewise, a too large Kd may lead to instability.Therefore, the basic domains of ΔKp, ΔKi, and ΔKd were all set as [−3, 3], and the quantization levels were {−3, −2, −1, 0, 1, 2, 3}.The corresponding membership functions are shown in Figure 8. Increasing the proportional coefficient Kp can speed up the system response, but if it is too large, the system will become unstable.Likewise, a too large Kd may lead to instability.Therefore, the basic domains of ΔKp, ΔKi, and ΔKd were all set as [−3, 3], and the quantization levels were {−3, −2, −1, 0, 1, 2, 3}.The corresponding membership functions are shown in Figure 8.
Fuzzy Rules
Establishing the fuzzy rules required the full consideration of the characteristics of boom control.When e and ec are large, indicating a significant deviation between the current and target angle values, ∆Kp and ∆K i should be increased to reduce the angle difference, and an appropriate value should be selected for ∆K d [29][30][31].Comprehensively taking into account the influence of K p , K i , and K d on the dynamic and static performance of the system, a fuzzy rule table was determined (Table 1), which included a total of 147 fuzzy rules.Based on the membership functions and fuzzy rules, fuzzy inference was performed using the Mamdani method, and defuzzification was carried out using the centroid method.The fuzzy process is shown in Figure 9.
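The fuzzification, Mamdani rule firing, and centroid defuzzification steps can be prototyped in a few lines. The sketch below is a reduced illustration for the ΔKp channel only, with a handful of made-up rules rather than the paper's 147-rule table.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    x = np.asarray(x, dtype=float)
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

LABELS = ["NB", "NM", "NS", "ZO", "PS", "PM", "PB"]
IN_CENTERS = dict(zip(LABELS, [-6, -4, -2, 0, 2, 4, 6]))    # fuzzy domain of e and ec
OUT_CENTERS = dict(zip(LABELS, [-3, -2, -1, 0, 1, 2, 3]))   # basic domain of dKp
OUT_UNIVERSE = np.linspace(-3.0, 3.0, 121)

def mu_in(label, x):
    c = IN_CENTERS[label]
    return trimf(x, c - 2, c, c + 2)

def mu_out(label, x):
    c = OUT_CENTERS[label]
    return trimf(x, c - 1, c, c + 1)

# A handful of illustrative rules (e label, ec label) -> dKp label.
RULES = [("NB", "NB", "PB"), ("NS", "ZO", "PS"), ("ZO", "ZO", "ZO"),
         ("PS", "ZO", "NS"), ("PB", "PB", "NB")]

def delta_kp(e_rad, ec_rad, ke=17.14, kec=8.57):
    """Quantize inputs, fire Mamdani rules (min for AND, max to aggregate),
    then defuzzify with the centroid method."""
    e_q = np.clip(e_rad * ke, -6.0, 6.0)
    ec_q = np.clip(ec_rad * kec, -6.0, 6.0)
    agg = np.zeros_like(OUT_UNIVERSE)
    for e_lab, ec_lab, out_lab in RULES:
        strength = min(float(mu_in(e_lab, e_q)), float(mu_in(ec_lab, ec_q)))
        agg = np.maximum(agg, np.minimum(strength, mu_out(out_lab, OUT_UNIVERSE)))
    return 0.0 if agg.sum() == 0.0 else float(np.sum(OUT_UNIVERSE * agg) / np.sum(agg))

print(delta_kp(e_rad=0.10, ec_rad=-0.05))
```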
PSO Algorithm
The outputs of the fuzzy PID controller were ∆Kp, ∆K i , and ∆K d .The initial values of the PID control parameters needed to be selected by experience, and the size of the initial values had a significant impact on the system performance.Therefore, the PSO algorithm was used to optimize the initial parameters.
The global optimization problem can be expressed in terms of a particle swarm Z formed by n particles (Equation (14)), where Zi represents the position of each particle.
The update equations for the flight speed and position of the particles are

vid = vid + c1·r1·(pid − xid) + c2·r2·(pgd − xid) (15)
xid = xid + vid

where xid represents the d-th value in the position of the i-th particle in the particle swarm; vid represents the d-th parameter value in the speed of the i-th particle; pid represents the d-th value in the best position experienced by the i-th particle; pgd represents the d-th value in the best position experienced by all particles in the swarm; c1 represents the cognitive learning factor, which controls the acceleration constant of particles flying towards their own best position; c2 represents the social learning factor, which controls the acceleration constant of particles flying towards the global best position; and r1 and r2 are independent random numbers between 0 and 1.
In the process of seeking the optimal solution, the choice of the flight speed affects the behavior of the algorithm. An excessively high speed may cause a particle to fly past the optimal position, while a speed that is too low may easily trap the particle in a local optimum. Therefore, the speed was adjusted by introducing an inertia weight, and Equation (15) was modified as follows:

vid = ω·vid + c1·r1·(pid − xid) + c2·r2·(pgd − xid)

where ω represents the inertia weight.
To improve the global search ability in the early stage, ω is usually set to 0.9. To enhance the local search ability in the later stage, ω is usually set to 0.4. During the search process, the value of ω decreases linearly over time. The calculation formula is

ω = ωs − (ωs − ωT)·t/Tmax

where ωs represents the initial inertia weight; ωT represents the final inertia weight; Tmax represents the maximum number of iterations; and t represents the current iteration number.
To improve the performance of the PSO algorithm, time-varying parameters were used to dynamically adjust the values of c 1 and c 2 .
The adjustment method for c1 was

c1 = c1s + (c1T − c1s)·t/Tmax

where c1s represents the initial cognitive learning factor, and c1T represents the final cognitive learning factor.
The adjustment method for c2 was

c2 = c2s + (c2T − c2s)·t/Tmax

where c2s represents the initial social learning factor, and c2T represents the final social learning factor. The PSO algorithm flowchart is shown in Figure 10.
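A compact version of this PSO loop is sketched below. The objective function is a stand-in (in the real tuning task it would evaluate the closed-loop step response for a candidate Kp, Ki, Kd), and the swarm size, bounds, and learning-factor schedule are illustrative values consistent with the linearly varying ω, c1, and c2 described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(params):
    """Stand-in objective: a real implementation would simulate the closed-loop
    step response for (Kp, Ki, Kd) and return e.g. its ITAE value."""
    kp, ki, kd = params
    return (kp - 2.0) ** 2 + (ki - 0.1) ** 2 + (kd - 0.5) ** 2

def pso(n_particles=30, n_iters=100, lo=(0.0, 0.0, 0.0), hi=(10.0, 1.0, 5.0),
        w_s=0.9, w_t=0.4, c1_s=2.5, c1_t=0.5, c2_s=0.5, c2_t=2.5):
    lo, hi = np.asarray(lo), np.asarray(hi)
    x = rng.uniform(lo, hi, size=(n_particles, 3))     # positions (Kp, Ki, Kd)
    v = np.zeros_like(x)                               # velocities
    p_best = x.copy()
    p_cost = np.array([cost(p) for p in x])
    g_best = p_best[np.argmin(p_cost)].copy()

    for t in range(n_iters):
        frac = t / n_iters
        w = w_s - (w_s - w_t) * frac                   # linearly decreasing inertia weight
        c1 = c1_s + (c1_t - c1_s) * frac               # time-varying learning factors
        c2 = c2_s + (c2_t - c2_s) * frac
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = np.clip(x + v, lo, hi)
        costs = np.array([cost(p) for p in x])
        improved = costs < p_cost
        p_best[improved], p_cost[improved] = x[improved], costs[improved]
        g_best = p_best[np.argmin(p_cost)].copy()
    return g_best, p_cost.min()

best_gains, best_cost = pso()
print("initial PID gains:", best_gains, "cost:", best_cost)
```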
Simulation Analysis
To verify the effectiveness of using the PSO algorithm to optimize PID control parameters, a simulation was conducted in Simulink R2016a software.The step response analysis of the inclination angle of the boom was carried out to validate the angle tracking effect of the controller.Using a boom inclination angle of 0.15 rad as the control target, a fuzzy model was established through the built-in fuzzy logic toolbox of Matlab R2016a.
Among the model parameters, k·ls² = 0 and g = 9.8 m/s² were used. Taking into account the speed limit of the electric linear actuator (12 mm/s), the displacement saturation limit, the time delay of 0.1 s between the forward and reverse directions of the motor, and the system delay of 0.1 s, the simulation model shown in Figure 11 was established.
The step response of the above three methods is shown in Figure 12. According to Figure 12, the overshoot of the system was 56.80% and the settling time was 17.90 s when using conventional PID control. The overshoot was 38.33% and the settling time was 12.30 s when using fuzzy PID control. The overshoot was 1.53% and the settling time was 5.4 s when using PSO-based fuzzy PID control. The performance of the boom control system was best with PSO-based fuzzy PID control, which not only reduced the overshoot but also shortened the settling time; the control effect and robustness of the system were better.
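The overshoot and settling-time figures quoted above can be extracted from a simulated step response with a few lines of post-processing; the sketch below assumes a 2% settling band and a synthetic response toward the 0.15 rad target, which are illustrative choices rather than the paper's exact criteria.

```python
import numpy as np

def step_metrics(t, y, target, band=0.02):
    """Return (overshoot in %, settling time in s) for a step response toward `target`."""
    overshoot_pct = max(0.0, (np.max(y) - target) / target * 100.0)
    outside = np.abs(y - target) > band * target
    idx = np.where(outside)[0]
    if idx.size == 0:                         # never leaves the band
        settling_time = t[0]
    elif idx[-1] + 1 < len(t):                # first instant after the last excursion
        settling_time = t[idx[-1] + 1]
    else:                                     # still outside the band at the end
        settling_time = t[-1]
    return overshoot_pct, settling_time

# Synthetic, lightly damped response toward the 0.15 rad target.
t = np.linspace(0.0, 20.0, 2001)
y = 0.15 * (1.0 - np.exp(-0.5 * t) * np.cos(1.2 * t))
print(step_metrics(t, y, target=0.15))
```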
Experimental Device
The experimental device comprised a spray boom, suspension system, data acquisition system, and control system, as shown in Figure 13. The spray boom was a purchased three-stage foldable boom, with a middle spray frame length of 2 m and two side spray arms on the left and right sides, both 3.5 m long. The mass of the spray boom was measured by a scale and was found to be 49 kg. The 3D model of the spray boom was established using Pro/Engineering, and the moment of inertia of the boom around its center of mass was calculated to be 93 kg·m². The first pendulum rod of the suspension system was 0.45 m long, and the second pendulum rod was 0.25 m long. The damping coefficient of the damper was 1875 N·s/m. The stiffness coefficient of the spring was 730 N/m.
Experimental Method
There are three methods for testing the stability of a spray boom: a field test, a simulation test, and a runway test.Field tests take place in a real field environment, but it is difficult to control the ground undulation, and this method cannot provide the required excitation signal.Simulation tests require special simulators for generating simulation stimuli, such as sine vibration or random signals, to force the stationary sprayer to move in different directions, and the test conditions are demanding.Therefore, runway tests were used in this study, placing obstacles on the runway and using a bumpy runway to generate low-frequency or high-frequency signals in order to simulate field operating conditions in a reproducible manner.
The commonly used types of stimuli for testing the stability of spray booms include step signals, sine signals, and pulse signals.Since the input represented by the step signal is relatively harsh and has strong representativeness, if the performance indicators of the system can meet the requirements under step signal excitation, the performance will also meet the requirements under other types of signal excitation.Therefore, in this study, a step signal was used as an input to excite the tractor wheels in order to test the stability of the spray boom [4].The test pavement was cement ground, and in order to facilitate disassembly, wooden boards of a certain height were placed down to simulate ground slopes.A cement pavement is more demanding than a soil environment and could also ensure the integrity of the signals required for the experiment [32].
In order to test the performance of the active suspension system on sloping terrain, an experimental area was designed to simulate field sloping terrain, as shown in Figure 14. The simulated slope represented a step excitation for the suspension system. If its angle was too small, the excitation would not be sufficient, and if it was too large, nonlinear factors could easily be introduced. Referring to the relevant literature and considering the experimental conditions, the angle was set to 1.5°. With the known spray boom length of 9 m and tractor wheelbase of 1.4 m, the corresponding dimensions in the figure could be calculated: h1 was 0.099 m, h2 was 0.136 m, h3 was 0.235 m, l1 was 1.4 m, and l2 was 3.1 m. The experimental process and data acquisition are shown in Figures 15 and 16.

Two ultrasonic sensors were mounted on the left and right tips of the boom, as shown in Figure 17. The angle between the boom and the ground could be calculated as

θ = arctan[(H1 − H2)/L]

where H1 represents the measured value of the right sensor; H2 represents the measured value of the left sensor; and L represents the installation distance between the two sensors. When an ultrasonic sensor is used, it emits a conical beam of ultrasonic waves. The ultrasonic waves within the beam angle can generate valid echoes. Since the ultrasonic sensor calculates distance based on the timing of the echo, the measured value represents the distance between the sensor and the nearest point. As long as the ground slope angle is not greater than the beam angle of the sensor, the vertical distance from the sensor to the ground can be measured. In field conditions, air temperature, humidity, and wind velocity can influence the performance of ultrasonic sensors. In our study, the ultrasonic sensor had a built-in temperature sensor to compensate for the effect of temperature fluctuation. Abnormal values caused by other factors could be eliminated through filter algorithms.
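The geometric relation above translates directly into code. The sketch below uses placeholder sensor readings and spacing, and the way the boom-to-ground angle is combined with the body angle (a simple sum) is an assumption rather than the paper's stated formula.

```python
import math

def boom_ground_angle_deg(h_right_m, h_left_m, sensor_spacing_m):
    """Boom-to-ground angle from the two tip-mounted ultrasonic readings."""
    return math.degrees(math.atan2(h_right_m - h_left_m, sensor_spacing_m))

def boom_inclination_deg(theta_deg, vehicle_inclination_deg):
    """Combine the boom-to-ground angle with the body angle sensor reading.
    The simple sum used here is an assumption, not the paper's stated formula."""
    return theta_deg + vehicle_inclination_deg

# Placeholder readings: right tip 1.02 m, left tip 0.95 m, sensors 8.5 m apart.
theta = boom_ground_angle_deg(1.02, 0.95, 8.5)
print(round(theta, 3), round(boom_inclination_deg(theta, -0.4), 3))
```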
The inclination angle of the spray boom could be calculated by the angle θ and the inclination angle of the vehicle body.The inclination angle of the vehicle body could be measured by the angle sensor installed on the sprayer frame.
Results and Discussion
The inclination angle of the vehicle body is easily influenced by factors such as ground excitation, tire excitation, and engine vibration. In order to reduce the influence of interference on the results, data with an absolute difference greater than 1.0° from the previous sampling value were first removed. Then, a five-point third-order smoothing method was used to process the inclination angle of the vehicle body. The inclination angle of the vehicle body and the boom is shown in Figure 18.
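One possible implementation of this pre-processing, using SciPy's Savitzky-Golay filter as the five-point third-order smoother and holding outliers at the previous accepted value instead of deleting them, is sketched below; the raw samples are placeholders.

```python
import numpy as np
from scipy.signal import savgol_filter

def preprocess_body_angle(angle_deg, jump_threshold=1.0):
    """Hold samples that jump more than `jump_threshold` degrees from the previous
    accepted value, then apply five-point third-order (Savitzky-Golay) smoothing."""
    angle_deg = np.asarray(angle_deg, dtype=float)
    cleaned = [angle_deg[0]]
    for a in angle_deg[1:]:
        cleaned.append(a if abs(a - cleaned[-1]) <= jump_threshold else cleaned[-1])
    return savgol_filter(np.asarray(cleaned), window_length=5, polyorder=3)

raw = [0.10, 0.15, 2.40, 0.20, 0.18, 0.25, 0.22, 0.30]   # 2.40 deg is a spurious spike
print(np.round(preprocess_body_angle(raw), 3))
```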
From Figure 18, it can be found that when the sprayer was traveling on the flat ground, the range of the inclination angle of the vehicle body was −0.35°~0.40°, while the inclination angle of the boom remained within −0.13°~0.17°. When the sprayer drove up the slope from the flat ground, there was a sudden change in the inclination angle of the vehicle body. The control system could make the inclination angle of the boom follow the change in the inclination angle of the vehicle body, so the boom could remain parallel to the ground. The response time was about 5 s. When the sprayer was on the slope, the range of the inclination angle of the vehicle body was 1.15°~1.89°, while the inclination angle of the boom remained within the range of 1.42°~1.60° after balancing. The experimental results showed that the control system for the active suspension system designed in this paper could automatically adjust the inclination angle of the boom according to the change in the slope angle and meet the requirements of spray operation.
At present, there are no clear requirements regarding the response time of a control system during the field operation of a boom sprayer.Ref. [21] reported that the response time of the boom inclination angle was approximately 4 s when using adaptive fuzzy sliding mode control.Ref. [22] reported that it took 5.71 s for the boom roll control system to bring the boom to the equilibrium position.Referring to the above research conclusions, the response time of our study could meet the actual control requirements.In future research, we will focus on shortening the response time by utilizing a linear actuator with a higher power and further optimizing the control algorithm.
Conclusions
An active suspension control system based on a CAN bus was constructed, consisting of a main control node, two distance measurement nodes, a vehicle inclination detection node, and an execution node. The transfer functions of the active suspension and electric linear actuator were established. In order to improve the performance of the control system, the initial parameters of the fuzzy PID controller were optimized using the PSO algorithm. The fuzzy controller was constructed based on the deviation and deviation rate of the boom inclination angle, and the fuzzy control algorithm was used to optimize the increments in the PID control parameters. The simulation results showed that compared to the conventional PID controller and fuzzy PID controller, the PSO-based fuzzy PID controller exhibited a reduced overshoot and decreased settling time, effectively improving the control performance. The results of the simulated slope test showed that the boom could quickly follow changes in the ground slope angle, and the fluctuation range of the inclination angle of the boom was significantly lower than that of the vehicle body. This indicated that the control system for double-pendulum active spray boom suspension effectively isolated high-frequency disturbances and followed low-frequency ground undulations, meeting the operational requirements.
Figure 3. Step response of the actuator piston rod velocity.
3. Design of the Control System
3.1. Design of the Control System Hardware
3.1.1. Overall Structure of the Control System
Figure 4. Overall structure of the control system.
3.1.2. Hardware Selection: (1) Main control chips of the functional nodes
The system used a CAN conversion module to convert USB/TTL signals to CAN bus signals and vice versa. When the communication adapter received CAN bus signals, it converted them to TTL signals and then to USB interface signals. The process was reversed when converting TTL/USB signals to CAN bus signals.
3.2. Design of the Fuzzy PID Controller
3.2.1. Structure of the Fuzzy PID Controller. A digital PID controller was used to adjust the input voltage of the electric linear actuator; its general expression is the standard discrete PID control law.
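The expression referred to is presumably the familiar positional form of the digital PID law, u(k) = Kp·e(k) + Ki·Σ e(j) + Kd·[e(k) − e(k−1)]; a minimal sketch of one control step (class and variable names are illustrative, not from the paper):

class DiscretePID:
    # Positional discrete PID: u(k) = Kp*e(k) + Ki*sum(e) + Kd*(e(k) - e(k-1)).
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e_sum = 0.0
        self.e_prev = 0.0

    def step(self, error):
        self.e_sum += error
        u = self.kp * error + self.ki * self.e_sum + self.kd * (error - self.e_prev)
        self.e_prev = error
        return u  # e.g. the voltage command sent to the electric linear actuator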
Figure 6. Structure of the fuzzy PID control system.
3.2.2. Fuzzification of Input and Output Variables. The basic domain of the input e was [−20°, 20°], which was converted to radians as [−0.35, 0.35]. The basic domain of the input ec was [−40°, 40°], which was converted to radians as [−0.70, 0.70]. The fuzzy domains of e and ec were both [−6, 6], so the quantization factor ke of e was 17.14 and the quantization factor kec of ec was 8.57. We used seven linguistic values for the fuzzy variables, namely negative big (NB), negative medium (NM), negative small (NS), zero (ZO), positive small (PS), positive medium (PM), and positive big (PB). Triangular membership functions were used because they are suitable for online adjustment, as shown in Figure 7. Increasing the proportional coefficient Kp can speed up the system response, but if it is too large the system becomes unstable; likewise, a too large Kd may lead to instability. Therefore, the basic domains of ∆Kp, ∆Ki, and ∆Kd were all set as [−3, 3], and the quantization levels were {−3, −2, −1, 0, 1, 2, 3}. The corresponding membership functions are shown in Figure 8.
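A minimal sketch of this fuzzification step (the quantization factors follow from ke = 6/0.35 and kec = 6/0.70; the evenly spaced triangle centres and the half-width of 2 are assumptions for illustration, not values given in the paper):

import numpy as np

FUZZY_MIN, FUZZY_MAX = -6.0, 6.0
KE  = 6.0 / 0.35   # quantization factor for e,  ~17.14
KEC = 6.0 / 0.70   # quantization factor for ec, ~8.57

# Seven linguistic values with evenly spaced triangle centres: NB NM NS ZO PS PM PB
CENTRES = np.linspace(FUZZY_MIN, FUZZY_MAX, 7)   # [-6, -4, -2, 0, 2, 4, 6]

def triangle(x, centre, half_width=2.0):
    # Triangular membership function with the given centre and half width.
    return max(0.0, 1.0 - abs(x - centre) / half_width)

def fuzzify(value, factor):
    # Scale a crisp input into the fuzzy domain and return its membership degrees.
    x = np.clip(value * factor, FUZZY_MIN, FUZZY_MAX)
    return [triangle(x, c) for c in CENTRES]

# memberships_e  = fuzzify(boom_angle_error_rad, KE)    # names are illustrative
# memberships_ec = fuzzify(error_rate_rad, KEC)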
Figure 7. Membership functions of input e and ec.
Figure 8. Membership functions of ∆Kp, ∆Ki, and ∆Kd.
Figure 10. PSO algorithm flowchart. Note: n represents the number of particles, Dim represents the dimensionality of the particles, Ub represents the maximum value of the search range, Lb represents the minimum value of the search range, fid represents the optimal fitness value of an individual particle, fgd represents the global optimal fitness value, fmin represents the minimum fitness value, and fzi represents the fitness value of the i-th particle.
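A compact sketch of the global-best PSO loop summarised in the flowchart, minimising a user-supplied fitness function such as an integral-error cost of the simulated closed loop; the swarm size, inertia weight and acceleration coefficients below are common textbook defaults, not the paper's settings:

import numpy as np

def pso(fitness, lb, ub, n=30, dim=3, iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    # Global-best PSO: returns the best parameter vector (e.g. initial Kp, Ki, Kd) and its fitness.
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    x = rng.uniform(lb, ub, (n, dim))          # particle positions
    v = np.zeros((n, dim))                     # particle velocities
    fz = np.array([fitness(p) for p in x])     # current fitness of each particle
    pbest, f_id = x.copy(), fz.copy()          # personal bests
    g = np.argmin(f_id)
    gbest, f_gd = pbest[g].copy(), f_id[g]     # global best
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lb, ub)
        fz = np.array([fitness(p) for p in x])
        improved = fz < f_id
        pbest[improved], f_id[improved] = x[improved], fz[improved]
        g = np.argmin(f_id)
        if f_id[g] < f_gd:
            gbest, f_gd = pbest[g].copy(), f_id[g]
    return gbest, f_gd

# best_params, best_cost = pso(closed_loop_cost, lb=[0, 0, 0], ub=[50, 10, 5])  # closed_loop_cost is illustrative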
15 rad as the control target, a fuzzy model was established through the built-in fuzzy logic toolbox of Matlab R2016a. The fuzzy logic file was loaded into the fuzzy logic controller model. The values of the parameters in the transfer function of the active suspension were: l1 = 0.45 m, l2 = 0.25 m, l3 = 0.2 m, m = 49 kg, I = 93 kg·m², c·ld² = 200 N·m·s·rad⁻¹.
Figure 11. Simulation model of the PSO-based fuzzy PID boom control system.
Figure 12. Step response of the inclination angle of the boom for different control strategies.
Figure 16. Data acquisition. Two ultrasonic sensors were mounted on the left and right tips of the boom, as shown in Figure 17. The angle between the boom and the ground could be calculated as θ = arcsin((H1 − H2)/L), where H1 and H2 are the heights measured by the two sensors and L is the distance between them.
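A direct implementation of this relation (variable names are illustrative; h1 and h2 are the two ultrasonic height readings and sensor_distance the spacing L between the sensors):

import math

def boom_ground_angle_deg(h1, h2, sensor_distance):
    # Angle between boom and ground from the two ultrasonic height readings, in degrees.
    return math.degrees(math.asin((h1 - h2) / sensor_distance))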
Figure 18. Test curves of the inclination angle of the vehicle body and the boom.
Table 1. Fuzzy rules of ∆Kp, ∆Ki, and ∆Kd.
|
v3-fos-license
|
2019-12-10T22:44:24.943Z
|
2019-12-08T00:00:00.000
|
209174264
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2072-4292/11/24/2993/pdf",
"pdf_hash": "8b713f29d8c4ecc754632816518f01c55908552f",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42433",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering",
"Education",
"Computer Science"
],
"sha1": "bb5db4475f69dcce7a0367d87904c9d22f5509c3",
"year": 2019
}
|
pes2o/s2orc
|
An Innovative Virtual Simulation Teaching Platform on Digital Mapping with Unmanned Aerial Vehicle for Remote Sensing Education
: This work mainly discusses an innovative teaching platform on Unmanned Aerial Vehicle digital mapping for Remote Sensing (RS) education at Wuhan University, underlining the fast development of RS technology. Firstly, we introduce and discuss the future development of the Virtual Simulation Experiment Teaching Platform for Unmanned Aerial Vehicle (VSETP-UAV). It includes specific topics such as the Systems and function Design, teaching and learning strategies, and experimental methods. This study shows that VSETP-UAV expands the usual content and training methods related to RS education, and creates a good synergy between teaching and research. The results also show that the VSETP-UAV platform is of high teaching quality producing excellent engineers, with high international standards and innovative skills in the RS field. In particular, it develops students’ practical skills with technical manipulations of dedicated hardware and software equipment (e.g., UAV) in order to assimilate quickly this particular topic. Therefore, students report that this platform is more accessible from an educational point-of-view than theoretical programs, with a quick way of learning basic concepts of RS. Finally, the proposed VSETP-UAV platform achieves a high social influence, expanding the practical content and training methods of UAV based experiments, and providing a platform for producing high-quality national talents with internationally recognized topics related to emerging engineering education.
Introduction
Remote Sensing Core Curriculum program was initiated in 1993 to meet the demands for a college-level set of resources to enhance the quality of education across national and international campuses [1]. The basic knowledge in RS is generally acquired through a bachelor's degree program, within the fundamental skills taught in surveying. Surveying is based on one of the oldest sciences, i.e., geodesy, referring to the science of measuring, understanding and mapping the Earth's shape and surface together with the definition of global coordinate systems and data. It is a traditional discipline that has greatly evolved with the recent advances in the past two (1) Much of the teaching content involves complicated theories and techniques, e.g., UAV digital mapping technology [18], GNSS data processing and adjustment theory [19] or RS image processing [20]. Due to a limited amount of time, the lectures cannot be comprehensive, resulting in less prominent teaching topics. It is not easy for students to grasp the key concepts of the course, hence, increasing the difficulty of learning the bulk of the theory. (2) The course is relatively difficult, involving many formulas, complex calculations and the assimilation of a long list of teaching methods. Therefore, students' interest in learning quickly decreases with time; therefore, they are not mastering the topic or performing at the required level (Bachelor or Master). (3) Although there are practical exercises with RS equipment (e.g., UAV) and dedicated software to analyze the recorded data, students' feedback shows that this teaching method is insufficient; therefore, the experimental exercises do not achieve the goal at the required level.
(4) Technical background (e.g., geostatistics and image processing) required by the fast development of modern technologies grows at a fast pace, but the training of scientific and technological talents in RS are slowing down compared with the training of other engineering areas (e.g., civil Engineering, Disaster prevention and mitigation Engineering), hence, resulting in a decrease of student's enrollment in scientific research at master and PhD levels.
Given this scenario, universities should expand their education programs with new developed/emerged technologies and scientific modules (e.g., UAV-based geomatics operations, virtual simulation experimental teaching platform with computer technology). This is also an effective way to gain practical skills, analyze and solve problems, and create new innovative and entrepreneurial talents. As a result, it is beneficial to introduce UAV-RS based on virtual simulation experimental teaching and learning pattern at university/college studies with an emphasis on RS, natural hazards and global change monitoring. Our case study is the construction and implementation of the VSETP-UAV platform within the digital mapping program in the School of Geodesy and Geomatics (SGG) at Wuhan University.
Virtual Simulation Experiment Teaching Platform: Characteristics and Goals
In order to meet the requirements of important changes in acquiring knowledge and teaching methods and enhancing the in-depth integration of information technology and education, the Chinese Ministry of Education has sponsored the development of a national VSETP platform [21].
The National Virtual Simulation Experimental Teaching Project (NVSETP) adheres to the principle of 'student center, output orientation, continuous improvement', and highlights application-driven, resource sharing, and informatization of experimental teaching. High-quality experimental teaching is used to promote the construction of higher education [22,23]. Table 1 lists the NVSETP (in the RS field) implemented at colleges and universities in 2018. Note that the implementation is still on-going. The VSETP mainly has the following characteristics: highlighting the student-centered experimental teaching philosophy, accurate and appropriate experimental teaching content, innovative and diverse teaching methods, advanced and reliable experimental research and development technology, stable and safe open operation mode and continuous improvement of the experimental evaluation system.
The main objectives of NVSETP are "to produce broad-based, flexible graduates who can think interactively, solve problems, and be life-long learners" [24]. It is bound by laws and regulations on Remote Piloted Aircraft System (RPAS), and on experimental projects with safety risks. It is intended to establish virtual experimental environment and experimental objects for simulation, together with the construction of a virtual simulation experiment teaching resource based on internet-based online sharing. Through the 'virtual and real combination' teaching method, the experimental teaching is effectively improved, enabling students to intuitively experience the entire workflow of drone digital mapping, and enhancing the understanding of the digital mapping principle, the image data processing technology and the core algorithms related to drone.
Characteristics and Objectives of VSETP-UAV Teaching Platform on RS at SGG
The School of Geodesy and Geomatics was founded in 1956 as the Wuhan Institute of Surveying and Mapping. In this section we define the Virtual Simulation Experimental Teaching Platform of UAV digital mapping platform (VSETP-UAV) using a case study at Wuhan University [25]. The online learning website (Chinese webpage) interface (in Chinese) of VSETP-UAV is displayed in Figure 1. Figure 1 shows that the VSETP-UAV provides a virtual simulation teaching mode for UAV digital mapping experiment. Through a combination of "virtual data acquisition, virtual reality and cross integration of experimental data", we integrate the teaching of virtual simulation experiments with traditional practical experiments and classroom theory. It contributes to develop students' comprehensive practice and innovative ability. The objectives of experimental projects are as follows: (1) Learn image acquisition and image control point measurement of drone digital mapping; (2) Learn the process of image data processing and digital production; (3) Teach students' self-learning and innovative practical ability.
For the teachers, the VSETP-UAV has realistic objectives and uses real-time assessments, while the teacher only gives explicit educational objectives. From the student's point of view, VSETP-UAV stimulates collaborative and cooperative learning (problems solving). It then allows continuous improvements in their products among others (i.e., team work).
Systems Design and Function of VSETP-UAV Teaching Platform
In order to facilitate online training skills and according to UAV Photogrammetry workflow and experimental teaching needs, a virtual simulation experiment platform for digital mapping of drones consists of three experimental units using a specific teaching software. The overall architecture of this software is described in Figure 2. Table 2 displays the equipment and software requirements for VSETP-UAV platform. The overall architecture of this software is described in Figure 2. It displays the relationship between various modules (student experiments, teaching) and what is stored in the database. To ensure the operation of the VSETP, VSETP-UAV adopt cloud data computing management technology, relying on high-performance servers and a powerful campus network, and it can meet thousands of concurrent accesses (i.e., for students learning online). Figure 2 shows that VSETP-UAV breaks the information barrier between different operating systems. VSETP-UAV supports both PC terminal and PAD terminal, and it also supports cross-platform applications. VSETP-UAV platform uniformizes the sharing of data and information by the central server group. The experimental teaching activities are unified through the web portal of the VSTEP-UAV inside and outside the school through teacher and student accounts. Teachers and students need to use their personal accounts and validate it by clicking on the "Login" button to enter the system for teaching management course learning (i.e., the public account for studentss is 15010440119 with password 332211). The two functions, i.e., experiment and management, are taught in separated modules: This covers the whole process of experimental training (see the Experimental flow chart of VSETP-UAV platform in later Section 3.4). The entire teaching activities are completed through the combination of online experiments and real-time evaluation. The information of the user during the experiments can be recorded into the database in real time through the server. The digital mapping with UAV is one of the main activities for the national topographic maps, especially with the continuous development of RS [26,27]. This platform covers all aspects of the work flow including the 'data acquisition', 'data processing' and 'data output', to enable students to understand and master the knowledge points in each operation step as explained in Table 3 ("n" means the number of knowledge points for the VSETP-UAV platform). b) VSETP-UAV Management Module: Firstly, VSETP-UAV provides simulation experiment web interface and information management function for different users, e.g., students want to do experiments via the online platform. Each student or teacher has a separate account. After the user logs in, they are guided by the server to the experimental function module (see step 1, 2, 3 and 4 in Figure 1). Secondly, the VSETP-UAV system records the information of all aspects of the experimental teaching process into the database (MySQL), and uses this database to perform any statistical analysis. Through the database real-time recording functions, all (behavioral) data on the VSETP-UAV platform are stored in the server together with some statistical analysis (e.g., Student learning ability, learning effect and teaching effect analysis). Finally, these statistical analysis results will be fed back to students and teachers to help the adjustment of learning focus and to improve any teaching methods.
Teaching and Learning Strategies for VSETP-UAV Teaching Platform
As mentioned above VSETP-UAV experiment platform includes: Aerial area selection (place for UAV-digital mapping filed work), UAV Route Planning, UAV instrument installation, Image acquisition and storage, layout of image control points and measurements, Image matching, Aerial Triangulation Digital line drawing (DLG) production, Digital elevation model (DEM) production, Digital orthophoto map (DOM) production [28][29][30].
In order to facilitate the online experiment, the VSETP described in Table 1 is decomposed into four experimental modules. It can be applied to the inter-class experiment of Digital Topography, GNSS Principles and Applications and Digital Photogrammetry for surveying and mapping majors. It can also be applied to the experimental teaching of digital mapping practice, GNSS measurements practice, digital photogrammetry practice and modern surveying and mapping comprehensive practice. The UAV digital mapping virtual simulation platform mainly through the 'virtual and real combination', 'online and offline collaboration' teaching methods, the realization of the 'Delicacy Management Experimental teaching' mode. The specific teaching methods/functions are as follows: a) Virtual and real combination. Firstly, after the theory of 'control point data collection' is taught, the students learn the technical methods and requirements of 'layout and data acquisition of image control points' through the corresponding virtual simulation experiment module software. Secondly, the actual experiment is arranged to complete the 'image control point data acquisition' of the specified task. In practice, the teacher can use the virtual software to provide the thin soft links (Difficulties and Key Points) in the online practice for the students, and conduct targeted guidance and explanations. Finally, the teachers through the 'UAV image processing' virtual simulation experiment module software experimental information real-time evaluation feedback function to complete the inter-class experimental assessment. Therefore, the "virtual and virtual combination" teaching method has strengthened the "study-practice-test" experimental teaching process based on "experimental teaching quality." b) Online and offline collaboration: For 'teaching' process, classroom theory teaching and experimental demonstration teaching intuitively and reproduce the 'measurement work scene/field work' and 'measurement work flow/office work' of UAV digital mapping by means of online virtual imitation experiments. It enriches the content and methods of teaching, and improves the teaching efficiency, and it allows teachers to have more time and energy to carry out targeted guidance in the actual experimental teaching.
Furthermore, for students' "learning", the online virtual experiment breaks through the limitations of traditional experiments limited by space, time and equipment, enabling students to carry out experimental training independently, and "selecting" their own training content on demand and strengthening it, improving the actual experiment under the line. The operation efficiency has realized the students' "autonomy" and "individualization" learning, and has exercised the students' ability of independent study. During the process of "teaching" and "learning", with the aid of social tools such as QQ and WeChat, through the "online and offline collaboration", the interaction between teachers and students is strengthened, and the relationship between teachers and students is also tight.
In summary, the teaching method of "virtual simulation and real data processing operation combination" and "online and offline collaboration", together with the experimental teaching mode of "fine" and "real-time objective experimental evaluation mechanism" have been discussed.
Experimental Methods and Steps of VSETP-UAV Teaching Platform
According to the workflow displayed in Figure 1 and Table 3, the modules can be sequentially executed according to the actual measurement workflow, and can be independently performed according to the needs of the teacher. The module adopts the experimental method of "Combination of virtual simulation and experimental data" in the comprehensive practice teaching. Based on the virtual simulation experiment to understand the internal operation process, the actual software processes the data and completes the production and display. Moreover, the VSETP-UAV platform can track, automatically record, analyze and evaluate the operational information of the experimental process in real time with the experimental result feedback model.
Before conducting experiments, students first need to access the webpage of the VSETP-UAV platform. They can log in to the experimental platform with their student number and registration password, and then conduct experiments according to their own or their teachers' needs.
UAV image acquisition and Image control point data acquisition model can be carried out through the experimental method of "human-computer interaction" way. The students use the mouse to control the VSETP-UAV and perform virtual simulation experiments on the measurement area, to complete UAV course area selection, VSETP-UAV Teaching Platform Airspace Application, UAV installation, Camera Parameter Setting, UAV image acquisition, Image control point measurement experimental content and tasks and the Experimental step flow chart of VSETP-UAV. Student interaction steps are as follows: Step 1: Selection of experimental area/Location of the "UAV-experimental" and Application for Airspace for UAV experimental. Defining the area on the map according to the existing data, the specific operation mode is as follows: after logging in to the system, select the range of the area with a rectangular frame, then click with the left mouse button (see in Figure 3). Once the experimental area is determined, then the students should know the relevant provisions of the Civil Aviation Administration of China on "Aircraft Management of Unmanned Aerial Vehicles", and submit relevant application materials. For more detail on the user-interface of airspace applications please see the "Experimental steps" of "UAV Image Acquisition" as shown in previous Figure 1 with the online VSETP-UAV platform.
Step 2: UAV air flight preparation, including UAV installation (simulate the installation of the sensor with VSETP-UAV platform), security check and communication link, see Figure 4.
Step 3: UAV Camera Parameter (e.g., camera model, camera focal length) Setting [31], select a different camera model, the image frame will change and you will get different aerial camera technical parameters (see in Figure 5).
Step 4: Aerial photography parameter selection and calculation. According to the defined experimental area and the UAV camera parameters, students calculate the following aerial technical parameters: 1) pixel size (ground dimension), 2) photography altitude, 3) photography baseline, 4) interval between adjacent routes of the UAV survey, 5) exposure interval, 6) number of routes, and 7) number of images on each route (see Figure 6).
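These quantities follow from standard photogrammetric flight-planning relations (ground sample distance GSD = H·pixel pitch/f, with the baseline and route spacing derived from the forward and side overlaps). The sketch below is a generic illustration with assumed camera and overlap values, not the platform's actual computation:

import math

def flight_plan(area_w, area_l, gsd,               # area size (m) and target ground sample distance (m/px)
                f=0.016, pix=3.9e-6,               # focal length (m) and pixel pitch (m), assumed camera
                img_w=5472, img_h=3648,            # image size in pixels, assumed camera
                forward=0.8, side=0.7):            # forward and side overlap ratios
    h = gsd * f / pix                              # photography altitude (m)
    foot_w, foot_l = img_w * gsd, img_h * gsd      # ground footprint across / along track (m)
    baseline = foot_l * (1 - forward)              # photo baseline (exposure spacing), m
    spacing = foot_w * (1 - side)                  # interval between adjacent routes, m
    n_routes = math.ceil(area_w / spacing) + 1
    n_per_route = math.ceil(area_l / baseline) + 1
    return dict(altitude=h, baseline=baseline, route_spacing=spacing,
                routes=n_routes, images_per_route=n_per_route)

# flight_plan(area_w=500, area_l=800, gsd=0.05)  # altitude is roughly 205 m for the assumed camera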
Step 5: UAV image acquisition and layout of image control points [32]. Observe the status of the aircraft, GNSS satellite signals, remote control, camera and battery power to confirm that flight requirements are met. Image acquisition is then started according to the calculated aerial photography parameters; the capturing process is displayed, and the acquired images are displayed, stored and inspected (see Figure 7). Step 6: Image control point measurement and analytical aerial triangulation. Coordinate measurements are made on each image control point using GNSS RTK technology [33]. First, set the project name, coordinate system and radio parameters and perform the point correction; then measure the image control points one by one; after all image control points are measured, export the coordinate results. Image control point measurement mainly includes: GNSS reference station and GNSS mobile (rover) station settings [34], conversion parameters, point correction (to obtain the conversion parameters between the WGS84 coordinate system and the local coordinate system) and the control survey [35]; the interactive interface is shown in Figure 8. Analytical aerial triangulation is then carried out: corresponding (same-name) image points are matched, a bundle (beam method) adjustment is performed to calculate the ground point coordinates and the position and attitude of each image, and an accuracy report is output.
Step 7: Digital Line Graphic (DLG) and Digital Elevation Model (DEM) generation. For DLG generation, the images are oriented and parameters such as the map scale and map coordinates are input; the map is then measured point by point, attribute codes are entered and the vector data are edited. For DEM generation, a triangulated irregular network (TIN) is constructed on the ground points densified by aerial triangulation; the grid resolution and sampling method are set and interpolation produces a digital surface model, from which buildings and vegetation points are removed to obtain the DEM. The DLG and DEM results are shown in Figure 9. Step 8: Digital Orthophoto Map (DOM) generation. In this step, students input the DEM, set the resolution of the orthophoto map and generate the DOM. After all images are orthorectified, image stitching lines are generated and all images are stitched; the results are shown in Figure 10. Step 9: Result display. The generated DOM is loaded and set as the bottom layer, then the DLG or DEM is loaded and set as the top layer; the processing results are displayed by superposition of the DOM and DEM/DLG layers (see Figure 11). Step 10: According to the student's online operations (steps 1~9), the result feedback module of VSETP-UAV automatically generates a result feedback form.
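A minimal sketch of the gridding step that turns scattered ground points into a regular-grid DEM, using SciPy's generic griddata (TIN-based linear) interpolation; the resolution and method are illustrative and not the platform's internal algorithm:

import numpy as np
from scipy.interpolate import griddata

def grid_dem(x, y, z, resolution=1.0):
    # Interpolate scattered ground points (NumPy arrays x, y, z) onto a regular grid of the given resolution (m).
    xi = np.arange(x.min(), x.max(), resolution)
    yi = np.arange(y.min(), y.max(), resolution)
    gx, gy = np.meshgrid(xi, yi)
    dem = griddata((x, y), z, (gx, gy), method='linear')   # Delaunay/TIN-based linear interpolation
    return gx, gy, dem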
From the above discussion, the VSETP-UAV platform utilizes technologies such as virtual reality, multimedia, human-computer interaction, database and network communication, with its technical features such as real-time calculation, visualization, simulation and simulation and dynamic interaction, constructed a virtual simulation experiment teaching mode for drone digital mapping of "virtual and virtual integration, virtual reality and cross integration". It realizes the deep integration of virtual simulation experiment teaching and traditional practical experiment teaching and classroom theory teaching, guiding students to learn independently and design independent experiments in the digital environment, and comprehensively cultivate students' comprehensive practice and innovation ability.
It innovates in terms of time and space restrictions of teaching and learning. The online and offline collaborative teaching methods were constructed to realize the two-way interaction between teaching and learning. It also enriches the choice of students' learning content and learning style, and strengthens students' individualization and independent learning.
Finally, through the intelligent management functions such as tracking, statistics and analysis of experimental information in the virtual simulation experiment platform, it is possible to track, count and prompt the experimental information carried out by the students in real time, capture the mastery of each student's knowledge points and conduct research on the students. The overall situation of the experiment was analyzed globally, and the quality control of refined teaching that could not be carried out in the traditional teaching process of surveying and mapping was realized.
In summary, VSETP-UAV expanded the practical content and training methods of traditional surveying and mapping/Remote sensing course experiments, and created a high-level combination of teaching and research, production and training as a whole. It provides a platform for generating high-quality compound talents with internationally competitive on Emerging engineering education.
Results of Implemented VSETP-UAV Teaching Platform
The VSETP-UAV platform has been applied to classroom teaching, student self-learning and communication, assessment and evaluation and skill competition at SGG. At present, this virtual simulation experiment platform has been applied to the fields of "Digital Topography", "GNSS Principles and Applications" and "Digital Photogrammetry". Figure 12 shows the inter-class experiment of GNSS, as well as the experimental teaching of "Digital Photogrammetry Internship" and "Modern Surveying and Mapping Comprehensive Internship" course. The number of courses and exams is about 1600, and the number of self-learning students is more than 5600. It is widely used in the "Second Class" students "open independent experiment", skill competition training and "innovative practice project". The main implementation effects are reflected in the following aspects: virtual course teaching effect, student evaluation of the course, Student's practical ability and innovative ability and social influence.
Firstly, the virtual course teaching results (see Table 4) show that, once the virtual simulation experiment mode was applied in experimental course teaching, the improvement in the effect and quality of experimental teaching was significant for both teachers and students; in particular, the cultivation of students' practical skills is most evident in the improvement of hands-on ability and meticulousness. In addition to the improvement in the teaching process, we also achieved good results in the cultivation of students' innovative practice ability. During the construction of the virtual simulation course, teachers guided the students to do a large amount of design, development and testing work, and students conducted independent and innovative experiments. Students have achieved many results in patents (e.g., Camera shutter delay time measuring device and method-CN105445774A, Display method and device-ZL201510708180.3) and software development (e.g., GNSS simulation experiment teaching system-2016SR056847, Total station virtual simulation measurement system-2016SR075387 and Surveying and Mapping Industry Integration Management System V1.0 based on Android+ASP.NET platform-2018SR420057). These results reflect the cultivation and promotion of students' innovative practice abilities with the application of the VSETP-UAV platform.
Secondly, after participating in the virtual simulation experiment platform, the overall satisfaction of students in the evaluation of experimental practice teaching methods and curriculum forms is high, and most students think that the curriculum exceeded their expectations. The students pointed out that: 1) the design of VSETP-UAV experiment is comprehensive, detailed process, clear thinking; 2) more vivid, very fun and at the same time exercise personal thinking ability and hands-on ability; 3) combined with the virtual work, we can cultivate surveying thinking and ability very well; 4) the comprehensiveness of the problem of thinking has been exercised.
Thirdly, there is a platform application effect. The VSETP-UAV platform resources are jointly developed by Wuhan University of Surveying and Mapping and social enterprises. During the process of development and application, the participating teachers lead the students to do a lot of design, development and testing work. Students have carried out independent experiments and innovative experiments to effectively serve the cultivation of students' innovative ability, and achieved many achievements in research and development patents and software development (e.g., related software development, patents, papers).
In addition, the VSETP-UAV platform provides learning resources for undergraduates and graduate students, it helps students to fund self-developed experiments and surveying skills training, helps students participate in the National Surveying and Mapping Skills Competition, science and technology competitions at all levels and gives awards, e.g., in the 5th College Students Surveying and Mapping Skills Competition for China universities, our team won the special prize for the surveying and mapping program design, 1:500 digital mapping, second-class leveling and the group's first prize. The project "Surveying and Mapping External Service and Management System" won the "Internet +" College Student Innovation and Entrepreneurship Competition Hubei Hubei Bronze Award. More students participate in the National College Students Surveying and Mapping Technology Innovation Paper Competition and achieve better results. It improves students' practical ability in field data collection and internal data processing, and improves the comprehensive ability of college students to solve production practice problems. Last but not least is the social influence, the talent training platform, course system, syllabus and laboratory construction plan involved in this virtual simulation experiment platform has been promoted and applied in the talent training and teaching of surveying and mapping engineering majors and related majors in colleges and universities nationwide. It has played a leading and demonstrative role in the training of more than 140 colleges and universities in the field of surveying and mapping professionals in CHINA.
Discussion
VSETP-UAV teaching platform is based on the principle of "combination of virtual and real experiments". This was achieved by setting up three experimental links: UAV image acquisition, image control point data acquisition and UAV image processing, in order to realize the virtual simulation experiment of drone digital mapping. It is suitable for inter-class experiments and comprehensive internships in Remote Sensing and Surveying and Mapping Engineering and related majors. The project broke through the time and space restrictions of traditional surveying and mapping experiment teaching. This allows the experimenter to immerse the entire process of drone digital mapping technology, which is beneficial to accurately understand and fully grasp relevant knowledge points. It realized the refined teaching that cannot be carried out in the traditional experimental teaching process; therefore, the experimental teaching effect has been effectively improved.
From Section 3, 'Virtual Simulation Experiment Teaching Platform for Remote Sensing Higher Education at SGG of Wuhan University', we showed that the VSETP-UAV platform has played a significant role in modern higher education in RS. VSETP-UAV platform covers the whole process of experimental training. The teaching activities are completed through the combination of online experiments and real-time evaluation. The information of the user in the experiment can be recorded into the database in real-time through the server. Virtual simulation experiment teaching information platform is a new teaching medium, and it combines with informatization experiment environment and internet-based online sharing for modern RS education. Compared to the traditional teaching methods, VSETP-UAV platform are more conducive to promote the continuous improvement of the quality of personnel training, and it has a great value for training "high-quality, internationalization, innovation" surveying and Mapping Engineering professionals.
Conclusions
This paper introduced the teaching status of undergraduate education on "Digital Mapping" with Unmanned Aerial Vehicle for Remote Sensing at Wuhan University. Combining the talents of RS, surveying, mapping engineering and modern information technology application, we proposed an innovative virtual simulation teaching platform on Digital Mapping with UAV for RS Education. Firstly, we presented a short analysis on the issues and challenges of undergraduate education on RS. We then evaluated the performance on the Higher Education in Virtual Simulation Experiment Teaching platform of UAV digital mapping for Remote Sensing. We explored the VSETP on unmanned aerial vehicle and analyzed the initial results achieved by the platform. The developed VSETP-UAV platform contributes to the integration of virtual simulation experiment teaching, traditional practical experiment teaching and classroom theory teaching. The online and offline collaborative teaching methods are built to realize the two-way interaction between teaching and learning. It breaks through the time and space restrictions of teaching and learning, enriches the choice of students' learning content and learning style and strengthens students' individualized and independent learning. Moreover, it expands the practical content and training methods of traditional RS experiments, generating a high-level combination of teaching, research, production and training altogether. The main implementation effects are reflected in virtual course teaching, student evaluation of the course, gain of students' practical ability and innovative ability and social influence. It shows that the VSETP-UAV platform contributes to train excellent engineers of high quality, internationalization and innovation in the RS field, in particular to teach students' practical skills with hands-on ability and meticulousness. Finally, foreign students can only use online translation software, e.g., the use of Internet Explorer to browse VSETP-UAV platform, because the data resources are mainly in Chinese. In the near future, we will improve the platform with an English online learning system. We will also continue to improve the experimental interactive experience in data processing and analysis of results based on project teaching application feedback.
|
v3-fos-license
|
2020-07-09T09:15:03.254Z
|
2020-06-01T00:00:00.000
|
225830217
|
{
"extfieldsofstudy": [
"Environmental Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2020/13/epjconf_ilrc292020_08001.pdf",
"pdf_hash": "4e4825b78429b73644b48633f8a965062a46bb2e",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42434",
"s2fieldsofstudy": [
"Environmental Science",
"Mathematics"
],
"sha1": "be48346e271993aa1661d6ab8f1da2a52193e4dc",
"year": 2020
}
|
pes2o/s2orc
|
A NEW METHOD FOR A SIDE-SCATTERING LIDAR TO RETRIEVE THE AEROSOL BACKSCATTERING COEFFICIENT
The side-scattering lidar based on the CCD camera is a powerful tool to measure the near-ground aerosol, which is most interesting for the environmental and meteorological departments. The inversion method for the side-scattering lidar is different from the conventional Fernald method due to the differences in two lidar equations. A new inversion method for the side-scattering lidar to retrieve the aerosol backscattering coefficient is proposed for the first time, which is based on the aerosol backscattering coefficient at the ground as a restriction condition. Numerical simulation is used to analyze the convergence of this new method. Case studies indicate that this new method is reasonable and feasible.
INTRODUCTION
Aerosols are colloids of fine solid particles or liquid droplets suspended in the atmosphere. Atmospheric aerosols have substantial influences on human health [1][2], air quality [3][4] and climate change [5][6], and their loading has significantly increased. Backscattering lidar therefore plays an important role in studies of the atmospheric environment, air pollution and diffusion, owing to its capability of continuously providing range-resolved atmospheric profiles. However, the backscattering lidar system has a shortcoming in the lowest hundreds of meters because of the overlap factor caused by the configuration of the transmitter divergence and the receiver's field of view (FOV) at ranges close to the instrument [7]. This limits the application of backscattering lidar to near-range measurements, especially for a fixed vertical-pointing lidar [8]. Therefore, to study aerosol properties on hazy days, a new technique without an overlap factor near the ground would be quite useful. A new inversion method for the side-scattering lidar to retrieve the aerosol backscattering coefficient is also proposed for the first time, which uses a prior reference value measured by the instrument itself at the ground as a constraint.
METHODOLOGY
In the side-scattering lidar, we receive side-scattering light instead of backscattering light and use a CCD detector instead of a telescope [9]. The side-scattering lidar equation can be written as P(z,θ) = P0 · K · A · β(z,θ) · ∆θ · Tt · Tr / D, where P(z,θ) is the received photoelectron number in a pixel for the altitude z and the scattering angle θ, P0 is the laser photoelectron number, K is the calibration constant representing the system optical efficiency, A is the effective collecting area of the optics, D is the distance from the CCD to the laser beam, Tt and Tr are the total atmospheric (aerosol and molecular) transmittances from the laser to the altitude z and from the altitude z along the slant path to the CCD receiver, respectively, β(z,θ) is the total atmospheric side-scattering coefficient, and ∆θ is the FOV of a pixel. Compared to the backscattering lidar equation, there are two different aspects: (I) the received signal has no range-square dependence, and (II) there is no overlap factor. Without the overlap problem, the side-scattering lidar is especially suitable for detecting the aerosol vertical profile in the near range [8,10].
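A minimal forward-model sketch of this equation, assuming a vertical beam, a horizontally homogeneous atmosphere for the slant-path transmittance, and illustrative values for the system constants K, A, D and ∆θ:

import numpy as np

def side_scatter_signal(z, beta_side, sigma, P0=1.0, K=1.0, A=0.01, D=30.0, dtheta=1e-4):
    # Simulated CCD side-scattering signal for heights z (m, NumPy array), side-scattering
    # coefficient beta_side(z) (m^-1 sr^-1) and extinction sigma(z) (m^-1).
    tau = np.concatenate(([0.0], np.cumsum(0.5 * (sigma[1:] + sigma[:-1]) * np.diff(z))))
    Tt = np.exp(-tau)                                    # transmittance laser -> height z (vertical path)
    sin_alpha = z / np.sqrt(z**2 + D**2)                 # elevation of the scattering point seen from the CCD
    Tr = np.exp(-tau / np.clip(sin_alpha, 1e-6, None))   # slant-path transmittance height z -> CCD
    return P0 * K * A * beta_side * dtheta * Tt * Tr / D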
As to the backscattering lidar, there are two unknown variables: the aerosol extinction coefficient and the backscattering coefficient. Fernald [11] proposed an iterative inversion solution for the backscattering lidar, which is not suitable for the side-scattering lidar due to the six unknown variables in the side-scattering lidar equation, i.e., the relative phase functions and the backscattering and extinction coefficients for aerosols and molecules.
Using the relative aerosol phase function, our group has developed a numerical inversion method for the side-scattering lidar [12]. In order to use the backward integral to reduce inversion errors, the reference point in that method is selected at about 1 km altitude. However, a shortcoming lies in the fact that the aerosol backscattering coefficient at about 1 km altitude, usually obtained simultaneously from a backscattering lidar, is not easy to get, especially in foggy and hazy weather conditions. Fortunately, the aerosol backscattering coefficient at the ground can be obtained easily even in foggy and hazy weather. So in this letter, using the aerosol backscattering coefficient at the ground as a restriction condition, we propose an iterative inversion method for the side-scattering lidar to retrieve the aerosol backscattering coefficient based on the backward integral. Building on the previous method in the literature [12], our new inversion procedure yields the solution for the aerosol backscattering coefficient profile.
RESULTS
In order to check the convergence of this new inversion method, numerical simulations have been carried out for both the backward and forward integral directions. Given the simulated aerosol backscattering coefficient profile shown in Fig. 1, the simulated side-scattering lidar signal can be derived from the side-scattering lidar equation (1).
Fig. 1 Retrieval of the aerosol backscattering coefficient profile using the forward integral
As to the forward integral, when we select the reference point at 0.01 km altitude, the original value of the aerosol backscattering coefficient at the reference point is 0.0033 km⁻¹ sr⁻¹ in Fig. 1.
Supposing the aerosol backscattering coefficient at the reference point is biased to 1.10, 1.05, 0.95, and 0.90 times its original value, respectively, the retrieved aerosol backscattering coefficient profiles are calculated as shown in Fig. 1.
Fig. 2 Retrieval of the aerosol backscattering coefficient profiles using the backward integral
As to the backward integral, the reference point is selected at 3.00 km altitude, and the original value of the aerosol backscattering coefficient at the reference point is 0.00024 km⁻¹ sr⁻¹ in Fig. 2. Supposing the aerosol backscattering coefficient at the reference point is biased to 1.20, 1.10, 0.90, and 0.80 times its original value, respectively, the retrieved aerosol backscattering coefficient profiles are calculated as shown in Fig. 2.
It should be noted that the backward integral is convergent, whereas the forward integral is not always convergent, as shown in Fig. 1 and Fig. 2. For the forward integral, the larger the aerosol backscattering coefficient and the larger the bias at the reference point, the worse the convergence. The simulation calculations indicate that the backward integral is the reasonable direction for the side-scattering lidar inversion.
One measurement by the side-scattering lidar system is presented in the following. In the evening of February 13, 2015, the sky was overcast. The side-scattering lidar and a visibility instrument were operated together at the same place. The visibility instrument was on top of a building at 16 m altitude. At 19:00, the visibility was 3.87 km; the corresponding aerosol extinction coefficient was 1.01 km⁻¹ and the corresponding aerosol backscattering coefficient was 0.024 km⁻¹ sr⁻¹. Taking the aerosol backscattering coefficient of 0.024 km⁻¹ sr⁻¹ at 16 m altitude as the restriction condition, the aerosol backscattering coefficient profile was calculated from the side-scattering lidar equation using the new method, as shown in Fig. 3(a) (black line). The red line in Fig. 3(a) is the aerosol backscattering coefficient profile from the previous method [12]. In summary, a new method for the side-scattering lidar to retrieve the aerosol backscattering coefficient is proposed. This new method solves an important problem: retrieving the aerosol backscattering coefficient without using an expensive backscattering lidar. Numerical simulations and comparison experiments indicate that this method is feasible and particularly well suited to foggy and hazy weather conditions.
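The quoted ground values are consistent with the standard Koschmieder relation between visibility and extinction, σ = 3.912/V, combined with an assumed extinction-to-backscatter (lidar) ratio of roughly 42 sr; a short check:

visibility_km = 3.87
sigma = 3.912 / visibility_km        # Koschmieder relation -> about 1.01 km^-1
lidar_ratio_sr = 42.0                # assumed extinction-to-backscatter ratio
beta = sigma / lidar_ratio_sr        # -> about 0.024 km^-1 sr^-1, the ground reference value
print(sigma, beta)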
Fig. 3 Retrieved profiles of the aerosol backscattering coefficient from the two methods, at 19:00 LST on February 13, 2015
|
v3-fos-license
|
2023-08-07T15:31:26.006Z
|
2023-08-03T00:00:00.000
|
260660344
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fcvm.2023.1207869/pdf",
"pdf_hash": "7fb93722d093610e2cf52d31fc274865252143c8",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42435",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "c3cb951331b46ad2d8dc96b2c62427151c4fb483",
"year": 2023
}
|
pes2o/s2orc
|
New insights learned from the pulmonary to systemic blood flow ratio to predict the outcome in patients with hypoplastic left heart syndrome in the pre-Glenn stage: a single-center study
Background To the best of our knowledge, no study has been made until now to determine whether the ratio between pulmonary and systemic blood flow (Qp/Qs) in the pre-stage II (PS2) or pre-Glenn stage can predict the outcome in patients with hypoplastic left heart syndrome (HLHS) who underwent Norwood (NW) palliation. Patients and methods From January 2016 to August 2022, 80 cardiac catheterizations in 69 patients with HLHS in NW palliation stage with modified Blalock–Taussig shunt (MBTS) were retrospectively recruited. The Qp/Qs was measured under stable conditions using the Fick formula. None of the patients were intubated. Patients were divided into two groups: Group 1 included patients who underwent planned cardiac catheterization (n = 56), and Group 2 had unplanned examination (n = 13), in which the indication for cardiac catheterization was desaturation in 11 patients and pulmonary over-circulation in two. The composite primary outcome was defined as accomplishing the planned operations (Glenn and Fontan) with freedom from death and reoperation, referring to palliative therapy or heart transplantation. The secondary outcome was freedom from transcatheter intervention in MBTS or pulmonary arteries. Results The median follow-up was 48 months (range 6–72 months). The median value of Qp/Qs in Group 1 was 1.75 (range 1.5–2.2). In Group 2, the 11 patients with desaturation, the median value of Qp/Qs was 1.25 (range 0.9–1.45). The two patients with suspected pulmonary overcalculation showed Qp/Qs of 2.3 and 2.5, respectively; a reduction of the shunt size was required. Approximately 96.4% of patients in Group 1 achieved the primary outcome compared with only 30.7% in Group 2. The need for reintervention was 1.8% in Group 1 compared with 61.3% in Group 2. There is a significant relationship between Qp/Qs and the impaired outcome (death, palliative therapy, or heart transplantation) with a p-value of 0.001, a relative risk factor of 3.1, and a 95% confidence interval of 1.4–7.1. No significant relationship between the Qp/Qs and the size of MBTS (p-value of 0.073) was noted. Conclusion The Qp/Qs in PS2 can predict outcomes in patients with HLHS in Norwood stage with MBTS. The Qp/Qs between 1.5 and 2.2 with a median of 1.75 seems to be optimal in the patients in PS2. Qp/Qs of <1.5 is associated with pulmonary stenosis, shunt stenosis, and pulmonary hypertension.
Introduction
Hypoplastic left heart syndrome (HLHS) is the most common lethal cardiac malformation in newborns. Norwood (NW) palliation stage I with either a modified Blalock-Taussig shunt (MBTS) or a Sano shunt is considered initial palliation in these patients. This procedure includes the connection of the divided main pulmonary artery to a reconstructed aortic arch, the creation of a shunt between the right subclavian artery and the pulmonary artery (MBTS), or a conduit between the right ventricle and the pulmonary artery (Sano shunt), ligation of the ductus arteriosus, and atrial septectomy. Several studies were undertaken to identify the optimal value of the pulmonary to systemic blood flow ratio (Qp/Qs), where the saturation and the hemodynamic situation were in the optimal range. Most of these studies were performed intraoperatively (1) and shortly after the operation during the hospital stay in the intensive care unit (ICU) (1)(2)(3)(4)(5)(6)(7).
Other studies (8, 9) have compared the hemodynamics of the Sano shunt and the MBTS in pre-stage II (PS2) palliation and showed that the Qp/Qs was lower in patients with the Sano shunt compared with those operated with MBTS (10). Migliavacca et al. (8, 9, 11) showed in 2000 that a Qp/Qs of 1 resulted in optimal O2 delivery in patients with a median age of 5 months.
Our current study attempts to determine whether the pulmonary to systemic blood flow ratio in PS2 can predict the outcome in the patients who underwent NW palliation with MBTS. In addition, we try to find the range of Qp/Qs values, in which the patients in this cohort were hemodynamically stable and highlight the pathologic values of Qp/Qs in some pathologic situations, such as pulmonary overcirculation, pulmonary hypertension (PHT), shunt stenosis, and pulmonary stenosis.
Patients and methods
Sixty-nine patients were retrospectively recruited in our center from 2016 to 2022. We performed 80 examinations on the 69 patients under sedation.
All cardiac catheterizations were done under sedation. Patients in whom the measurements were incomplete and those who suffered from sedation-related respiratory or hemodynamic compromise (n = 12), which could affect the outcome of the study, were excluded. Because only a small number of patients were operated with a Sano shunt in our center from 2016 to 2022, these patients were also excluded from the current report.
The saturations were measured in the aorta (Ao-Sat), pulmonary vein (PV-Sat), inferior vena cava (IVC-Sat), and superior vena cava (SVC-Sat). The mixed venous saturation was calculated as follows: SV-Sat = (3 × SVC-Sat + IVC-Sat)/4. The pulmonary pressure was measured using an angiographic catheter, GLIDECATH® (Terumo, Radifocus GLIDECATH™, non-taper angle, 65 cm, 4 F), which was introduced through the shunt into the pulmonary arteries. The end-diastolic pressure of the systemic right ventricle was documented in all patients, as well as the hemoglobin (HB) and the hematocrit at the time of catheterization. The size of the MBTS, shunt stenosis, pulmonary stenosis, associated major aortopulmonary collateral arteries (MAPCAs), and the medication for heart failure therapy during cardiac catheterization (CC) were documented.
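With these saturations, the Fick principle gives the shunt ratio; in a completely mixing single-ventricle circulation with an MBTS, the pulmonary arterial saturation can be taken equal to the aortic saturation, so Qp/Qs = (Ao-Sat − SV-Sat)/(PV-Sat − Ao-Sat). A minimal sketch with illustrative (not patient) values:

def qp_qs(ao_sat, svc_sat, ivc_sat, pv_sat):
    # Pulmonary-to-systemic flow ratio by the Fick principle in a shunted single-ventricle circulation.
    # Pulmonary arterial saturation is assumed equal to aortic saturation (complete mixing, MBTS).
    sv_sat = (3 * svc_sat + ivc_sat) / 4          # mixed venous saturation
    return (ao_sat - sv_sat) / (pv_sat - ao_sat)

# Illustrative values: qp_qs(ao_sat=80, svc_sat=48, ivc_sat=52, pv_sat=96) gives roughly 1.9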
The examination was performed planned in 56 patients and unplanned in 13 due to desaturation (n = 11) or increased signs of pulmonary over-circulation (n = 2).
To analyze the optimal shunt flow, the patients in this study were divided into two groups: Group 1 included patients who underwent routine cardiac catheterization in preparation for the next-stage operation, and Group 2 included patients who underwent an unplanned, essentially emergency, cardiac catheterization in PS2.
The composite primary outcome of this study was freedom from all of the following: death, reoperation, referral to palliative therapy, or heart transplantation. The secondary outcome was freedom from transcatheter reintervention in the MBTS or pulmonary arteries.
During follow-up, we documented the following events: death, need for a shunt stent, need for a shunt clip or shunt revision, time of Glenn and Fontan operation, and the need for heart transplantation or palliative therapy.
Statistical analysis
All statistical analyses were performed using SPSS version 22 (IBM). Continuous variables were reported as mean ± standard deviation (SD) and categorical variables as counts (percentages). The unpaired Student's t-test was used to compare the means of continuous variables between the two categories, and the chi-square test was used to compare categorical variables. Odds ratios (ORs) with 95% confidence intervals (95% CIs) were calculated for the following parameters to assess differences between Group 1 and Group 2: death in the pre-Glenn stage (PGS) or the pre-Fontan stage, failure to reach the Glenn or Fontan operation with referral to palliative therapy or heart transplantation, need for shunt revision, and reoperation for pulmonary artery or aortic arch reconstruction in PGS.
A p-value of 0.05 was set as the threshold for statistical significance. Kaplan-Meier survival curve analysis of the two groups was performed.
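To make the effect-size calculations explicit, the sketch below computes an odds ratio and a relative risk with Wald-type 95% confidence intervals from a generic 2 × 2 table; the counts are placeholders and are not the study data.

```python
import math


def two_by_two_effects(a, b, c, d, z=1.96):
    """a, b = events / non-events in one group; c, d = events / non-events in the other.

    Returns the odds ratio and relative risk, each with a Wald-type 95% CI.
    """
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    or_ci = (math.exp(math.log(odds_ratio) - z * se_log_or),
             math.exp(math.log(odds_ratio) + z * se_log_or))

    relative_risk = (a / (a + b)) / (c / (c + d))
    se_log_rr = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    rr_ci = (math.exp(math.log(relative_risk) - z * se_log_rr),
             math.exp(math.log(relative_risk) + z * se_log_rr))
    return (odds_ratio, or_ci), (relative_risk, rr_ci)


# Placeholder counts only -- not the study data
print(two_by_two_effects(a=12, b=8, c=5, d=25))
```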
Ethical statement
The study complies with the Declaration of Helsinki (as revised in 2013). Owing to a purely retrospective study design, using available institutional clinical records, with an absence of the impact on the management of the patients included and completely anonymous data presentation, informed consent of the subjects (or their parents) and ethical approval were not obtained.
Group 1: patients who underwent planned cardiac catheterization in PS2
Sixty-five catheterizations were performed electively in 56 patients as a routine examination in PS2 (Table 1). The MBTS diameter was 4 mm in 15 patients and 3.5 mm in the remaining patients. The median age, weight, and body surface area (BSA) were 4.1 months, 5 kg, and 0.31 m², respectively. The median Qp/Qs was 1.75, with a median aortic saturation of 79.5% and a mean hemoglobin of 12 g/dl. The median pulmonary artery pressure (mPAP) was 12 mmHg, and the median end-diastolic pressure of the right ventricle (RV-EDP) was 9 mmHg. The median SVC-Sat, IVC-Sat, and PV-Sat were 48%, 52.3%, and 96%, respectively.
In six patients, MAPCAs needed to be occluded with coils; no change in the Qp/Qs before and after the occlusion was documented. At the time of catheterization, all patients were on our center's standard post-Norwood medication, which includes a beta-blocker and a cardiac glycoside (digoxin). Increased doses of beta-blocker and digoxin were needed in two patients, in whom the Qp/Qs was 2.1 and 2.2, respectively.
One patient needed shunt stenting due to apparent shunt stenosis without notable desaturation.
The median follow-up in this group was 48 months (range 6-72). In the midterm and long-term follow-up, two deaths were documented. The first patient had Kabuki syndrome; owing to his syndrome and his long postoperative stay in the ICU, he was never discharged from the hospital. The Glenn operation could not be performed during the hospital stay because of a chronic cytomegalovirus (CMV) infection, and he died 5 months after the NW procedure of respiratory failure. The second patient had a 3.5 mm MBTS implanted and was successfully brought to the Glenn operation; he died suddenly at home after an infection at the age of 3 years.
Group 2: patients who underwent unplanned cardiac catheterization in PS2
Twenty unplanned examinations were performed on 13 patients. The indication for cardiac catheterization was desaturation in 11 patients and increased signs of pulmonary over-circulation in two. The median age, weight, and BSA values were 5 months, 5.8 kg, and 0.31 m², respectively. The size of the MBTS was 3.5 mm in seven patients and 4 mm in six.
Unplanned catheterization due to desaturation
Sixteen examinations were unplanned in 11 patients due to desaturation ( Table 1). The diameter of the MBTS was 3.5 mm in seven patients and 4 mm in four.
The median Qp/Qs was 1.25, with a mean aortic saturation of 75% and a mean hemoglobin of 13 g/dl. The median mPAP was 15.5 mmHg, and the median RV-EDP was 12.6 mmHg. The median SVC-Sat, IVC-Sat, and PV-Sat were 52%, 52%, and 95%, respectively. Six patients had shunt stenosis: five needed shunt stenting, and one underwent the Glenn procedure 2 days after the catheterization. One patient had severe stenosis at the origin of the brachiocephalic trunk, which needed to be stented; later, the patient received a central shunt because of restenosis.
We have documented associated pulmonary stenosis in five patients and pulmonary hypertension (18-31 mmHg) in four.
During follow-up, four deaths were documented in PS2. The first patient (6 months old), who had multiple thrombotic events and a stent in the shunt, died of shunt occlusion. The second patient (8 months old), who also had a stent in the shunt, died suddenly; shunt occlusion was strongly suspected as the cause of death. The third death occurred in a patient (6 months old) with PHT in PS2. The fourth death occurred in a patient with severe stenosis of the brachiocephalic trunk (TBC), which was treated with a stent; because of restenosis, the shunt was revised to a central shunt. This patient was later operated on with a right Glenn (to the right pulmonary artery) and a left shunt (to the left pulmonary artery) and died in a palliative care center at 12 months.
Unplanned catheterization in PGS due to cardiac insufficiency
Two patients were referred to cardiac catheterization because of increased signs of pulmonary over-circulation: sweating, tachypnea, failure to gain weight, elevated saturation (>85%), and excessive diuretic need. The first patient had a 4 mm MBTS implanted, which was mildly clipped. Cardiac catheterization showed a Qp/Qs of 2.3 and severe stenosis in the aortic arch. Echocardiography showed significant tricuspid regurgitation. The patient was referred for surgery and underwent reconstruction of the aortic arch and the tricuspid valve. Three months after the tricuspid valve reconstruction, the patient died at 7 months of cardiac decompensation without having received the Glenn operation. The MBTS in the second patient was 4 mm, the Qp/Qs was 2.5, and a shunt clip was required. The patient was operated on during follow-up, and a Glenn anastomosis was established. At this stage, the patient developed severe heart failure due to massive collateral artery flow (3-4 mm in diameter) and a significantly reduced RV ejection fraction (EF of <25%); the Qp/Qs in this Glenn stage was 0.8. After most of the MAPCAs were closed, echocardiography showed only a moderately reduced RV-EF, and the patient is awaiting a Fontan operation.
Morbidity and mortality
There was a significant relationship between the Qp/Qs ratio and death, failure to reach the Glenn operation, or referral to palliative therapy or heart transplantation (HTx) (p = 0.001; relative risk (RR) 3.1, 95% CI 1.4-7.1; odds ratio 65, 95% CI 10.4-409).
Discussion
The challenge after Norwood palliation with an MBTS is to reach a balance between the pulmonary and systemic circulations, so as to avoid pulmonary over-circulation, an imbalance in systemic oxygen delivery, and the negative effect of aortic diastolic runoff on coronary perfusion, which is one of the leading causes of morbidity and mortality early after palliation (5,6). Many studies have been published to better understand the hemodynamics of Norwood palliation in patients with HLHS intraoperatively and shortly after the operation. Some results suggested that the Qp/Qs should be over 1.5 for an improved course early after the palliation stage (6); another study found that the circulation in these patients was balanced when the Qp/Qs equaled 1.4:1 (3).
Other authors compared the hemodynamics of Norwood stage I palliation with an MBTS and with a Sano shunt, both early after the palliation and in the PGS. The results showed that the Qp/Qs was higher in patients with an MBTS than in those who underwent a Sano shunt, and the end-diastolic RV pressure was lower with the Sano shunt than with the MBTS (8,9).
In the current report, we have focused on the patients who underwent Norwood with MBTS in PS2, trying to find the optimal Qp/Qs ratios at which hemodynamic stability was observed and analyzing if the hemodynamics of the patients who underwent catheterization in PS2 can predict the outcome in the pre-Fontan stage (PS3).
The current study showed that patients in Group 1, who had a Qp/Qs between 1.5 and 2.2 (median 1.75), were clinically stable, with adequate saturation and no or minimal cardiac insufficiency, compared with those in Group 2, who had a Qp/Qs under 1.5 or above 2.2. The mean pulmonary artery pressure and the RV-EDP were higher in Group 2 than in Group 1. The need for shunt or pulmonary stenting, shunt revision, shunt clipping, or pulmonary artery enlargement or aortic arch reconstruction in PGS was 69%, compared with 1.6% in Group 1. At a median follow-up of 48 months, 54% of patients in Group 2 had died in PGS or been referred to palliative therapy, compared with 3.3% in Group 1.
This study showed a significant relationship between morbidity and mortality and the Qp/Qs. The survival analysis (Figures 1 and 2) demonstrates a clear separation between the two groups. A need for shunt stenting related to shunt stenosis was observed in patients with a Qp/Qs of <1.5, in whom desaturation was the indication for catheterization in PS2; this observation was also documented in patients with severe pulmonary stenosis. It is worth noting that the patients with pulmonary hypertension in PS2 showed a low Qp/Qs and failed to reach the Glenn stage. Severe heart failure with pulmonary over-circulation and the need for shunt clips were observed in two patients with a Qp/Qs of >2.2.
Although some centers seek to replace the routine cardiac catheterization before the Glenn operation with MRI, to evaluate the development of the pulmonary arteries and the lymphatic system and to calculate the Qp/Qs, MRI cannot calculate pulmonary vascular resistance, which is essential for evaluating the conditions in PS2 and PS3. Conversely, the calculation of the Qp/Qs in the catheterization laboratory can be limited in patients with sedation-related respiratory or hemodynamic instability and in those needing respiratory support or medications that affect the hemodynamic situation or the pulmonary vascular resistance.
Conclusion
The Qp/Qs ratio in PS2 in patients with HLHS who underwent Norwood stage I palliation with an MBTS can predict the outcome of these patients. The optimal Qp/Qs in PS2 in our cohort ranged between 1.5 and 2.2 (median 1.75), as morbidity and mortality were significantly higher when the Qp/Qs fell outside these limits.
Data availability statement
The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.
Author contributions
NM: conception and design of the article; collection, analysis, and interpretation of the data; and drafting of the article. PZ, MS, and NM: critical revision of the article. All authors contributed to the article and approved the submitted version.
Humanin analogue, HNG, inhibits platelet activation and thrombus formation by stabilizing platelet microtubules.
Abstract HNG, a highly potent mutant of the anti‐Alzheimer peptide‐humanin, has been shown to protect against ischaemia‐reperfusion (I/R) injury. However, the underlying mechanism related to platelet activation remains unknown. We proposed that HNG has an effect on platelet function and thrombus formation. In this study, platelet aggregation, granule secretion, clot retraction, integrin activation and adhesion under flow conditions were evaluated. In mice receiving HNG or saline, cremaster arterial thrombus formation induced by laser injury, tail bleeding time and blood loss were recorded. Platelet microtubule depolymerization was evaluated using immunofluorescence staining. Results showed that HNG inhibited platelet aggregation, P‐selectin expression, ATP release, and αIIbβ3 activation and adhesion under flow conditions. Mice receiving HNG had attenuated cremaster arterial thrombus formation, although the bleeding time was not prolonged. Moreover, HNG significantly inhibited microtubule depolymerization, enhanced tubulin acetylation in platelets stimulated by fibrinogen or microtubule depolymerization reagent, nocodazole, and inhibited AKT and ERK phosphorylation downstream of HDAC6 by collagen stimulation. Therefore, our results identified a novel role of HNG in platelet function and thrombus formation potentially through stabilizing platelet microtubules via tubulin acetylation. These findings suggest a potential benefit of HNG in the management of cardiovascular diseases.
diseases, such as AZD. 12 Resting platelets in circulation maintain their discoid shape. Upon stimulation, platelets are activated and secrete intracellular granules that contain pro-inflammatory and pro-coagulant factors, including Aβ40. The deposition of Aβ peptides, mainly Aβ40, in the vessel wall causes vascular destruction and contributes to the severity of AZD pathology. 13 Activated platelets are the main source of circulating Aβ40. 14,15 Aβ40 in turn activates platelets and enhances platelet aggregation via integrin αIIbβ3 and intracellular signalling pathways, further aggravating cerebral disease. 13 The protective role of HNG led us to propose that platelets might be a potential target of HNG. To date, whether and how HNG affects platelet function has not been reported. Moreover, the role of anti-AZD agents in thrombotic disorders is far less understood.
Given the emerging signalling nexus between AZD and thrombosis, it is tempting to assume a common therapeutic target. We therefore hypothesized that HNG, a neuroprotective peptide, may inhibit platelet activity. In the current study, we demonstrated that HNG inhibits platelet activation and thrombus formation by stabilizing microtubules and enhancing tubulin acetylation.
| Animals
Wild-type (WT) C57BL/6J mice were housed in a specific pathogen-free environment with constant day/night light cycles and were given free access to food and water. Mice aged 8 to 10 weeks were used for all experiments. The experimental protocols were approved by the Institutional Animal Care and Use Committee of Soochow University and conformed to the National Institutes of Health Guide for the Care and Use of Laboratory Animals (National Institutes of Health, 8th edition, 2011). 16
| Isolation of human platelets
Venous blood was drawn from healthy donors, anticoagulated with 1/5 volume ACD buffer and centrifuged at 900 r.p.m. for 20 minutes to yield platelet-rich plasma (PRP), which was applied to a column packed with Sepharose 2B resin. Platelets were eluted, counted, pooled and adjusted to a concentration of 2-3 × 10⁸/mL with Tyrode's buffer and allowed to equilibrate at RT for 30 minutes before analyses. All human studies were in accordance with the Declaration of Helsinki and approved by the ethics committee of Soochow University. 17
For clot retraction, human platelets were stimulated with thrombin (1U) and recorded at an indicated time-point using a camera. The clot area was quantified by the ratio of clot area to platelet suspension area at different time-points.
| Adhesion of platelets under flow conditions
Platelet adhesion under shear was measured using a BioFlux200 TM flow chamber system (Fluxion Biosciences Inc) according to the manufacturer's instructions. Channels in BioFlux TM plates were primed, coated with type I collagen (200 µg/mL, Chrono-log) for 1 h at RT and blocked with 0.5% BSA/PBS. Sodium citrate-anticoagulated human whole blood was pre-incubated with HNG (10 μM), labelled with calcein-AM (10 μM, Molecular Probes) for 30 minutes and perfused through the channels at 10 dyne/cm 2 . Adherent platelets were quantified by fluorescence recorded under a ZEISS inverted microscope equipped with a monochrome digital camera.
| Platelet spreading and microtubule depolymerization
Glass coverslips were precoated with fibrinogen. After incubation with HNG or vehicle at 37°C for 10 minutes, platelets were allowed to spread on the coated coverslips in 48-well plates for 60 minutes. After washing, adherent platelets were fixed with 4% paraformaldehyde, permeabilized with 0.1% Triton X-100 and stained with TRITC-labelled phalloidin at RT for 2 h. Coverslips were mounted on glass slides, and images were obtained under a fluorescence microscope (Olympus, FSX100) and analysed using ImageJ software (NIH).
For the platelet microtubule acetylation assay, resting platelets or microtubule depolymerization reagent, nocodazole-treated platelets after incubation with HNG (10 μM) at 37°C for 10 minutes were fixed in microtubule-preserving fixative (PHEM) buffer supplemented with 4% PFA (1:1), followed by centrifugation on poly-L-lysine-coated coverslips. Alternatively, platelets were incubated with HNG and then allowed to adhere to fibrinogen-coated slides for 1 h before fixation. Samples were stained using an anti-β1-tubulin (T4026, clone TUB 2.1; Sigma-Aldrich) or anti-acetylated tubulin antibody (T6793; Sigma-Aldrich) and examined using a Leica TCS SP5 confocal microscope (Leica Microsystems). Assays were performed in biological triplicates. The number of platelets with intact microtubule rings was analysed using ImageJ.
| Tail bleeding, blood loss and jugular vein bleeding
Tail bleeding times were determined as previously described. 18 Briefly, HNG (final concentration in whole blood of about 10 μM) or saline was administered to mice through the jugular vein. Ten minutes later, the mice were placed on a heating pad with their tails protruding. After transection of the distal 3 millimetres, the tail was immediately immersed in 13 ml of 0.9% sodium chloride for 30 minutes at 37°C, and the time to the first cessation of bleeding was recorded. After 30 minutes, each tube was centrifuged at 550 g for 5 minutes. After removal of the supernatant, a lysis buffer (NH4Cl 150 mM, KHCO3 1 mM, EDTA 0.1 mM, pH 7.2) was added to the clot to a final volume of 2 ml. The tubes were vortexed, and the absorbance at 540 nm was measured.
For the jugular vein bleeding time test, the other jugular vein was stabbed using a needle and then washed with 0.9% sodium chloride.
The time to bleeding cessation was recorded.
| Laser-induced cremaster arterial thrombus formation by intravital microscopy
Wild-type male mice aged 8-12 weeks were anaesthetized with 1% sodium pentobarbital via intraperitoneal (i.p.) injection. A cannula was inserted into the jugular vein to infuse HNG (10 μM) or saline and 3,3'-dihexyloxacarbocyanine iodide (DIOC6, 200 μM) before laser injury. Ten minutes later, arterioles (30-40 µm) were injured using a Laser Ablating system (Intelligent Imaging Innovation). Digital images were captured by a Zeiss camera for 4 minutes after vessel wall injury. Data from 26 thrombi per group were analysed using Slidebook version 6.0 software.
| Western blotting
Pre-incubated platelets (250 µl, 2 × 10⁸/ml) were stimulated with agonist under stirring. At the indicated time-points, the reaction was terminated by adding radioimmunoprecipitation assay (RIPA) buffer supplemented with protease and phosphatase inhibitor cocktails, and the samples were lysed for 30 minutes on ice. After centrifugation, supernatants were collected and protein concentrations were determined using the BCA method. Samples were separated on a 10% SDS-PAGE gel and incubated with the indicated primary antibodies and corresponding fluorescence-conjugated secondary antibodies (goat anti-rabbit IRDye 800CW, goat anti-mouse IRDye 800CW; CST). Fluorescence intensity was recorded using an Odyssey infrared imaging system (LI-COR Biosciences). Densitometry analysis was performed using ImageJ software.
| Statistics
Data are expressed as means ± standard errors of the mean (SEMs) or medians with interquartile ranges (IQRs). Differences among multiple groups were compared using one-way analysis of variance (ANOVA) with Dunnett's multiple comparisons test. A two-tailed Student's t test or Mann-Whitney test was used to compare differences in variables with or without a normal distribution, respectively. Statistical analysis was performed using Prism 8.0 software (GraphPad Software). P values <.05 were considered statistically significant.
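As an illustration of the comparisons described above, the following sketch applies a one-way ANOVA and the two-group tests to synthetic data (all values are invented for illustration only); Dunnett's multiple-comparison step is available as scipy.stats.dunnett in recent SciPy releases, if needed.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic aggregation values (%), for illustration only
vehicle = rng.normal(70, 8, size=6)
hng_4um = rng.normal(60, 8, size=6)
hng_10um = rng.normal(45, 8, size=6)

# Overall group effect across the three conditions
f_stat, p_anova = stats.f_oneway(vehicle, hng_4um, hng_10um)

# Two-group comparisons for normally and non-normally distributed variables
t_stat, p_t = stats.ttest_ind(vehicle, hng_10um)        # two-tailed unpaired t test
u_stat, p_u = stats.mannwhitneyu(vehicle, hng_10um)     # Mann-Whitney test

print(p_anova, p_t, p_u)
```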
| HNG inhibits platelet aggregation and granule release
To test the effect of HNG on platelet function, isolated human platelets were pre-incubated with HNG or vehicle and stimulated with various agonists. The results showed that HNG inhibited platelet aggregation stimulated by collagen, convulxin, collagen-related peptide (CRP), thrombin or ADP (Figure 1A). Quantification of the maximal inhibition of platelet aggregation at 10 μM HNG showed that the inhibition was significant compared with the vehicle (collagen, P < .01; convulxin, P < .01; CRP, P < .001; thrombin, P < .05). As collagen, convulxin and CRP share the GPVI signalling pathway and caused a similar inhibition of platelet aggregation (Figure 1A), we used collagen or CRP in the following experiments.
As HNG inhibited ADP- and thrombin-induced platelet aggregation, thrombin was also used in the following experiments.
Activated platelets release α-granules containing P-selectin, a PSGL-1 ligand mediating platelet-leucocyte and leucocyte-endothelial interactions. 19,20 To determine whether HNG affects this process, human platelets pre-incubated with 10 μM HNG were stimulated with the platelet agonists CRP and thrombin, and cell-surface P-selectin expression was measured using flow cytometry (Figure 1B,C).
Compared with vehicle, HNG significantly inhibited the increase in P-selectin in CRP- (Figure 1B, P < .001) and thrombin-stimulated (Figure 1C, P < .0001) platelets. Next, we measured the secretion of ATP, a strong pro-inflammatory factor and a hallmark of platelet dense-granule release. Similarly, HNG inhibited ATP release from collagen- and convulxin-stimulated platelets (Figure 1D, P < .05). Moreover, we showed that HNG did not affect platelet viability and had no cytotoxicity even at a high dose (20 μM) (Figure S1). Therefore, HNG inhibited platelet aggregation and granule secretion from activated platelets without affecting platelet viability.
| HNG inhibits platelet integrin α IIb β 3 activation, spreading and clot retraction
Given its inhibitory effect on platelet aggregation and secretion, we then asked whether HNG affects the 'inside-out' conformational change of integrin αIIbβ3, a pivotal glycoprotein required for the later phase of platelet activation and cytoskeletal modification. 21 After preincubation with 10 μM HNG or vehicle, human platelets were incubated with FITC-conjugated soluble fibrinogen or FITC-conjugated PAC-1, and binding after agonist stimulation was measured by flow cytometry (Figure 2A-C).

To assess the role of HNG in the 'outside-in' signal transduction, we evaluated platelet spreading on immobilized fibrinogen by labelling the actin skeleton with phalloidin. Compared with vehicle or scramble-HNG, HNG reduced the average spreading area of adherent platelets (Figure 2D, P < .01). Integrin αIIbβ3-mediated 'outside-in' signals lead to platelet clot retraction. 22 To test the role of HNG in platelet clot retraction, human platelets were incubated with fibrinogen and thrombin. Compared with vehicle or scramble-HNG, HNG delayed platelet clot retraction and significantly inhibited clot retraction 15 min after thrombin stimulation (Figure 2E, P < .05).
| HNG inhibits thrombus formation without impairing haemostasis
We showed that HNG inhibited platelet function under static and flow conditions. We next examined thrombus formation in vivo. Analysis of the area under the thrombus fluorescence curve (Figure 3D) and the peak fluorescence intensity (Figure 3E) showed that HNG attenuated thrombus formation after laser injury of the cremaster arterioles (P < .05). Furthermore, we evaluated the impact of HNG on normal haemostasis. The tail bleeding time in mice was recorded; HNG neither prolonged the tail bleeding time nor increased the blood loss compared with the vehicle group (Figure 3F, P > .05). In addition, there were no significant changes in the mouse jugular vein bleeding time in the HNG group compared with the control group (Figure 3G, P > .05), suggesting a minimal effect of HNG on blood coagulation.
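For readers unfamiliar with these summary metrics, the area under a fluorescence-time curve and its peak can be computed as in the sketch below; the curve is synthetic and purely illustrative.

```python
import numpy as np

t = np.linspace(0, 240, 241)                      # seconds after laser injury
fluo = 1e4 * np.exp(-((t - 90) / 60) ** 2)        # synthetic fluorescence-time curve

auc = np.trapz(fluo, t)                           # area under the curve
peak = fluo.max()                                 # peak fluorescence intensity
print(auc, peak)
```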
| HNG stabilizes platelet microtubules by enhancing tubulin acetylation and inhibits AKT and ERK phosphorylation downstream of HDAC6
Tubulin acetylation has been recognized as an important event in platelet activation, co-ordinating cytoskeletal changes and signal transduction. Inhibition of the deacetylase HDAC6 inhibits platelet aggregation and Rho-kinase activation. 23 To determine whether HNG stabilizes platelet microtubules by enhancing tubulin acetylation, we evaluated the number of microtubule coils (the ring structure designated as the marginal band) and tubulin acetylation in platelets. As shown in Figure 4A, resting platelets maintained an intact marginal band, whereas activation of platelets led to the depolymerization of microtubules. After preincubation with HNG, a higher number of microtubule ring structures was observed in fibrinogen-stimulated platelets compared with the vehicle (Figure 4A, P < .001).

FIGURE 1 HNG impairs platelet aggregation and granule release. (A) Gel-filtered human platelets were pre-incubated with HNG (4 μM, 8 μM or 10 μM) or vehicle solution (ddH2O) for 10 minutes and stimulated with collagen 2 µg/mL, thrombin 0.01 U/mL, convulxin 0.2 nM, ADP 10 μM and CRP 0.2 µg/mL, respectively. Statistical data for 10 μM HNG and vehicle are shown. *P < .05, **P < .01, ***P < .001, ****P < .0001, N > 3, unpaired t test. (B, C) The surface expression of P-selectin was analysed by flow cytometry. 10 μM HNG or scramble-HNG was pre-incubated with gel-filtered human platelets for 10 minutes at 37°C, which were then stimulated with 2 μg/mL CRP (B) or 0.05 U thrombin (C). A representative fluorescence histogram of PE-conjugated P-selectin is shown. Statistical data were analysed using geometric mean fluorescence and the percentage of gated cells. ***P < .001, ****P < .0001, ordinary one-way ANOVA, Dunnett's multiple comparisons test. (D) ATP released from platelets was monitored using an aggregometer. After incubation with HNG or vehicle, gel-filtered platelets were induced with 2 µg/mL collagen or 0.2 nM convulxin. An ATP standard (final concentration: 2 nM) was used to normalize the released ATP. *P < .05, mean ± SEM, N > 3, paired t test.
Furthermore, nocodazole, a microtubule depolymerization reagent, was used to depolymerize microtubules without activating platelets. The results showed that HNG-treated platelets preserved the ring structure compared with the vehicle (Figure 4B, P < .05), suggesting that HNG increases platelet microtubule stability. When stained with an anti-acetylated tubulin antibody, preincubation with HNG preserved the microtubule ring structure by maintaining tubulin acetylation in fibrinogen-, nocodazole-, CRP-, or thrombin-stimulated platelets (Figure 4C,D and Figure S2).

FIGURE 2 HNG inhibits platelet integrin αIIbβ3 activation, spreading and clot retraction. (A-C) Human platelets were pre-incubated with 10 μM HNG, scramble-HNG (10 μM) or vehicle for 10 minutes at 37°C, and the fluorescence of FITC-conjugated fibrinogen binding (A) or FITC-conjugated PAC-1 (B, C) was measured by flow cytometry after stimulation with 1 μg/mL CRP or 0.05 U thrombin. A representative histogram is shown. Statistical data were analysed using geometric mean fluorescence or the percentage of gated cells. **P < .01, ****P < .0001, ordinary one-way ANOVA, Dunnett's multiple comparisons test. (D) Effect of HNG on platelet spreading. 10 µM HNG- or scramble-HNG-treated platelets were placed on fibrinogen-coated glass coverslips for 1 h at 37°C. After washing with PBS to remove nonadherent platelets, adhered platelets were stained with TRITC-labelled phalloidin. Images were obtained with an Olympus fluorescence microscope. Representative images are shown. Statistical data were calculated from the mean of the average surface area of individual platelets. *P < .05, **P < .01, N > 3, ordinary one-way ANOVA, Dunnett's multiple comparisons test. (E) HNG impairs platelet clot retraction. After incubation with 10 µM HNG or vehicle, human platelets were stimulated with 20 µg/mL fibrinogen and 1 U thrombin and recorded at the indicated time-points using a camera. The clot area was quantified by the ratio of clot area to platelet suspension area at different time-points. *P < .05, unpaired t test.
As previous studies have suggested that defective integration of tubulin subunits may alter microtubule stability, we further evaluated the potential impact of HNG on the incorporation of β1-tubulin with α-tubulin. As shown in Figure S3, most β1-tubulin colocalized with α-tubulin in HNG-treated platelets, and a persistent marginal ring was present. Taken together, these data suggest that HNG is a positive regulator of tubulin acetylation, although the microtubule composition is unchanged.
HDAC6 has been identified as the canonical enzyme responsible for deacetylating platelet tubulin. 24 It acts as a signal mediator connecting early tyrosine kinase activation and downstream Rho GTPase signalling. Through the p21-activated kinase (PAK), the Rho GTPase RAC activates ERK1/2 and AKT in platelets, giving rise to reorganization of the platelet cytoskeleton. 23 To determine whether HNG inhibits the activity of HDAC6, we measured the phosphorylation of ERK1/2 and AKT in collagen-activated platelets. Consistent with the increased tubulin acetylation observed in Figure 4, platelets in the HNG group showed attenuated activation of ERK1/2 (Figure 5A, P < .01) and AKT (Figure 5B, P < .05) at 5 minutes after stimulation.
On the other hand, agonist-induced phosphorylation of PLC gamma 2 and P38MAPK was not changed by HNG ( Figure 5C,D). These data implied that HNG-mediated tubulin stabilization might be attributed to the inhibition of HDAC6.
| DISCUSSION
HNG is a derivative of humanin in which serine-14 is substituted by glycine, which potently enhances its neuroprotective activity. 1 In this study, we demonstrated that HNG inhibits platelet activation and thrombus formation, probably via stabilizing platelet microtubules. To the best of our knowledge, this is the first report of a link between HNG and platelet function.

FIGURE 3 HNG inhibits thrombus formation without impairing haemostasis. (A) 10 µM HNG inhibits flow-associated platelet adhesion on collagen (100 µg/mL). HNG- or vehicle-treated human whole blood was labelled with calcein-AM (10 µM) for 30 min and perfused through the channels at 10 dyne/cm². Live fluorescence was recorded. *P < .05, paired one-tail t test, fluorescence area under the curve (AUC), vehicle versus HNG. (B-E) HNG impaired laser-induced mouse cremaster arterial thrombus formation. HNG (10 µM) or saline was infused into the jugular vein of male C57 mice using a cannula. The thrombus was visualized by 3,3′-dihexyloxacarbocyanine iodide (DIOC6) staining and monitored in real time under an intravital microscope (B). The median thrombus fluorescence intensity curve (C), the area under the curve (D) and the peak fluorescence intensity (E) were analysed. *P < .05, mean ± SEM, number of thrombi: 25-26. (F-G) HNG (10 μM) or vehicle was administered to mice through the jugular vein. The tail bleeding time, blood loss (F) and jugular vein bleeding time (G) were recorded. NS, P > .05, unpaired t test.
For more than a decade, emerging evidence has suggested that HNG is a multifunctional peptide with antiapoptotic, anti-inflammatory, antioxidant and mitochondria-protective potencies. 5,7,25,26 Moreover, HNG has shown substantial benefits in animal models of atherosclerosis, stroke, diabetes, and cerebral and cardiac I/R. [27][28][29][30] Here, we provide new evidence that HNG inhibits platelet activation and thrombus formation. The antiplatelet effect of HNG was present regardless of the type of agonist, suggesting that the potential target is probably not confined to any single receptor pathway.
Subsequent functional analyses demonstrated that HNG promoted platelet microtubule stabilization. Consistently, HNG attenuated tubulin deacetylation and platelet shape change after activation.
Although microtubules are well-established drug targets in cancer, their roles in platelets remain far more elusive. 31 Microtubule-interfering agents showed consistent antimitotic functions in tumour cells, whereas their effects vary vastly in platelets. [32][33][34] For instance, colchicine disturbs microtubule assembly and inhibits platelet release, although changes in granule secretion were not observed in vinblastine-treated platelets. 35 An early study showed that the colchicine effect could be reversed by stabilizing platelet microtubules with D2O, which alone may enhance platelet aggregation induced by calcium ionophore. 36 Paclitaxel, another widely used microtubule-stabilizing drug, shows a concentration-dependent inhibition of platelet aggregation and secretion. 37 These controversial results may be attributed to the differences in pharmaceutic mechanisms or may be explained by mechanisms beyond microtubule stabilization. Additionally, some studies argued that microtubules might not be required for granule secretion. [38][39][40] However, TUBB1 knockout in vivo leads to spherical platelets, impaired aggregation and reduced granule secretion. Furthermore, a mutation impairing β1-tubulin assembly was also shown to reduce platelet dense granule secretion, aggregation and collagen adhesion. 41 Deletion of RanBP10, a β1-tubulin-binding protein, promotes microtubule stabilization and inhibits platelet aggregation and shape change, whereas adhesion and secretion were not affected.
Our results showed that HNG not only inhibited platelet activation but also suppressed granule secretion. In line with our findings in platelets, HNG has been shown to suppress oxidative stress in cardiomyocytes 25 and to inhibit ERK1/2 and AKT phosphorylation in neurons and the brain. 27,42 Phosphorylation of P38MAPK was not changed in either neurons or platelets treated with HNG. These data support the role of HNG in organ protection.

FIGURE 4 HNG stabilizes platelet microtubules from tubulin deacetylation. (A, B) Resting platelets, or platelets treated with the microtubule depolymerization reagent nocodazole, were incubated with 10 μM HNG, scramble-HNG or vehicle at 37°C for 10 minutes and centrifuged onto poly-L-lysine-coated coverslips. Alternatively, platelets were incubated with HNG and then allowed to adhere to fibrinogen-coated slides for 1 h at 37°C before fixation. Samples were stained with β1-tubulin (A, B) and acetylated tubulin (C, D) antibodies. Representative images are shown. Microtubule stability was analysed using the ratio of intact ring structures (A, B). *P < .05, ***P < .001, ordinary one-way ANOVA, Dunnett's multiple comparisons test.
For the first time, we showed that HNG might stabilize platelet microtubules. Our findings may help to explain both the acute and the chronic neuroprotection by HNG observed in myocardial I/R and AZD. First, HNG may ameliorate platelet hyperactivation induced by I/R, thereby functioning as a chaperone that prevents the misfolding, and thus the formation, of Aβ oligomers. The resulting lower circulating Aβ burden would reduce its deposition in the brain, which may subsequently attenuate microtubule depolymerization and tau hyperphosphorylation. The latter effect may be secondary to microtubule stabilization, as previous reports suggested that Taxol may also inhibit the tau hyperphosphorylation induced by Aβ. 43 Thus far, whether HNG directly affects tau remains to be elucidated. On the other hand, chronic infusion of HNG is likely to benefit both platelets and neurons, thereby offering sustained protection against neurodegeneration by simultaneously blunting the circulating and local pools of Aβ and other potentially detrimental releasates. Alternatively, the protective effect of HNG may be ascribed to a marked decrease in total Aβ levels, likely a consequence of the HNG-induced overexpression of the Aβ-degrading enzyme neprilysin. Neprilysin is an amyloid-β peptide (Aβ)-degrading enzyme that declines in the brain during ageing, leading to a metabolic Aβ imbalance. 44 However, the underlying mechanisms by which HNG mediates neprilysin activity remain unclear.
Interestingly, microtubules have long been known as a key factor in AZD and are recently emerging as potential targets for thrombotic disease. 45 Nevertheless, the available candidates for enhancing microtubule assembly are mostly chemotherapeutic agents or non-physiological synthetic peptides. [46][47][48] Their application tends to be hampered by potential cytotoxicity and immunoneutralization. In addition, a higher incidence of cardiovascular events is noted in the AZD population, calling for an efficient prophylactic intervention. 49 Unfortunately, clinical studies using aspirin have not shown any benefit in controlling AZD. 50 This could be due to aspirin-insensitive pathways that contribute to AZD pathology.
Mechanistically, inhibitors of integrin αIIbβ3 may be effective but are not intended for prolonged use, as life-threatening bleeding may occur. Dual antiplatelet therapy using aspirin and P2Y12 inhibitors appears to be an alternative solution by blocking multiple platelet pathways involved in AZD, albeit with increased bleeding risks and costs as limitations. 51 Thus, our finding of HNG as a dual microtubule-stabilizing and antiplatelet agent suggests that it may become a reasonable candidate for thrombotic comorbidities in patients with AZD. Alternatively, HNG may also yield immediate and long-lasting benefits in preserving organ function during and after cardiovascular I/R, probably by alleviating vascular inflammation and improving microcirculation. Future development of a more stable and affordable version of HNG will facilitate its translation into clinical application.

FIGURE 5 HNG inhibits AKT and ERK phosphorylation downstream of HDAC6. Gel-filtered human platelets were pre-incubated with 10 μM HNG for 10 minutes at 37°C and then stimulated with 2 μg/mL collagen for different times under stirring. After lysis with RIPA buffer, samples were analysed by Western blot, and ERK1/2 (A), AKT (B), PLCγ2 (C) and P38 (D) phosphorylation was detected using antibodies and quantified. *P < .05, **P < .01, t test.
In summary, the results presented in this study demonstrated that HNG inhibits platelet activation and thrombus formation, potentially through stabilizing microtubules. Additionally, the therapeutic role of HNG in cardiovascular comorbidities of AZD requires further evaluation.
A Flexible Sandwich Structure Carbon Fiber Cloth with Resin Coating Composite Improves Electromagnetic Wave Absorption Performance at Low Frequency
In order to improve the electromagnetic wave absorbing performance of carbon fiber cloth at low frequency and to reduce the secondary pollution caused by the shielding mechanism, a flexible sandwich composite was designed via a physical mixing coating process. It was composed of a graphene layer that absorbed waves, a carbon fiber cloth layer that reflected waves, and a graphite layer that absorbed the transmitted waves. The influence of graphene content on the electromagnetic and mechanical properties was studied by a control variable method. The defect polarization relaxation and dipole polarization relaxation of graphene, the interfacial polarization and electron polarization of graphite, the conductive network formed in the carbon fiber cloth, and the interfacial polarization between the parts combined to improve the impedance matching and the multiple reflections of the wave within the material. The study found that the sample with 40% graphene had the most outstanding absorbing performance: the minimum reflection loss was −18.62 dB at 2.15 GHz, a 76% improvement over the sample without graphene. The composite can be applied mainly in the field of flexible electromagnetic protection, such as stealth tents, protective covers for electronic boxes, and helmet materials for high-speed train drivers.
Introduction
With the wide application of various types of electronic equipment and communication facilities in many aspects of industrial production and daily life, the problem of electromagnetic pollution has become a matter of wide public concern [1,2]. Harmful electromagnetic waves can cause information leakage, interfere with the operation of electronic equipment, threaten human health, and shorten the survivability of weapons on the battlefield [3][4][5]. The preparation of electromagnetic protective materials has therefore become a research focus. Compared with wave-absorbing materials, shielding materials can cause secondary pollution, so researchers need to prepare materials whose protection relies more on absorption [6,7]. In the past few decades, much research has focused on wave-absorbing materials, but it has concentrated mainly on the Super High Frequency band; studies of wave-absorbing materials below the Super High Frequency band have been fewer and less satisfactory. However, a large number of electronic devices operate in these lower frequency bands [8], so research on wave-absorbing materials at lower frequencies has become very significant.
In recent years, researchers have studied many absorbing materials. Magnetic-loss metal materials belong to the major categories. They are characterized by high density,

Materials

The main experimental material was plain carbon fiber cloth, provided by Weiduowei Technology Co., Ltd., Tianjin, China. Other chemicals included graphite powder, graphene, polyurethane, a thickener, and a defoaming agent. The graphite powder (≥98.0%, Q/HG3991-88) was purchased from Tianjin Fengchuan Chemical Reagent Technology Co., Ltd., Tianjin, China; the graphene (fineness 5-15 µm, purity >95%) from Tianjin Kairuisi Fine Chemical Co., Ltd., Tianjin, China; the polyurethane (PU-2540) from Guangzhou Yuheng Environmental Protection Materials Co., Ltd., Guangzhou, China; the thickener (7011) from Guangzhou Dian Wood Composite Material Business Department, Guangzhou, China; and the defoaming agent (1502) from Wuxi Redwood New Material Technology Co., Ltd., Wuxi, China.
Preparation of Materials
Preparation of the carbon fiber cloth: The plain carbon fiber cloth was cut into samples of 50 × 25 cm and fixed on the needle plate of a blade coating machine (produced by Werner Mathis, a Swiss company, LTE-S87609 type), requiring that the base cloth be in a state of tension and with a smooth surface without wrinkles; a uniform tension should be applied on the carbon fiber each time.
Preparation of the coatings: The schematic preparation of the coating is shown in Figure 1. First, the polyurethane and the functional particle material (graphene or graphite) were weighed. The weighed polyurethane was placed in an agitator, and the functional particle material was added to the polyurethane at a low speed of 600 RPM; after all the particles had been added, the agitator speed was raised uniformly to 2000 RPM and the solution was stirred for 5 min. Next, the thickener (1-2% of the total weight) and the defoaming agent (1-3% of the total weight) were added and the solution was stirred for 35 min to obtain a well-mixed coating. The viscosity of the dope was measured with the No. 4 rotor of a digital viscometer (Shanghai Hengping Instrument Factory, SNB-2 type) at a rotating speed of 6 RPM; the viscosity was in the range of 30,000-40,000 mPa·s.
Preparation of the graphene and graphite layers: The preparation of the sandwich structure is shown in Figure 1. First, the prepared carbon fiber cloth was placed on the blade coating machine, the scraper was fixed, and the thickness (thickness = graphite layer thickness + thickness of the base cloth) was adjusted. Next, the speed and coating distance of the blade coating machine were adjusted, and an even coating was applied to the surface of the carbon fiber cloth. The scraper was removed after the coating process, and the coated fabric was placed in an oven and dried at 80 °C under vacuum for 10 min. After drying, the graphite layer was finished; the thickness of the coated material was measured, and the material was turned over. Finally, the scraper was re-fixed, the thickness (thickness = graphite layer thickness + graphene layer thickness) was adjusted, and the graphene layer was prepared in the same way. Each layer area needed to be no less than 30 × 22 cm.
Test Indicators and Methods
Test for the viscosity: An SNB-2 digital viscometer (made by Shanghai Hengping Instrument Factory, Shanghai, China) was used to measure the viscosity of prepared coatings, and the appropriate rotors and rotation speed were selected according to the range table.
Test for the thickness: A YG141D digital fabric thickness meter (made by Laizhou Electronic Instrument Co., Ltd., Laizhou, China) was used to measure the coating thickness. Multiple measurements were made at different locations on the coating materials, the measured data was recorded, and the mean thickness was calculated to reduce the measurement error.
Test for the shielding effectiveness: A ZNB40 vector network analyzer (Rohde & Schwarz, Munich, Germany) was used to measure the shielding effectiveness of the samples. According to the standard GJB 6190-2008, "Measuring methods for shielding effectiveness of electromagnetic shielding materials", the test frequency range was 0.01-3.00 GHz and the samples were 13 cm in diameter [19][20][21][22].
Test for the reflection loss: A ZNB40 vector network analyzer (made by Rohde & Schwarz, Munich, Germany) was used to measure the reflection loss of samples. The test frequency range was 0.02-3.00 GHz, the sample size was a circle with an outer diameter of 7.6 cm and an inner diameter of 3.35 cm [19][20][21][22].
Test for the dielectric properties: The dielectric properties of the materials were measured with a BDS50 dielectric spectrometer (Novocontrol GmbH, Frankfurt, Germany) according to the standard SJ20512-1995, "Test methods for permittivity and permeability of microwave high-loss solid materials". The size of the sample was 2 × 2 cm and the test range was 0.02-1.00 GHz [19][20][21][22].
Test for the surface resistance: The ohmic range of an F8808A desktop digital multimeter (Fluke Testing Instrument Co., Ltd., Everett, WA, USA) was used to measure the surface resistance of each sample. The surface resistance per unit length (1 cm) was measured on the sample surface at 20 different locations; after the maximum and minimum readings had been removed, the average value was taken to reduce the error [19][20][21][22].
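The averaging procedure described above amounts to a simple trimmed mean; a minimal sketch, with made-up readings and assuming only the single largest and smallest values are discarded, is:

```python
def trimmed_mean(readings):
    """Mean after discarding the single largest and smallest readings."""
    values = sorted(readings)
    return sum(values[1:-1]) / (len(values) - 2)


# Hypothetical surface-resistance readings (ohm per cm), for illustration only
readings = [812, 790, 805, 799, 820, 801, 795, 808, 815, 798,
            802, 794, 811, 807, 796, 809, 800, 797, 813, 803]
print(round(trimmed_mean(readings), 1))
```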
Test for the tensile strength: A 3369 INSTRON universal strength machine (made by the American INSTRON Company, Boston, MA, USA) was used to measure the tensile strength of the samples according to the testing method for the tensile properties of the GB1447283 standard, and the size of the samples was 15 × 5 cm.
Results and Discussion
To improve the wave-absorbing performance of carbon fiber cloth at low frequency, a flexible sandwich-structure carbon fiber cloth composite was designed using a physical mixing coating method; the structure is shown in Figure 2d. The composite was composed of a graphene layer absorbing the wave on the surface, a carbon fiber cloth layer reflecting the wave in the middle, and a graphite layer re-absorbing the transmitted wave at the bottom. To meet the requirement of a thin absorbing material, the thickness of the graphene and graphite layers was set at 1 mm in the experiment. In a previous experiment, we found that the composite showed a better electromagnetic absorbing effect only when the graphite absorbing layer prepared on the surface of the carbon fiber cloth contained 30% graphite in polyurethane, while the graphene content of the graphene layer had a great influence on the absorbing performance of the composite. Therefore, five composites with different graphene contents were prepared by the control variable method; the specific technological parameters are shown in Table 1. First, the influence of graphene content on the shielding and absorbing performance was investigated in the frequency range of 0.02-3.00 GHz, and the conductivity was observed. The experiments showed that the graphene content had little influence on the shielding properties between 0 and 1 GHz, while the wave-absorbing performance was enhanced significantly. To study the absorbing mechanism in this range, the dielectric properties of the composites in the frequency range of 0.02-1.00 GHz were studied. Given the excellent mechanical properties of the carbon fiber cloth, the effect of graphene content on the mechanical properties was also investigated.
Note: in Table 1, the content of functional particles refers to the weight content of functional particles relative to that of polyurethane; the viscosity of each layer of coating was 37,000 mPa·s.

Two main parameters describe the electromagnetic properties of electromagnetic protection materials, namely the shielding effectiveness (SE) and the reflection loss (RL) value. The former represents the shielding performance of the composite against electromagnetic waves and is positive; the larger the value, the better the shielding performance. The latter represents the absorbing performance and is negative; the smaller the value, the better the absorbing performance. Both are important for the sandwich-structure carbon fiber cloth composite designed in this paper to improve the absorbing performance. The conductive property plays an auxiliary role in the study of the electromagnetic properties of the material.
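To make the reflection-loss convention concrete, the sketch below evaluates the standard single-layer, metal-backed transmission-line expression RL = 20·log10|(Zin − Z0)/(Zin + Z0)|, with Zin = Z0·sqrt(μr/εr)·tanh(j·2πfd·sqrt(μr·εr)/c); the permittivity, permeability, and thickness values are placeholders, not values fitted to our samples.

```python
import numpy as np


def reflection_loss_db(eps_r, mu_r, thickness_m, freq_hz):
    """Single-layer, metal-backed transmission-line model of reflection loss (dB)."""
    c = 3e8  # speed of light, m/s
    z_in = np.sqrt(mu_r / eps_r) * np.tanh(
        1j * 2 * np.pi * freq_hz * thickness_m / c * np.sqrt(mu_r * eps_r))
    gamma = (z_in - 1) / (z_in + 1)   # input impedance normalized to free space
    return 20 * np.log10(abs(gamma))


# Placeholder material parameters, for illustration only
print(round(reflection_loss_db(eps_r=12 - 4j, mu_r=1 + 0j, thickness_m=2e-3, freq_hz=2.15e9), 2))
```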
The Shielding Performance
As can be seen from Figure 2a, within the frequency range of 0.05-3.00 GHz, the shielding effectiveness values of samples 1, 2, 3, 4, and 5 first decreased, then increased, and then decreased again. This may result from the carbon fibers overlapping with each other to form a conductive network for carrier flow; the flowing carriers then interact with the electromagnetic field to shield electromagnetic waves [23]. The absorbing performance of graphite and graphene was limited in this band, whereas the shielding performance of the materials was excellent. As the frequency increased, the shielding ability of the material was gradually enhanced; as the frequency continued to increase, the amount of incident electromagnetic waves grew and the amount that could be shielded gradually saturated until the maximum was reached. With a further increase in frequency, the electromagnetic waves that could not be shielded were transmitted through the composite, and the shielding ability gradually weakened. Increasing the graphene content increased the amounts of electrons, ions, and inherent dipoles, raised the probability of graphene particles contacting each other, densified the conductive network inside the material, and improved the conductivity. Thus, with increasing electromagnetic wave frequency, samples with higher graphene content tended to reach their peak earlier and at a higher value. The shielding effectiveness peak of sample 5, with the highest graphene content, was 69.89 dB at 1.53 GHz. In summary, compared with control sample 1, samples 2, 3, 4, and 5 showed improved shielding effectiveness in a narrow band but decreased shielding elsewhere.
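For reference, the shielding effectiveness reported by the vector network analyzer is related to the measured transmission coefficient in the usual way; a minimal sketch, independent of our instrument settings, is:

```python
import numpy as np


def shielding_effectiveness_db(s21):
    """Total shielding effectiveness (dB) from the measured transmission coefficient S21."""
    return -20 * np.log10(abs(s21))


# A transmitted amplitude of 0.03 corresponds to roughly 30 dB of shielding
print(round(shielding_effectiveness_db(0.03), 1))
```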
Absorbing Performance
As can be seen from Figure 2b, the reflection loss values of all samples fluctuated with increasing electromagnetic wave frequency in the range of 0.02-1.25 GHz. Compared with sample 1, the absorbing ability of the other samples in this frequency range improved, which corresponds with the shielding effectiveness curves in Figure 2a: the shielding ability decreased while the absorbing performance improved, and the electromagnetic wave was transformed into other forms of energy, mainly heat. In the range of 1.25-3.00 GHz, the absorbing performance of samples 1, 2, 3, and 4 tended to be stable; the absorbing performance of samples 3 and 4 was slightly improved compared with sample 1, and the minimum reflection loss value of sample 5 was greatly improved. Sample 5 had the best absorbing performance, the value being improved by about 76% compared with that of sample 1. The band with reflection loss below −5 dB covered almost one third of the whole test range, which was 0.75 GHz more than for sample 1. Graphene has a unique wave absorbing property due to the phenomena of electronic dipole polarization-relaxation and structural defect polarization-relaxation [24,25]. Graphite is a type of electrical loss absorbing agent with a large dielectric loss tangent value, which can absorb electromagnetic waves through interface polarization attenuation or electronic polarization of the medium [26,27]. Carbon fiber cloth has almost no wave-absorbing property on its own, so its ability to absorb electromagnetic waves is weak; the two absorbing particles, however, are compounded with the carbon fibers to form layer interfaces. The weak wave absorbing ability of the carbon fiber cloth can therefore be enhanced by the good impedance matching between the carbon fibers and graphite or graphene. Heterogeneous interfaces exist among graphite, graphene, and polyurethane, as well as between these components and the carbon fiber. Under the action of an electric field, charge accumulates at the interface of two heterogeneous materials, and the resulting interface polarization loss has a significant attenuation effect on the electromagnetic wave energy [28]. With increasing graphene content, the minimum reflection loss became smaller, and the frequency range with excellent wave absorbing properties became wider. This may be because the increase in graphene content raised the number of graphene particles in the layer per unit volume; the amounts of electrons, ions, and inherent dipoles also increased, the impedance matching between the carbon fibers and graphene was enhanced, and the ability to absorb electromagnetic waves improved accordingly.
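The statement that reflection loss stayed below −5 dB over roughly one third of the test range can be expressed as an effective absorption bandwidth. The following sketch computes that bandwidth from a sampled RL curve; the curve used here is synthetic and only stands in for the measured data of Figure 2b.

```python
import numpy as np

def absorption_bandwidth_ghz(freq_ghz, rl_db, threshold_db=-5.0):
    """Width (in GHz) of the frequency region where reflection loss is below
    the chosen threshold, summed over the sampled points."""
    freq_ghz = np.asarray(freq_ghz, dtype=float)
    rl_db = np.asarray(rl_db, dtype=float)
    spacing = np.gradient(freq_ghz)          # local spacing of the frequency grid
    return float(np.sum(spacing[rl_db < threshold_db]))

# Synthetic RL curve over 0.02-3.00 GHz (not the measured curve from the paper).
freq = np.linspace(0.02, 3.00, 300)
rl = -1.0 - 6.0 * np.exp(-((freq - 2.15) ** 2) / 0.5)
bw = absorption_bandwidth_ghz(freq, rl)
print(bw, bw / (freq[-1] - freq[0]))          # bandwidth and its fraction of the band
```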
Conductive Performance
To verify the change in the conductive property of the composites, we measured the surface resistance over a unit length (1 cm) of the graphene layer surface; the test results are shown in Figure 2c. The surface resistance values of samples 1 and 2 were extremely large, exceeding the measuring range of the testing instrument. This may be because the coating of sample 1 was a layer of polyurethane, a polymer with a stable structure that cannot conduct electrons, while for sample 2 the content of graphene relative to that of polyurethane was low, so the polyurethane suppressed the excellent conductivity of graphene. With increasing graphene content, the surface resistance then decreased gradually and the conductivity of the composite was enhanced accordingly. The resistance nevertheless remained very large, which helped to enhance the absorbing property of the composite.
The Influence of the Content of Graphene on the Dielectric Properties of the Composites
The dielectric properties test is very important for carbon-based wave absorbing materials, as it is an indirect indicator of the electromagnetic properties of a material. It mainly covers the real part of the dielectric constant, the imaginary part of the dielectric constant, and the loss tangent value. The real part of the dielectric constant represents the ability of the material to polarize in response to the electromagnetic wave, the imaginary part represents its loss ability, and the loss tangent value represents its attenuation ability. In this paper, the real and imaginary parts of the dielectric constant and the loss tangent value of the samples were tested in the frequency range of 0.02-1.00 GHz, as shown in Figure 3a-c; Figure 3d is an enlarged view of Figure 3c, and Figure 3e shows the action mechanism of each part of the composite toward electromagnetic waves.
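For reference, the loss tangent reported in Figure 3c,d is simply the ratio of the imaginary to the real part of the complex permittivity. A minimal sketch of this relation is given below; the numbers are made up for illustration, not values read from Figure 3.

```python
def loss_tangent(eps_real, eps_imag):
    """Dielectric loss tangent tan(delta) = eps''/eps'; a larger value means a
    stronger ability to attenuate the incident electromagnetic wave."""
    return eps_imag / eps_real

print(loss_tangent(12.0, 30.0))   # 2.5 (illustrative values only)
```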
It can be seen from Figure 3 that the dielectric properties of sample 1 were not affected by the electromagnetic wave frequency, while those of the other samples changed with frequency. The content of graphene was the main reason for the changes in the polarization, loss, and attenuation capacity of the materials toward electromagnetic waves, and this may be related to the interfacial polarization between graphene, polyurethane, and carbon fiber. Due to the lower content of graphene in sample 2, each part of its dielectric constant showed only a small improvement compared with that of sample 1. The values for the other samples, with relatively high graphene contents, varied greatly with the change in electromagnetic wave frequency.
The Real Part of the Dielectric Constant
It can be seen from Figure 3a that, compared with sample 1, the real part of the dielectric constant of the other samples increased rapidly at first and then slowly decreased toward a steady value with increasing electromagnetic wave frequency. This may be the result of the interaction of graphite, graphene, and carbon fibers. The impedance matching of the electrons, ions, and inherent dipoles in the composite, and that of their interfaces, were both good, which enhanced the ability to store charge [29][30][31]. However, with increasing frequency of the incident electric field, the effects of the internal structure and impedance matching both reached their upper limits, and the polarization ability toward electromagnetic waves reached its upper limit accordingly. As the frequency increased further, the amount of incident electromagnetic waves continued to grow, but the amount that could be polarized was limited, so the ability to store charge weakened gradually. Once the real part stabilized, its value was larger for samples with higher graphene content. This may be because the probability of graphene particles contacting each other became larger with increasing graphene content, the gaps between particles became smaller, the conductive network inside the material became denser, and the conductivity improved; the impedance matching between the carbon fibers and graphene was enhanced, and the polarization ability toward electromagnetic waves was also enhanced.
The Imaginary Part of the Dielectric Constant
It can be seen from Figure 3b that, compared with sample 1, the imaginary part of the dielectric constant of sample 2 first increased and then slowly decreased to a stable state, while the values for the other samples fluctuated greatly around a relatively stable level in the measured frequency range. This may be the result of the enhanced electronic polarization-attenuation ability of the graphite layer and the electronic dipole polarization-relaxation ability of the graphene layer, which gradually strengthened the loss ability toward electromagnetic waves [32,33]. However, with increasing frequency of the incident electric field, the eddy current loss caused by the increasing current gradually dominated, and the positive and negative charges displaced from their equilibrium positions in the layer could not return quickly enough to keep up with the changing field, so the loss ability toward electromagnetic waves showed a gradually weakening trend [34,35]. The higher the graphene content of a sample, the stronger its loss ability toward electromagnetic waves in this test frequency range, because with increasing graphene content the eddy current loss inside the material became stronger and the loss capacity toward electromagnetic waves increased. Moreover, the impedance matching of the sample with 20% graphene content was stronger than that of the sample with 30% graphene.
The Loss Tangent Value
It can be seen from Figure 3c,d that, compared with sample 1, the loss tangent value of sample 2 increased somewhat, while the loss tangent values of samples 3, 4, and 5 showed a trend of rapid increase at first and then rapid decrease to a steady state. This may be because carbon fibers can be regarded as a type of semiconductor material with excellent conductivity. The internal fibers overlap with each other to form a conductive network, and the component of the carbon fibers perpendicular to the incident electric field, together with the structure of each absorbing layer, acts to attenuate electromagnetic waves. As the frequency of the applied electric field increased, the amount of attenuated electromagnetic waves increased gradually [36]. Due to the conductivity of the carbon fibers, the electronic polarization-attenuation ability of the graphite, and the electronic dipole polarization-relaxation ability of the graphene, the attenuation ability of the composite toward electromagnetic waves within a specific frequency range was greatly enhanced, and the majority of the incident electromagnetic waves were attenuated inside the material. As the incident frequency increased further, the attenuation ability gradually weakened until it reached a stable state. The higher the graphene content, the greater the loss tangent value: sample 5, with the highest graphene content, had the best attenuation ability for electromagnetic waves, its loss tangent value reaching 304.85. With increasing graphene content, the numbers of electrons, ions, and intrinsic dipoles increased, and the attenuation ability toward electromagnetic waves was enhanced [37,38]. The attenuation ability of sample 3 toward electromagnetic waves was better than that of sample 4, similar to Figure 3b; possibly the impedance matching characteristic of sample 3 was better than that of sample 4.
The Influence of the Content of Graphene on the Graphene Layer of the Composite on Mechanical Properties
The mechanical properties of the test samples are shown in Table 2; the strength test machine is shown in Figure 4a and the displacement-load curves in Figure 4b. As can be seen from Table 2 and Figure 4b, the graphene content had little effect on the tensile strength, which indicates that the prepared composites can improve their absorbing properties without deterioration of their tensile properties. Samples 1 to 5 basically followed the trend that the maximum load increased with increasing graphene content. The graphene content became the main factor affecting the tensile property of the composite, because graphene has a single-layer carbon atom structure with all atoms in the same plane, giving it good toughness, excellent strength, and a unique deformation mechanism. This deformation mechanism can cause the hexagonal structure of the graphene in the layer to be destroyed during stretching, eventually leading to tensile fracture of the coated material [39]. Moreover, as the graphene content gradually increased, the distribution of graphene particles in the composite became more uniform; the higher the graphene content, the stronger the elastic force and anti-pressure ability of the composite, so the maximum load increased with increasing graphene content. The excellent mechanical properties of the material benefit from the joint action of the carbon fiber, graphite, and graphene, as well as the special plain-woven structure of the carbon fiber.
Conclusions
In this paper, sandwich structure carbon fiber cloth composites prepared by a coating technology were shown to effectively improve the wave absorbing performance at low frequency. The electromagnetic parameters of the composites varied greatly over the different test ranges. The sample with 40% graphene in the polyurethane had the most outstanding absorbing performance; its reflection loss value was −18.62 dB at an electromagnetic wave frequency of 2.15 GHz. The absorbing performance is mainly due to its excellent attenuation and loss ability toward electromagnetic waves and to its conductive performance. The design of the sandwich structure did not deteriorate the tensile properties of the composites. The designed absorbing material has the advantages of a simple process, environmental friendliness, and low price, and is suitable for industrial production. At low frequency, it has the further advantages of small thickness, light weight, good strength, and flexibility.
|
v3-fos-license
|
2014-10-01T00:00:00.000Z
|
1985-02-01T00:00:00.000
|
16160645
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "pd",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.2307/3429869",
"pdf_hash": "4656607ec28eb29ccef04de47b116fc1df3f5998",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42442",
"s2fieldsofstudy": [],
"sha1": "4656607ec28eb29ccef04de47b116fc1df3f5998",
"year": 1985
}
|
pes2o/s2orc
|
Clinical findings and immunological abnormalities in Yu-Cheng patients.
An outbreak of poisoning caused by ingestion of rice bran oil accidentally contaminated with polychlorinated biphenyls (PCBs) broke out in Taiwan in February 1979. Diagnosis, management, and follow-up of the patients were performed at special clinics, and subjective symptoms and cutaneous changes such as peculiar acneform eruptions and pigmentation were recorded. The patients were divided into six age groups of both sexes, and the body surface of the patients was divided into 12 sections according to the nature of the skin. The prevalence of each type of cutaneous change was assessed statistically by the chi-square test. Examination of immune system function in the patients at 1 year revealed: decreased concentrations of IgM and IgA but not of IgG; decreased percentages of total T-cells, active T-cells, and helper T-cells, with normal percentages of B-cells and suppressor T-cells; suppression of the delayed-type response to recall antigens; enhancement of spontaneous lymphocyte proliferation; and enhancement of lymphocyte proliferation with PHA, PWM, and PPD stimulation but not with ConA. Follow-up studies 3 years later showed decreased blood PCB levels; some improvement of subjective symptoms and cutaneous changes; recovery of the skin testing response to PPD; a normal percentage of total T-cells and an increased percentage of suppressor T-cells; and enhancement of lymphocyte proliferation, spontaneously or under the stimulation of various mitogens.
Introduction
An outbreak of poisoning with peculiar acneform eruptions and pigmentation broke out in Taiwan in February 1979. Probably over 2000 persons were affected. The source of poisoning was found to be a specific brand of rice bran oil which was contaminated accidentally with polychlorinated biphenyls (PCBs). PCB was detected from suspected oil samples at concentrations of 4.8 to 204.9 ppm (52.0 ± 38.7 ppm mean value) (1). The blood levels of the poisoned patients were 3 to 1156 ppb with a mean of 89.14 ± 6.90 ppb (2). The district of poisoning involved mainly two prefectures, Tai-Chung and Chang-Hua.
In general, the age distribution of patients, the symptomatology, the skin pathology and the way the poisoning occurred (3)(4)(5), were similar to those of the Yusho outbreak in Japan in 1968 (6)(7)(8)(9). The disease has been termed Yu-Cheng, which means oil disease (10). This paper deals with the subjective symptoms, cutaneous changes and immunological abnormalities.
Subjects and Methods
A special clinic was established for the diagnosis, management and follow-up of the patients at the Provincial Tai-Chung Hospital and the National Taiwan University Hospital. Serum protein fractions were studied by cellulose acetate electrophoresis in the first year. Serum immunoglobulins (IgG, IgA, and IgM) of 30 patients were measured by the single radial immunodiffusion method of Mancini et al. (11) with commercially available immunoplates (Behring, West Germany).
Skin testing with recall antigens was performed by intracutaneous injection of 0.1 mL of streptokinase/streptodornase antigen in 143 patients in the first year. Tuberculin skin tests were done in 83 cases in the first year and in 30 cases 3 years later.
Thirty patients were tested for T-cell/B-cell numbers and subpopulations in the first year. Mononuclear cells were isolated by the method of Boyum (12). Active E rosette, E rosette, and erythrocyte antibody complement (EAC) rosette tests were performed according to the method of Kerman et al. (13) with slight modification. Ox RBC-IgG (EAG) and ox RBC-IgM (EAM) were prepared according to the method of Moretta et al. (14) for enumeration of Tγ and Tμ cells. A monoclonal antibody technique was utilized for the detection of lymphocyte subsets 3 years later (15,16).
A lymphocyte proliferation test (17) with various mitogen stimulants such as PHA, ConA, PWM, and PPD was done in 83 cases in the first year and in 30 cases 3 years later.
Results and Discussion
Subjective Symptoms Symptoms in the Early Stage. The complaints at the beginning of the disease as obtained from histories are listed in Table 1. Ocular symptoms, namely, increased discharge from eyes (29%), swelling of the eyelids (18.4%), weakness of eyesight, and soreness or easy fatigue of the eyes (14%) were the major complaints. The other complaints included cutaneous changes, constitutional symptoms and skeletomuscular disturbances. Subjective Complaints on the First Visit to Clinics. As shown in Thble 2, ocular symptoms such as disturbance of vision and easy fatiguability (51.4%) or soreness and irritation of the eye (18.8%) were still the primary complaint. Many patients developed constitutional symptoms such as general malaise (37.3%), headache or dizziness (21.9%), cough (14.2%), poor appetite (4.9%), skeletomuscular symptoms, i.e., soreness, weakness or swelling of the limbs (6.8%), neck pain or lumbago (15.6%), and numbness of the limbs (37.3%), etc. Abnormal menstruation was noted in 10.7% of female patients, pruritus in 35.6%, and hyperidrosis of the palms and soles developed in some patients.
Subjective Complaints in Follow-up Cases. Among 248 cases with complete follow-up records during the first year, the incidence of subjective complaints changed somewhat, as shown in Table 3. Ocular complaints, pruritus and cough decreased, but constitutional complaints such as headache, dizziness, malaise and reduced appetite increased. About 10% of female patients had abnormal menstruation, but none of male patients complained of impotence.
Cutaneous Changes
The principal dermatological findings can be divided into two groups as follows: (1) abnormal keratotic changes, including follicular keratotic changes such as follicular accentuation, horny plugs, comedo formation, acneform eruptions, cysts, Meibomian gland enlargement, and sudaminalike eruptions and xeroderma, keratotic plaques, and deformity of nails, and (2) pigmentation of mucosa, skin and nails.
Follicular keratotic change caused follicular accentuation with horny plugs and sudaminalike eruptions in the early stage. These were followed by comedo formation and acneform eruptions, then in some cases by cyst formation, including Meibomian gland enlargement. Due to immunological deficiency of patients, these follicular changes were combined with secondary bacterial infection and consequent pustules or abscess formations and residual ugly scars.
A biopsy specimen showed an opened follicular orifice filled with a layered keratinous substance, cell infiltration with giant cells around keratinous cysts or ruptured keratinous cyst wall with inflammatory cell infiltration (Fig. 1). The epidermis had no acanthosis and revealed hyperkeratosis and an increased amount of melanin in the basal layer (5).
The total of 358 cases was divided into six age groups, and the distribution of the main follicular keratotic changes is listed in Table 4. Severe acne and cyst formation appeared to be more frequent in adult group; on the contrary, accentuation of hair follicles and plug formation were more prominent in the young group. According to the physiological and anatomical natures of skin, the body surface was divided into 12 sections, and the distribution of follicular keratotic changes by FIGURE 1. Hyperkeratosis, opened follicular orifice, and keratotic plug; increasing melanin in the basal layer. section is shown in Table 5. The distribution of changes was tested statistically by chi-square test as follows: horny plugs in axillary cavities and on extremities; comedo formation on cheek, forehead, nose, chin, submandibular region, ear, trunk and seborrheic area; acneform eruptions on cheek, forehead, chin, submandibular region, trunk and seborrheic area; and cyst formation on external genitalia. Each cutaneous change is described and discussed. Follicular Accentuation and Horny Plug. The hair follicles became accentuated and elevated, and the orifices enlarged and plugged with blackish keratinous material (Fig. 2). The lesions were prominent in the axillary cavities and on the extremities (Table 6), especially on the extensor aspects. The prevalence of this change was similar in both sexes, but was seen with more frequency in the younger group (below 20 years of age), and the sites most often affected being cheeks, axillary cavities and extensor of extremities (Table 6).
Comedo Formation. Comedo formation was distributed at sites of predilection of ordinary comedones, namely, cheeks, forehead, nose and chin, but development of this eruption at ears, axillary cavities, external genitalia and extremities ( Fig. 3) was one of the characteristics of this poisoning ( Table 7). The seborrheic area is the favorite site of ordinary comedo, but this tendency was not seen in our series. As shown in Table 7, the comparison between males and females indicates that the prevalence in the submandibular region and trunk was higher in males, and a similar comparison between children and adults showed significant differences in the prevalence at these sites and at the chin. The percentages of comedo formation in children and females were almost the same. There were two types of comedo (Figs. 3 and 4), but the black comedo was the primary one, and the size varied from pinhead size to rice grain size. After the removal of the black comedo, depressed scars usually remained (Fig. 3). Acneform Eruptions. The development of acneform eruption was less frequent than the occurrence of comedo formation (Tables 4 and 5), while both lesions occurred with similar frequency in the submandibular region, trunk including seborrheic area, external genitalia and the extremities. The comparisons between males and females show the same trend as comedo formation, but the development of acneform eruption was more prominent in adults, especially on the cheeks, chin, submandibular region and trunk (Table 8). It could be that the increased secretion of sebum due to maturity was one of the main factors. Cyst Formation. Some lesions enlarged to rice grainto pea-sized cysts (Figs. 5 and 6), but the prevalence was much smaller than that of comedo or acne formation (Table 4 and 5). The development on external genitalia, including pubic area, especially in male adults, was characteristic. The difference in distribution of cyst formation between males and females, and between children and adults is illustrated in Table 9, the prevalence being higher for external genitalia and chin in males; for axillary cavities in females; and for chin, trunk and external genitalia in adults.
The follicular keratotic changes were often complicated by bacterial infection, with formation of painful and inflammed atheromalike abscesses or pustules (Fig. 5). After the disappearance of follicular keratotic changes, these lesions remained as numerous depressed atrophic scars (Figs. 3 and 6). In our series, secondary infection occurred most frequently in the 11 to 20 year age group. Enlargement of Meibomian Glands and Edema of the Eyelids. Increased discharge from the eyes was the most frequent subjective complaint ( Table 3). The obstruction of the Meibomian glands was white or yellow, elevated dots at the brim of the eyelids in the beginning (Fig. 7) and enlarged cysts later. The prevalence of this gland enlargement was 15 to 20% as shown in Table 10. It caused irregularity of the lid margin.
The swelling of eyelids was another frequent subjective complaint (Table 3), and most patients, especially females, showed this symptom (Table 10).
Xeroderma. Dryness of skin was more frequent in children and females (Table 10). The skin was coarse to the touch and had very fine, diffuse, branlike scales. This phenomenon was usually combined with accentuation of hair follicles and horny plugs.
Sudaminalike Eruption. The obstruction of hair follicles and hyperkeratosis at the vestibulum of hair follicle orifices caused sudaminalike eruptions in some cases (Table 10), and many fine vesicular red eruptions densely grouped as patches. They were frequently observed at flanks, waists, lateral aspects of thighs, the anterior aspect of knees or the lateral aspect of upper extremities.
Keratotic Plaques. 'rylotic, slightly yellowish thickening of the skin was observed at the eminences of palms and soles (Fig. 8). It was usually present in severe cases, with a prevalence of 5 to 10% (Table 10).
Deformity of Nails. Due to the abnormal keratotic condition, nail deformities occurred in many cases; the prevalence was 30 to 40% (Table 10). Both lateral edges of the nail were concave and entered the paronychial grooves deeply as an ingrowing nail, and the natural concavity of the nail body was obliterated. This change was most frequently observed in the big toe; severe cases showed koilonychial change (Fig. 9).
Pigmentation. This is a specific change of chronic PCB poisoning. It occurred in mucosa, nails and skin ( Table 11). The hue varied from brown to brownish gray to gray, and the tint also varied from light to deep. The mucosal pigmentation had a violet or blue hue in some cases.
The intensity of color of the mucosa varied within the individual. Some patients showed the same shade of color at every site; others had a deeper color on the gingivae and a light color on the conjunctiva or vice versa.
Gingival pigmentation was most frequent, and the prevalence was as high as 93% in girls younger than 10 years of age although the rate of this pigmentation among healthy persons is known to be near 10%. The average prevalence was 82.3%, being higher among females, especially among female children (Table 12).
The pigmentation was a wide band formed on the portion of gingivae in diffuse contact with teeth ( Fig. 10).
Conjunctival pigmentation occurred in two-thirds of the cases. Its prevalence followed the same trend as gingival pigmentation, in that it was very high in the younger group, especially among girls younger than 10 years of age. It occurred on every region of the conjunctival mucosa and more frequently on the lower palpebral conjunctiva.
Lip pigmentation occurred in one-third of the cases and more frequently among children (Table 12). It was prominent in the vermillion of the lower lip (Figs. 10 and 11) and showed diffuse, spotty, linear or mottled patterns. Of skin pigmentation, pigmentation on the nasal tip was characteristic and had high prevalence (63% in all follow-up cases). It was less frequent in adult males (Tables 12 and 13, Fig. 11).
Diffuse brownish-gray pigmentation over the whole body occurred in a small proportion of cases and was (Tables 12 and 13). It was more prominent at extensor aspects such as knee and elbow, or at the sites of skin eruption. Nail pigmentation was a very characteristic feature of chronic PCB poisoning and had the highest prevalence of all the clinical signs (Table 12). It appeared diffusely over the whole nail body and skin surrounding the nail (Figs. 9 and 12).
The severity of cutaneous lesions in hyperkeratosis and pigmentation were not parallel. Some patients with a very high blood PCB concentration and severe pigmentation had no other cutaneous eruption.
Grading of Disease Severity According to Clinical Signs
In the special clinics, the patients were diagnosed clinically on the basis of a positive history of exposure to the specific rice-bran oil as well as on the nature and the extent of mucocutaneous lesions.
Those patients who had a positive history and ocular symptoms, such as hypersecretion, swelling of the eyelids and enlargement of Meibomian glands, were very suggestive of having PCB poisoning and were classified as Grade I° (Table 14). In Grade I, patients presented only pigmentary changes on the mucosa and skin without developing follicular lesions. The patients in Grade II manifested localized comedones and accentuation of the hair follicles. In Grade III, the patients had localized acneform inflammatory lesions with or without external genital cysts. The patients in Grade IV showed the most prominent cutaneous lesions, with generalized acneform or keratotic follicular eruptions and were frequently associated with secondary bacterial infection (Table 14).
When classified in this way, as shown in Table 15, Grade 1°included 24 cases (6.7%); Grade I, 132 cases (36.7%); Grade II, 65 cases (18.2%); Grade III, 97 cases (27.2%); and Grade IV, 40 cases (11.2%). The females outnumbered the males in the lighter grades (I°, I and II), the ratio being 1 to 0.71. However, in Grades III and IV, respectively, the number of male patients was 1.02 and 3 times higher than females. When age was considered, patients 11 to 30 years of age had a higher proportion in Grades III and IV, being 49 and 47%, respectively (Table 16). The patients under 10 years of age and above 50 were mainly in the lighter grades (I', I and II), being 72 and 78% respectively. This tendency suggests that the severity of poisoning among patients 11 to 30 years of age, especially among males, may be caused by larger amounts of daily food intake, namely, they ingested larger amounts of the contaminated rice-bran oil. A total of 89 cases in the special clinics were followed for 8 to 17 months (average 11.5 months) clinically, and the change of disease severity is shown in Tables 17 and 18. Most of the cases (53.9%) remained at the same grade, and 38.2 and 7.9% of the patients showed decreased severity and increased severity of disease, respectively. Many possible methods were tried for the treatment of these patients in the special clinics, but there was bitter disappointment at the lack of success, so the number of follow-up cases rapidly decreased thereafter. Although there are no reliable records, the general condition, including cutaneous changes, seemed somewhat improved in cases observed 3 years later.
The relationship between the severity of disease and blood PCB concentration is shown in Table 19. There I II III IV First Io 2 I 9 3 II 5 17 1 III 5 17 14 3 IV 3 was no notable association. Figure 13 shows the change of blood PCB in these patients. The interval between the first and the second quantitative analysis for PCB, was 72 and 453 days, the average being 293.58 days. The variation of individual PCB concentrations was very large, so no conclusions could be drawn from these results.
The quantitative analysis of PCBs in the patients' blood was continued during these years. At the Chang-Hua Prefecture, PCB concentration of the blood in 83 cases 1 year after onset and that in 17 cases 3 years later were 4 to 558 (average 149.5) and 2 to 161 (average 53.6) ppb, respectively.
Immunological Studies
Albumin, Globulin, and Immunoglobulin. This study was carried out by Chang et al. in the first year (18). As shown in Table 20, (x2-globulin in serum of patients was mildly increased (0.72 + 0.20 g%), while the y-globulin level was mildly decreased (1.17 + 0.39 g%). In view of this result, some suppression in humoral immunity may be suspected. The concentration of blood immunoglobulins was also studied by Chang et al. (19). Table 21 shows the serum levels of immunoglobulin in the patients and normal control. Significant decreases in serum IgA and IgM were noted in the poisoning group (p < 0.01 and p < 0.001, respectively), while the concentration of IgG was in normal range. These data also suggest the suppression in humoral immunity by PCB poisoning.
Skin Testing. Delayed hypersensitivity was studied by skin testing with recalling antigens. An intracutaneous test with a solution containing streptokinase and streptodornase was performed in 143 patients by Chang et al. in the first year (17). As in Table 23, the positive rate during the first year in PCB-poisoned individuals was 48.2%, while that in healthy controls was 73.7%. These data suggest that suppression of cellular immunity occurred in the patients. Three years later, positive response of PPD skin test increased to 61.5% in PCB patients, suggesting some recovery of cellular immunity in PCB-poisoned patients.
Lymphocytes in Peripheral Blood. By using different rosette techniques to enumerate the percentage of lymphocyte subpopulation, the percentage of total T-cells, active T-cells and T -cells (helper T-cells) were shown to be decreased as reiative to controls (Table 24). The number of total lymphocytes and the percentage of B-cells and TY-cells (suppressor T-cells) were not affected. This study was performed in the first year (18).
A newly developed method, a monoclonal antibody technique, was used to identify the lymphocyte subset 3 years later. As shown in Table 24, the percentage of total T-cells by the E rosette method and OKT-3 were recovered. The percentage of OKT-4 (helper T-cells) was still low but that of OKT-8 (suppressor T-cells) increased. Thus the immunoregulating index (percentage of OKT-4/OKT-8) in PCB patients was lower than that of healthy control (1.2 + 0.4 vs. 1.9 + 0.4). These data suggest that the cellular immunity in PCB victims recovered partially.
Lymphocyte Proliferation Test. This test was performed in the first year with the culture media of supplemented RPMI-1640 containing 10% fetal calf serum. The spontaneous proliferation of lymphocytes of PCB patients was slightly enhanced but not statistically significantly. Among the tests stimulated by various mitogens, the response to PHA (phytohemagglutinin) and PWM (pokeweed mitogen) showed some increase, but the response to ConA (Concanavalin A) was not significant (Table 25). The test stimulated by PPD (tuberculin) was also done in the first year, and the response was also significantly enhanced (Table 26).
The lymphocyte proliferation test was studied again 3 years later, with AB serum instead of fetal calf serum being used in the culture media. As shown in Tables 27 and 28, the enhancement of spontaneous proliferation of lymphocytes in PCB patients was very significant, and the tests stimulated with PHA, ConA, PWM and PPD also revealed significant enhancement of lymphocyte proliferation. This may be an abnormal rebound phenomenon caused by sublethal immunotoxic dosages of PCB and its derivatives, but this hypothesis should be studied.
In summary, at present in Yu-Cheng patients the positive rate of the tuberculin test recovered somewhat with time. The total number of T-cells returned to normal. The suppressor T-cells increased, but helper T-cells were still lower, so the immunoregulating index (OKT-4/OKT-8) was still very low. The number of B-cells remained in the normal range. Lymphocyte proliferation stimulated by various mitogens is still enhanced.
|
v3-fos-license
|
2019-03-11T13:12:22.436Z
|
2013-12-18T00:00:00.000
|
34665481
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://www.omicsonline.org/open-access/living-conditions-and-illness-among-injecting-drug-users-in-montreal.hccr.1000111.pdf",
"pdf_hash": "e03c87996b46dd3a989ba2f91ea69e96c7ca5250",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42444",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"sha1": "f6165f9ac08d939362e5a02820d39dead0319039",
"year": 2013
}
|
pes2o/s2orc
|
Living Conditions and Illness among Injecting Drug Users in Montreal
Injection drug use constitutes a serious public health problem in the developed world, especially in North America [1-6]. At the end of the last decade in Montreal, Quebec, about 12,000 persons [7] were injecting drugs. Public health concerns related to illegal drug injection include the spread of HIV [3,8,9] and HCV infections [10-12], sexually transmitted diseases [13] and mental health problems [14,15]. In spite of their high morbidity, injection drug users (IDUs) reportedly misuse health services, particularly by overusing hospital emergency rooms [16-23].
The living conditions of IDUs result in an accumulation of health risk factors. In addition to illegal drug abuse, alcohol and cigarette consumption are common [17]. Many IDUs have a history of homelessness [4]. Substance abuse in IDUs constitutes a major determinant of unsafe sexual contacts leading to sexually transmitted diseases [13]. Drug dependency, financial strain and debt may drive them to violence or to trading sex to obtain money or drugs [20,24,25]. Such harsh living conditions may eventually lead to illegal behaviors and multiple imprisonments [26][27][28]. The social and health problems of IDUs are interwoven with poverty and social exclusion. Phelan et al. [29] have extensively studied the way such conditions could impact people's health status, calling them "fundamental causes" of social inequalities in health.
A number of health problems are associated with drug use. It has been estimated that 70 to 80% of IDUs in Montreal may be infected with HCV [10][11][12]. The estimated prevalence of HIV-infected IDUs amounts to 11% in Montreal city [8,30]. Dual diagnosis of drug addiction and psychiatric problems is frequent. At the end of the last decade, the number of illicit drug users (by injection or any other route) suffering from mental illness in Montreal city ranged from 25,000 to 40,000 [14]. A study carried out in other contexts among drug users reported a high frequency of complications stemming from drug injection, such as soft tissue infections, thrombosis, embolism and septicemia [31]. Illicit drug use is frequently associated with alcohol abuse and cigarette smoking [17], sex trade and traumas from violence [13,25,32]. One study has reported an association between overdoses from heroin and suicide attempts [33].
Despite over a decade of intensive harm reduction strategies and the many steps taken to address drug-related health issues, the health status of IDUs is still a cause for concern. In a study carried out in Vancouver, Spittal et al. [34] revealed the high health risk of female IDUs (compared to the female general population of British Columbia) by observing a fifty-fold increase in their mortality rate. Corneil et al. [35] showed an increased risk of HIV infection among Vancouver IDUs who reported living in unstable housing conditions. This study identifies both proximal and distal factors associated with recent episodes of illness among IDUs. By targeting acute manifestations of illness in IDUs, such as overdoses and soft tissue infections, intervention programs only partly address morbidity issues in this population. Etiologic research should first identify distal determinants of IDUs' morbidity, in order to implement well-tailored intervention programs targeting the entire spectrum of IDUs' risk factors for morbidity. By focusing prevention on those distal determinants, or "fundamental causes" according to Phelan et al. [29], health service managers could redirect efforts to integrated care for chronic problems such as alcohol dependency, HIV infection, hepatitis C and mental health, in tandem with social services devoted to the IDU population.
Study population and data collection
The study population consisted of injecting drug users living in Montreal who were 18 years of age or older. Participants were recruited on the streets of downtown Montreal using a convenience sampling method in which selected participants could refer their IDU friends to the interviewer [36,37]. The interviewer, a former injection drug user, completed a training session before the survey and had easy access to IDUs. To avoid selection bias due to subjective selection of known IDUs, the interviewer was instructed not to contact friends or relations, but simply to inform IDUs about the study and distribute his business cards so that those interested could call to arrange an interview. Each IDU contacted was also asked to invite other known IDUs to participate in the study. The inclusion criteria were: residence in Montreal for at least one year, age 18 years or over, and intravenous drug use at least once in the previous 6 months. Participants signed an informed consent form, which contained a numerical code to match with the anonymous questionnaire. Participants were interviewed, and the questionnaire was filled in by the interviewer. Most of the interviews took place in our research office, but some were carried out in other offices in the participants' neighborhoods, such as community health centers, syringe exchange program centers, etc. Confidentiality and discretion were the conditions required to use an office for the interview. The Research Ethics Board of the University of Montreal approved the study.
On completion of the questionnaire, participants received a payment of CAN$10 to compensate them for their time. Respondents whom the interviewer judged to require particular services were also given a brief counseling session and referred to a social service. The use of a single interviewer was helpful in preventing people from answering the questionnaire more than once, since he could generally recognize those who had already participated. The study was conducted from February to September 2005.
Measurements
The dependent variable for this study was the self-reported occurrence of any illness episode in the 6 months before the interview, in response to the questions: "Have you suffered from any illness during the last 6 months? What was your health problem?" The list of potential explanatory variables was drawn from the literature on the morbidity of IDUs and included sociodemographic variables, economic conditions, marginality, risk behaviors and health status, according to the schema proposed by Estébanez et al. [38]. Socio-demographic characteristics (gender, age, education and sexual orientation) may have a direct or indirect impact (through marginality and risk behaviors) on the occurrence of episodes of illness. Sexual orientation included three categories: homosexuals, heterosexuals and bisexuals. Employment, type of housing and obvious indicators of financial strain, such as receiving regular help from a community center (i.e. clothing, food or furniture) and begging on the street, were used as a measure of economic condition. Combining the latter two factors (receiving help and begging) yielded the variable "number of indicators of financial strain". Employment status consisted of three categories: full time job, other jobs (part time job, independent job, occasional jobs), and welfare.
Marginality may act directly, or indirectly through risk behaviors, on the occurrence of an episode of illness. The marginality indicators included: sex trade, fines for criminal offenses, previous imprisonment, unemployment and homelessness. Living arrangements consisted of three categories: independent living arrangements (rented apartment or house), dependent living arrangements (family house, friend's house, public shelter for homeless people), and homelessness (living on the street or in abandoned houses). The variable "number of marginality indicators" was created by combining history of imprisonment, fine for criminal offense, stealing and lack of identity cards.
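A minimal sketch of how such composite counts might be built from coded survey data is given below; the column names and toy records are assumptions introduced for illustration, not the study's actual coding scheme.

```python
import pandas as pd

# Hypothetical coded responses (1 = yes, 0 = no); column names are assumed.
df = pd.DataFrame({
    "begging":        [1, 0, 1],
    "community_help": [1, 1, 0],
    "imprisonment":   [0, 1, 1],
    "criminal_fine":  [0, 0, 1],
    "stealing":       [1, 0, 1],
    "no_id_card":     [0, 0, 1],
})

# Composite scores as described in the text: simple counts of positive indicators.
df["n_financial_strain"] = df[["begging", "community_help"]].sum(axis=1)
df["n_marginality"] = df[["imprisonment", "criminal_fine", "stealing", "no_id_card"]].sum(axis=1)
print(df[["n_financial_strain", "n_marginality"]])
```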
Risk behaviors were measured for the six months before the interview and included alcohol consumption, the type of drug injected, the frequency of drug injection, and sharing of injection materials. Sharing injection materials was defined as giving used materials (syringe, needle, filter, etc.) to one another to inject drugs in the prior 6 months. The injection drugs considered were mainly heroin, cocaine and their derivatives (crack cocaine, speedball). Two variables - having participated in treatment for drug abuse and past or present participation in a needle exchange program - were used to evaluate access to social support, while prior visit to preventive health clinics was the criterion for health services utilization.
The health-related variables were chronic infectious disease (HIV infection, HCV infection) and a history of mental illness. Mental illness was defined as any psychiatric illness diagnosed by a healthcare professional (such as schizophrenia, schizophrenia spectrum disorders, bipolar disorders, mania, major depression, anxiety disorders, etc.), and not merely any self-perceived mental disorder not evaluated by a mental health professional. Depressive symptoms were assessed using a 13-item CES-D (Center for Epidemiologic Studies Depression) scale [39], scored on a scale of 0 to 39 points (i.e., 0 to 3 points per item) with a cut-off at 13 points.
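As an illustration of the depression measure, the sketch below sums a 13-item CES-D questionnaire (0-3 points per item, total 0-39) and flags high depressive symptoms. Whether the cut-off of 13 is applied as "13 or more" is an assumption here, since the text only states a cut-off at 13 points.

```python
def cesd13(item_scores, cutoff=13):
    """Total score of a 13-item CES-D scale and a high-symptom flag
    (here interpreted as total >= cutoff; each item must be scored 0-3)."""
    if len(item_scores) != 13 or any(not 0 <= s <= 3 for s in item_scores):
        raise ValueError("expected 13 items each scored 0-3")
    total = sum(item_scores)
    return total, total >= cutoff

total, high = cesd13([1, 2, 0, 3, 1, 1, 2, 0, 1, 2, 1, 0, 2])  # illustrative answers
print(total, high)   # 16 True
```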
All those potential risk factors for episodes of illness can be divided in two main groups: distal factors (demographic factors, socioeconomic conditions, marginality and risk behavior) and proximal factors (health status and chronic diseases).
Statistical analysis
Data quality was monitored by checking for possible duplications after listing subjects in an Excel file by name, reported age, birthday, and age calculated from the reported birthday. Major discordances or incoherencies (e.g. a declared age that did not match the age calculated from the birthday) and similarities in names or age were then analyzed using SPSS software. Questionnaires considered to be duplicates, based on the foregoing information and a comparison of participants' signatures, were excluded.
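A minimal sketch of this duplicate screening is shown below in Python rather than Excel/SPSS; the column names and records are hypothetical, and the matching rules (same name plus birthday, age discrepancy above one year) merely stand in for the manual comparisons described.

```python
import pandas as pd

subjects = pd.DataFrame({
    "name":         ["a. b.", "c. d.", "a. b."],
    "reported_age": [31, 27, 31],
    "birthday":     pd.to_datetime(["1974-02-01", "1978-06-15", "1974-02-01"]),
})
interview_date = pd.Timestamp("2005-06-01")
subjects["age_from_birthday"] = (interview_date - subjects["birthday"]).dt.days // 365

# Flag incoherent ages and candidate duplicates for manual review of the questionnaires.
subjects["age_mismatch"] = (subjects["reported_age"] - subjects["age_from_birthday"]).abs() > 1
subjects["possible_duplicate"] = subjects.duplicated(subset=["name", "birthday"], keep=False)
print(subjects[["name", "age_mismatch", "possible_duplicate"]])
```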
Bivariate analyses were performed for each independent variable to assess the potential association with the occurrence of illness in the previous 6 months, and the statistical significance of the relation was assessed using Pearson's chi-square test. Multiple logistic regression models were fitted using staggered entry of variables according to the previously described schema: sociodemographic factors, economic conditions, marginality, risk behaviors, support and service use, and health status. Within each block, variables were selected using a stepwise backward strategy in which the statistical criteria for entry and retention of variables in each model were p ≤ 0.10 and p ≤ 0.05, respectively. Blockwise entry is the best strategy to highlight changes in the values of the coefficients following inclusion of explanatory variables in the model. The log-likelihood statistic and the chi-square test were used to assess improvement in the model, while the Hosmer-Lemeshow test was used to evaluate its goodness of fit.
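The blockwise model-building described above can be sketched as nested logistic regressions compared by a likelihood-ratio chi-square, as below. The dataset is simulated and the variable names are assumptions; the stepwise backward selection within blocks and the Hosmer-Lemeshow test are not reproduced in this sketch.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 666
df = pd.DataFrame({
    "illness":            rng.integers(0, 2, n),
    "female":             rng.integers(0, 2, n),
    "age":                rng.integers(18, 60, n),
    "n_financial_strain": rng.integers(0, 3, n),
    "shared_materials":   rng.integers(0, 2, n),
})

def fit_logit(data, predictors):
    """Fit a logistic regression of illness on the given predictors."""
    X = sm.add_constant(data[predictors].astype(float))
    return sm.Logit(data["illness"], X).fit(disp=0)

m1 = fit_logit(df, ["female", "age"])                                             # block 1
m2 = fit_logit(df, ["female", "age", "n_financial_strain", "shared_materials"])   # + block 2

# Likelihood-ratio test for the improvement contributed by the added block.
lr = 2 * (m2.llf - m1.llf)
print(f"LR chi2 = {lr:.2f}, p = {chi2.sf(lr, df=2):.3f}")
print(np.exp(m2.params))   # odds ratios for the adjusted model
```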
Results
A total of 678 subjects responded to the questionnaire. After completing the data-quality monitoring process, 12 questionnaires were excluded. Further analyses were carried out on the remaining 666 participants. Only 17% of them had completed more than secondary school (college or university); 12% had a full time job, while 70% were receiving social welfare benefits. Six percent had no identity card, 38% begged on the street, 48% had a history of imprisonment, 49% reported receiving help from a community center on a regular basis, and 20% were living strictly on the street.
Within the whole sample, 176 subjects (26%) reported an episode of illness in the previous 6 months. These episodes included drug overdoses and abscesses at the site of injection; acute infections such as pneumonia, influenza, and gastroenteritis; mental illness and suicide attempts; traumas from violence; and other problems such as fatigue, indigestion, herpes and sepsis. Overall, 140 participants reported 1 episode, 31 reported 2 episodes and 5 reported 3 episodes. Tables 1-3 show the distribution of the sample according to the selected risk factors for episodes of illness. Overall, 36% of females versus 25% of males had some illness in the 6-month recall period. Older IDUs and bisexuals were more likely to report illness episodes than younger IDUs and heterosexuals or homosexuals (Table 1). Indicators of financial strain (begging on the street, receiving help in community centers) were associated with episodes of illness. Homeless IDUs were more likely to report episodes of illness than those living in the home of friends or family, or those who lived in their own house or apartment (respectively, 34%, 28% and 24%; P=0.093).
All the marginality indicators were related to the frequency of episodes of illness, for example involvement in sex trade (42% versus 24%; P<0.001) and a history of imprisonment (34% versus 21%; P<0.001) (Table 3). The type of drug consumed and the frequency of drug injection were associated with episodes of illness. Those who injected both heroin and cocaine reported more episodes than those who injected only heroin or only cocaine (37%, 31% and 23%, respectively; P=0.007). IDUs who injected drugs more than once a day were more likely to have an episode of illness than those reporting a lower frequency of injection (33% versus 21%; P<0.001). Sharing injection materials was also associated with a higher frequency of illness (45% versus 23%; P<0.001). IDUs using community services were more likely to report illness in the previous 6 months (Table 4). In particular, those with a history of drug abuse treatment, those receiving help from community centers and those who had used preventive services in the past for STD testing, hepatitis testing, vaccination or needle exchange had more illness episodes than those who had not used these services. Chronic infections with HIV and HCV, mental illness and current high depressive symptoms were also associated with a higher frequency of illness episodes in the previous 6 months.
The estimated odds ratios using multivariate analysis are shown in Table 5. Gender and age were significantly associated with the probability of episodes of illness even after adjusting for all other risk factors, and their coefficients remained stable throughout the five models. Female IDUs had a two-fold increased risk of illness compared to male IDUs. Older IDUs were more likely to have an illness than younger IDUs. Homosexuals had 60% more risk of illness than heterosexuals. This association was not significant, except after adjusting for marginality. Conversely, the likelihood of illness in bisexuals was stable up to the final model and consistently remained twice as high as in heterosexuals. Financial strain and marginality were also independently associated with illness, and had stable coefficients. Persons who injected heroin had a two-fold increase in the risk of illness compared to those who injected cocaine only. Those who injected both cocaine and heroin had the highest odds of illness compared with those who injected just one of these drugs. Sharing injection materials was associated with a two-fold increase in the risk of illness. In the final model, three chronic conditions were associated with the occurrence of illness: HIV infection, HCV infection and mental illness. As shown by the ascending values of the Chi-square test and the descending values of the log-likelihood statistic, the final model improved progressively as covariates were added to the equations.
Discussion
The purpose of this study among Montreal IDUs in 2005, 20 years after the rapid spread of HIV among IDUs in North America, was to identify distal and proximal factors associated with recent episodes of illness. These IDUs, whose mean age was 31 years (± 10), and who had been using injection drugs for an average of 9.52 years (± 7.47), constitute a cohort of people who have managed to survive amidst the HIV epidemic.
This study aimed to increase our understanding of the fundamental causes of IDUs' ill health in spite of many years of harm reduction programs. The results can be summarized as three principal findings. First, financial strain and marginality are associated with recent episodes of illness. Second, risk injecting behaviors continue to be highly prevalent and, as expected, are associated with recent episodes of illness. Third, mental illness, HIV and HCV infections are at the core of poor health in IDUs.
Multivariate analyses show the major predictors of recent illness episodes, illustrating mainly that the pathway from socioeconomic conditions to occurrence of illness in IDUs is shaped by financial strain, marginality and risk behaviors, mostly in those whose health status is already weakened by chronic viral infections.
Financial strain seems to be captured accurately by the two variables "begging on the street" and "receiving help from a community center". Other variables could be considered, such as "homelessness" and "having no job", but they are only weakly associated with disease occurrence (P=0.093 and 0.150, respectively) and do not reflect the IDUs' financial hardship as directly.
Indicators of marginality seem to be well represented by history of imprisonment, fines for criminal offenses, stealing and having no identity card. We considered including sex trade with these factors but this was rejected in the multivariate model using backward regression, because its association with disease occurrence is mediated by other factors of marginality.
The association between heroin injection and illness was not significant but became stronger once chronic diseases were considered. Among those who injected both heroin and cocaine, the association was even stronger and increased in magnitude when chronic diseases were taken into account. This observation illustrates the synergistic effect between drug abuse and ill health: drug abuse is more detrimental in IDUs whose health status is already compromised.
Our findings are similar in some respects to those of other studies in IDU populations. The participants were predominantly male, as has been seen in many other studies [17,20,40-42]. Female IDUs seemed more likely to be ill than male IDUs, as has also been shown by Chitwood et al. [43]. Bisexual IDUs seemed to be at higher risk than heterosexual IDUs. Boulton et al. reported that, while homosexual men are more likely to have protected sexual contact with their male partners, bisexual men usually engage in protected sex with men and unprotected sex with female partners [44]. In our study, the odds ratio for illness in bisexuals was significant in the first model (OR: 2.11 [1.19-3.72]) and in all subsequent models, suggesting that unprotected sex may also be an explanatory factor.
Other studies have reported the link between financial strain and high morbidity [5,42]. In bivariate analyses, access to needle exchange programs has been associated with increased morbidity [45], a finding that may be explained by the attraction of needle exchange programs to IDUs at higher risk of HIV infection.
Selection and measurement biases may have gone undetected in our study. Selection biases could have resulted from the non-probabilistic nature of the design, yielding an unpredictable direction in the associations. In addition, self-reported illness and risk behaviors could have been influenced by social desirability, which would have reduced the magnitude of the associations. Nevertheless, previous studies have already confirmed the reliability of self-reported data in IDUs [46][47][48][49]. Subjects suffering from acute illnesses could have reported their risk factors with more precision than those who were feeling healthy at the time of the interview, a situation that could have led to a recall bias with an association towards the null. Confounding factors such as violence and lifelong victimization were not collected, although the episodes of illness related to violence were high for a 6-month recall period (10% of episodes). Like any cross-sectional survey, this study, along with the statistical inferences yielded by analyses, should be interpreted cautiously. As far as we know, this is the first population based study of IDUs in the city of Montreal. All previous studies have been based on clinic and social service attendees [3,8,12,36,44]. The findings of this study contribute to our knowledge of the relation between living conditions and morbidity among these survivors of the HIV epidemic.
Conclusions
Many harm reduction strategies have been implemented during the last decade [50][51][52]. Efforts have been made to help IDUs reduce risk-taking behaviors as regards safe injection practices and safe sex [53][54][55][56][57]. Integrated programs focused on harm reduction strategies in connection with primary care and drug abuse treatment have been proposed [42].
This study highlights the relevance of taking a broad perspective when studying determinants of morbidity in IDUs. Our analyses and other studies provide strong evidence that the high rate of morbidity in IDUs is due to social exclusion and extremely harsh living conditions. A better organization of primary health care would result in even greater utilization of health services unless measures are taken at the social level to improve the living conditions of IDUs, notably street-entrenched, runaway and unemployed IDUs [57]. In addition, Gunn et al. [50] have proposed meaningful solutions related to harm reduction strategies, notably improved access to the primary health care system. Health improvement programs should extend down to the social level where the IDU population lives, encompassing living arrangements, mental rehabilitation and occupational therapy. The harm reduction strategies proposed by Palepu et al. [42] should be considered as well. Health needs in IDUs are complex and should be addressed first in the community, providing integrated care tailored to individual conditions with the involvement of outreach workers, social workers and nurses who have close ties with IDUs.
|
v3-fos-license
|
2024-01-12T06:44:17.531Z
|
2024-01-10T00:00:00.000
|
266933485
|
{
"extfieldsofstudy": [
"Physics"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://academic.oup.com/mnras/article-pdf/529/2/1019/56924417/stae124.pdf",
"pdf_hash": "719ccd26caf0ad4e731b7460847f9894d3b3de80",
"pdf_src": "ArXiv",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42445",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "719ccd26caf0ad4e731b7460847f9894d3b3de80",
"year": 2024
}
|
pes2o/s2orc
|
High energy gamma-ray sources in the VVV survey - II. The AGN counterparts
We identified Active Galactic Nuclei (AGN) candidates as counterparts to unidentified gamma-ray sources (UGS) from the Fermi-LAT Fourth Source Catalogue at lower Galactic latitudes. Our methodology is based on the use of near- and mid-infrared photometric data from the VISTA Variables in the Vía Láctea (VVV) and Wide-field Infrared Survey Explorer (WISE) surveys. The AGN candidates associated with the UGS occupy very different regions from the stars and extragalactic sources in the colour space defined by the VVV and WISE infrared colours. We found 27 near-infrared AGN candidates possibly associated with 14 Fermi-LAT sources using the VVV survey. We also found 2 blazar candidates in the regions of 2 Fermi-LAT sources using WISE data. There is no match between VVV and WISE candidates. We have also examined the Ks light curves of the VVV candidates and applied the fractional variability amplitude (σ_rms) and the slope of variation in the Ks passband to characterise the near-infrared variability. This analysis shows that more than 85 per cent of the candidates have slopes in the Ks passband > 10⁻⁴ mag/day and present σ_rms values consistent with a moderate variability. This is in good agreement with typical results seen from type-1 AGN. The combination of YJHKs colours and Ks variability criteria was useful for AGN selection, including its use in identifying counterparts to Fermi γ-ray sources.
INTRODUCTION
Since its launch in June 2008, the Fermi Large Area Telescope (Atwood 2009, Fermi-LAT) has revolutionised our view of the γ-ray sky above 100 MeV. The Fermi-LAT offers a significant increase in sensitivity, improved angular resolution and nearly uniform sky coverage, making it a powerful tool for the detection and characterisation of large numbers of γ-ray sources. The Fermi Fourth Source Catalogue (Abdollahi et al. 2020, 4FGL), based on the first 8 years of data from the mission, lists 5064 sources in the energy range 50 MeV to 1 TeV. Out of these sources, 1336 (26.4%) do not have even a reliable association with sources detected at other wavelengths; we will henceforth label them as Unassociated Gamma-ray Sources (UGS). More than 3130 of the identified or associated sources are active galaxies of the blazar class, and 239 are pulsars.
The positions of γ-ray sources listed in the Fermi-LAT catalogues are reported with their associated uncertainty represented by an elliptical region. The Fermi-LAT γ-ray catalogues provide the semi-major and semi-minor axes of the ellipses together with the positional angle at the 68% and 95% levels of confidence. The principal reason for the difficulty of finding counterparts to high-energy γ-ray sources has been the large positional errors in their measured locations, a result of the limited photon statistics and angular resolution of the γ-ray observations and the bright diffuse γ-ray emission from the Milky Way (MW). Therefore, the UGS represent one of the biggest challenges in γ-ray astrophysics (e.g., Thompson 2008). The key to finding plausible counterparts to the unidentified Fermi-LAT sources is the cross-check with observations at one or more wavelengths, such as radio observations (e.g., Hovatta et al. 2014; Schinzel et al. 2015), infrared observations (e.g., Raiteri et al. 2014) and observations in the sub-millimeter range (e.g., León-Tavares et al. 2012; López-Caniego et al. 2013). Additional X-ray studies carried out with Chandra and Suzaku have also been useful, in particular when performed in the crowded region of the Galactic plane (e.g., Maeda et al. 2011; Cheung et al. 2012). Optical spectroscopic identification of Fermi sources has been addressed previously to search for counterparts (e.g., Paggi et al. 2014; Peña-Herazo et al. 2021; García-Pérez et al. 2023). In addition, the properties of the γ-ray sources can be used as a statistical set to perform a multivariate analysis. This is a classification strategy to find plausible counterparts at other wavelengths for sources that remain unassociated (e.g., Hassan et al. 2013; Doert & Errando 2014).
Active Galactic Nuclei (AGN) represent an astronomical phenomenon that emits extremely high-energy radiation, as demonstrated by Urry & Padovani (1995) and Padovani et al. (2017). Since their discovery many decades ago, research has been conducted at various frequencies, unveiling the diverse manifestations of AGN phenomena observed from radio to γ-rays. This has resulted in an extensive and captivating assortment of classifications. Among the distinct classes of AGN are type-1 and type-2 AGN and blazars, subdivided into BL Lacertae objects and Flat Spectrum Radio Quasars (FSRQ), alongside other classifications (see Stickel et al. 1991; Stocke et al. 1991). The AGN unification scheme, as proposed by Antonucci (1993), offers a comprehensive representation of AGN phenomena, including elements such as black holes, discs, torus, clouds, and jets. This model explains how orientation effects, different accretion powers, and black hole spin parameters can account for the wide array of AGN types. Furthermore, AGN typically exhibit variations in their emissions (Edelson et al. 2002; Sandrinelli et al. 2014; Husemann et al. 2022). The extent of this variability differs according to the type of AGN and is generally more pronounced, with higher amplitudes, in blazars compared to type-1 AGN (e.g., Ulrich et al. 1997; Mao & Yi 2021; Baravalle et al. 2023).
In recent years, the population of known AGN has substantially grown thanks to new surveys and catalogues (e.g., Véron-Cetty & Véron 2010; Rembold et al. 2017; do Nascimento et al. 2019). Nevertheless, the number of AGN observed at lower Galactic latitudes, obscured by dense regions belonging to our Galaxy, remains limited (e.g., Edelson & Malkan 2012; Pichel et al. 2020). Recently, Fu et al. (2021, 2022) explored the Galactic bulge regions in search for quasars (QSO) at lower latitudes. Employing machine learning techniques, they identified and confirmed 204 QSO candidates at |b| < 20° based on spectroscopic measurements. Ackermann et al. (2012) reported a significant excess of unassociated sources at |b| < 10°, where catalogues of AGN are incomplete; hence the fraction of sources associated with AGN decreases in this sky area. Extragalactic objects located behind the Milky Way are difficult to identify and detect due to the significant amount of gas, dust, and stars present at low Galactic latitudes (e.g., Kraan-Korteweg 2000; Baravalle et al. 2018, 2021). In this context, observations carried out at near-infrared wavelengths minimise the effects of interstellar extinction in these regions in comparison with optical passbands. Although the density of foreground sources is greater in the near-infrared, the reduced foreground extinction can reveal different physical processes. Studying these unknown MW regions at low Galactic latitudes, which are usually obscured at visible wavelengths, presents a challenging task. The first near-infrared survey in these regions was the Two Micron All Sky Survey (Skrutskie et al. 2006, 2MASS). Later, the ESO Public Surveys, the VISTA Variables in the Vía Láctea (Minniti et al. 2010, VVV) and its extension, the VVVX, have been mapping the Ks-passband variability of stars in the entire MW bulge and disc. The main scientific goal was to gain more insight into the inner MW's origin, structure, and evolution. The VVV survey included the acquisition of ZYJHKs images, whereas VVVX was restricted to the JHKs passbands, significantly increasing the coverage area (see Table 1 in Daza-Perilla et al. 2023). Thousands of new galaxies and galaxy associations have been discovered using the photometric data from the VVV and VVVX surveys (e.g., Amôres et al. 2012; Baravalle et al. 2019; Coldwell et al. 2014; Galdeano et al. 2021; Soto et al. 2022; Daza-Perilla et al. 2023). The VVV near-infrared galaxy catalogue (Baravalle et al. 2021, VVV NIRGC) is the final catalogue of part of the Southern Galactic disc, using colour criteria and visual inspection to identify 5554 galaxies, only 45 of which were previously known. Pichel et al. (2020) studied for the first time the active galaxies in these regions using a combination of near-infrared (NIR) and mid-infrared (MIR) data. The Wide-field Infrared Survey Explorer (Wright et al. 2010, WISE) is an ideal mission for identifying a very large number of AGN across the full sky. Additionally, Baravalle et al. (2023) reported four AGN candidates at very low Galactic latitudes (|b| < 2°) using this combination of the VVV and WISE surveys. These sources also presented variability in the Ks light curves reported in the VIVACE catalogue (Molnar et al. 2022).
The infrared (IR) emission of AGN can be of thermal and non-thermal origin. In the case of radio-loud AGN, specifically blazar subtypes, the non-thermal character of the IR radiation is produced by the synchrotron emission of relativistic electrons within the jet. Radio continuum emission is also associated with these jets. On the other hand, in radio-quiet objects such as Seyfert galaxies, most of the radiated energy is dominated by thermal emission from the accretion disc, which is formed around the central black hole (e.g., Shakura & Sunyaev 1973). The light of the accretion disc is absorbed by the "dust torus" (see Netzer 2015) and re-emitted in the infrared. The emission of the torus and accretion disc dominates the AGN spectral energy distribution (SED) at wavelengths longer than ∼1 μm up to a few tens of microns, giving the AGN distinctive red mid-IR colours (e.g., Stern et al. 2005; Richards et al. 2006; Assef et al. 2010). Therefore, IR passbands are well suited to identify AGN, as their SEDs are very different from those of stars and inactive galaxies. Chen et al. (2005) studied the colour distribution of a sample of blazars and normal galaxies using the 2MASS archival data. The main results from these observations are as follows: (1) the distribution of colours of blazars in the J-H-Ks colour-colour diagram occupies a region centred at the position (0.7; 0.7), and (2) about 30% of the blazars show NIR colours indicating a possible influence from the host galaxy. Such contamination is not present at MIR wavelengths. Using WISE magnitudes, D'Abrusco et al. (2012) discovered that blazars emitting in γ-rays were clearly distinguished from other classes of galaxies and/or AGN and/or Galactic sources. Fermi-LAT blazars inhabit different regions in the colour-colour diagrams (CCD) because they are dominated by non-thermal emission in the mid-IR. This two-dimensional region in the MIR CCD built from the [3.4], [4.6], [12] and [22] μm passbands was originally indicated as the WISE Gamma-ray Strip (D'Abrusco et al. 2012, WGS), and the method was improved in the WISE locus of gamma-ray blazars in D'Abrusco et al. (2013, 2014). Massaro & D'Abrusco (2016) also showed that the Fermi-LAT blazars are located in specific regions both in the NIR and MIR CCD, clearly separated from other extragalactic sources. Stern et al. (2012) investigated the power of WISE to identify AGN based solely on the [3.4] and [4.6] magnitudes. The selection criteria of [3.4]-[4.6] > 0.8 mag and [4.6] < 15.05 mag produced an AGN sample with a contamination of only 5%. Following this, Assef et al. (2018) presented two additional colour criteria in their AGN sample: [3.4]-[4.6] > 0.5 mag and [3.4]-[4.6] > 0.77 mag, with 90% and 75% completeness, respectively.

The main goal in this study is to identify, at lower Galactic latitudes, unidentified 4FGL sources with NIR and MIR counterparts using the VVV and WISE surveys, respectively. The paper is organised as follows. Section 2 presents the data, which include the different samples of high energy sources together with the NIR and MIR photometry used in this study. The applied methodology to detect the counterparts is also discussed, including colour-magnitude and colour-colour diagrams using the VVV and WISE surveys, and the VVV Ks light curves of the near-IR sources and the variability analysis. Section 3 shows the diagrams for the Fermi-LAT source regions with VVV candidates and the analysis of the light curves using the near-IR data. Diagrams with the WISE candidates using mid-IR data are also shown. Section 4 presents a summary of the main results.
The samples of high energy gamma-ray sources
At lower Galactic latitudes, we have found 221 4FGL sources in the bulge and disc regions covered by the VVV survey without any previous source associations at any wavelength. Figure 1 shows the distributions of interstellar Ks extinctions (A_Ks) in magnitudes and of the uncertainties in the positions of the Fermi-LAT sources, expressed as the semi-major axis (a) in arcmin of the error ellipse at the 95% confidence level, for the 221 UGS. The median values are A_Ks = 0.74 ± 3.79 mag and a = 4.25 ± 3.04 arcmin.
According to the distributions of the interstellar extinctions and of the semi-major axes of the Fermi-LAT uncertainties, we chose to analyse sources in regions with lower interstellar extinctions (A_Ks < 1.2 mag). Taking this into account, our sample comprises 78 UGS. We defined three subsamples: the A subsample, which contains 13 UGS with a < 2.5 arcmin; the B subsample, which contains 12 sources with 2.5 ⩽ a < 3.0 arcmin; and the C subsample, which contains 53 sources with 3.0 ⩽ a < 5.0 arcmin. Tables 1, 2 and 3 show the positions of the UGS for the three subsamples, respectively.
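As a minimal illustration of these cuts, the sketch below assigns the A/B/C labels from the extinction and positional-uncertainty thresholds just described; the column names of the source table are assumptions, not the catalogue's actual keywords.

```python
import pandas as pd

def assign_subsample(row: pd.Series, aks_max: float = 1.2):
    """Assign the A/B/C subsample label from the cuts described in the text:
    A_Ks < 1.2 mag, with a the semi-major axis of the 95% error ellipse in arcmin."""
    if row["A_Ks"] >= aks_max or row["a_95"] >= 5.0:
        return None              # outside the analysed sample of 78 UGS
    if row["a_95"] < 2.5:
        return "A"
    if row["a_95"] < 3.0:
        return "B"
    return "C"                   # 3.0 <= a < 5.0 arcmin

# ugs is assumed to hold one row per 4FGL source with columns "A_Ks" (mag) and
# "a_95" (arcmin):
# ugs["subsample"] = ugs.apply(assign_subsample, axis=1)
```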
Figure 2 shows the distribution in Galactic coordinates of the 78 UGS over the region covered by the VVV survey. The samples studied are highlighted as yellow squares (A subsample), green diamonds (B subsample) and orange stars (C subsample). The coloured UGS are also overplotted on the spatial distribution of the A_V interstellar extinction derived from the extinction map of Schlafly & Finkbeiner (2011). The contours of the different levels correspond to 5, 10, 15, 20 and 25 mag. There are 14 UGS located in the Southern disc and 64 in the bulge. The disc (bulge) UGS are 6, 3 and 5 (7, 9 and 48) in the A, B and C subsamples, respectively.
Near- and mid-IR photometry
Our main goal is to identify the selected UGS with near- and mid-IR photometric counterparts using the VVV and WISE photometry, respectively. Pichel et al. (2020) analysed the four blazars located in the VVV region that were identified in the Multi-frequency Catalogue of Blazars (Massaro et al. 2015) as counterparts to 3FGL sources. They defined a specific region with a radius twice the positional uncertainties associated with the high-energy sources and performed a search for all infrared sources within this area. The photometry was conducted in the five VVV passbands Z, Y, J, H and Ks using the combination of SExtractor (Source-Extractor) + PSFEx (PSF Extractor) (Bertin 2011) to assess all the sources in the region, as described in Baravalle et al. (2018). The blazars were characterised by their near- and mid-IR properties from the VVV and WISE surveys, respectively, showing different colours in the infrared diagrams. The photometric results of the blazar 5BZQJ1802-3940 (Pichel et al. 2020) obtained with SExtractor+PSFEx were also compared in Donoso (2020) with the data product provided by the Cambridge Astronomical Survey Unit (CASU; Emerson et al. 2006). Both approaches produce comparable results and the studied blazar occupied a similar position in the colour-colour diagrams.
In this work, we analysed all the VVV sources with CASU photometry lying within the positional uncertainty region of the UGS. For this purpose, for each 4FGL source, we defined a search area centred on the UGS, with radius defined by the semi-major axis of the ellipse (values reported in Tables 1-3). We used the positions of the NIR sources, the object classification and the aperture magnitudes within an aperture of radius 3 pixels, which corresponds to ∼1 arcsec (Minniti et al. 2010; Saito et al. 2010). In this way all the Fermi-LAT sources were surveyed in a homogeneous way.
Near-IR: VVV survey
In this section, we use NIR magnitudes and colours of all the VVV objects in the regions of the 4FGL sources. The magnitudes were corrected for interstellar extinction along the line of sight, using the dust maps of Schlafly & Finkbeiner (2011) and the VVV NIR relative extinction coefficients of Catelan et al. (2011). Then, we obtained the colours for all the sources. Baravalle et al. (2018) defined extragalactic sources using the colour criteria 0.5 < (J-Ks) < 2.0 mag; 0.0 < (J-H) < 1.0 mag; and 0.0 < (H-Ks) < 2.0 mag, with the colour constraint (J-H) + 0.9 (H-Ks) > 0.44 mag to minimise false detections. The main result of that work is the VVV NIRGC, the catalogue of galaxies in part of the Southern Galactic disc. Massaro & D'Abrusco (2016) examined the regions in the colour-colour diagrams using the J, H and Ks magnitudes from the 2MASS catalogue, specifically those occupied by Fermi-LAT blazars. The infrared colours of the γ-ray blazars cover a distinct region, clearly separated from the other extragalactic sources. Also, Cioni et al. (2013) performed an AGN selection using the VISTA Magellanic Survey (Cioni et al. 2011, VMC); in their Figure 2, they divided the JHKs colour-colour space into regions occupied by different classes of objects. Based on the results of Baravalle et al. (2023), Massaro & D'Abrusco (2016) and Cioni et al. (2011), we improved the colour cuts and selected sources that simultaneously satisfy: 0.5 < (J-Ks) < 2.5 mag; 0.4 < (J-H) < 2.0 mag; 0.5 < (H-Ks) < 2.0 mag and 0.2 < (Y-J) < 2.0 mag. This selection defines the possible candidates to be related to the UGS. In addition to the colour selection, a visual inspection of the candidates in the five passbands of the survey was performed. In cases of doubt, we created false-colour red-green-blue (RGB) images using the Ks, H and J passbands. Figure 3 shows some examples of these sources as 1′ × 1′ VVV colour-composite images. We eliminated objects with strong contamination by bright nearby stars and those sources with faint Ks magnitudes.
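The selection just described (extinction correction followed by the colour cuts) reduces to a few lines of table manipulation. The sketch below assumes a pandas table with Y, J, H and Ks aperture magnitudes and an A_Ks column; the relative extinction coefficients must be supplied by the user (e.g., the Catelan et al. 2011 values), since they are not reproduced here.

```python
import pandas as pd

def deredden(cat: pd.DataFrame, ext_coeff: dict, a_ks_col: str = "A_Ks") -> pd.DataFrame:
    """Correct the aperture magnitudes for interstellar extinction along the line
    of sight.  ext_coeff maps each band to A_band / A_Ks (user-supplied values);
    A_Ks comes from the line-of-sight extinction map."""
    out = cat.copy()
    for band, coeff in ext_coeff.items():
        out[band + "0"] = out[band] - coeff * out[a_ks_col]
    return out

def vvv_agn_colour_cuts(cat: pd.DataFrame) -> pd.DataFrame:
    """Apply the near-IR colour cuts adopted in the text for VVV AGN candidates."""
    jk = cat["J0"] - cat["Ks0"]
    jh = cat["J0"] - cat["H0"]
    hk = cat["H0"] - cat["Ks0"]
    yj = cat["Y0"] - cat["J0"]
    keep = ((jk > 0.5) & (jk < 2.5) & (jh > 0.4) & (jh < 2.0) &
            (hk > 0.5) & (hk < 2.0) & (yj > 0.2) & (yj < 2.0))
    return cat[keep]

# candidates = vvv_agn_colour_cuts(
#     deredden(vvv_sources, ext_coeff={"Y": ..., "J": ..., "H": ..., "Ks": 1.0}))
```

Candidates passing these cuts would still require the visual inspection step described above.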
Mid-IR: WISE
We applied the methodology used in Pichel et al. (2020) and D'Abrusco et al. (2019) to all the Fermi-LAT sources. For the analysis, unless stated otherwise, we considered only WISE sources detected with a minimum signal-to-noise ratio of 7 in at least one passband. Using the WGS and the WISE locus method described in D'Abrusco et al. (2019), we applied the criterion that blazars lie in a distinctive region in the 3-dimensional MIR CCD using photometry at [3.4], [4.6], [12] and [22] μm. The identification of WISE blazar candidates involved a selection process based on 2-dimensional projections within the CCD using the WISE locus method, as described previously. This technique may offer multiple possibilities depending on the number of identified candidates. When there is just one candidate, it is assumed to be directly associated with the Fermi-LAT source. Nevertheless, in cases with more candidates it is difficult to determine which one is associated, making further studies essential. In addition, to improve our selection of WISE candidates, we included AGN candidates using the criteria outlined in the studies by Stern et al. (2012) and Assef et al. (2018). All identified WISE blazar candidates are also considered to be WISE AGN candidates, so we refer to all of them collectively as WISE candidates.
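A minimal sketch of the simple colour criteria quoted in this subsection is given below. It covers only the Stern et al. (2012) and Assef et al. (2018) cuts as stated in the text, not the full multi-dimensional WGS/WISE-locus selection of D'Abrusco et al., and the column names of the WISE table are assumed.

```python
import pandas as pd

def wise_agn_flags(cat: pd.DataFrame, snr_min: float = 7.0) -> pd.DataFrame:
    """Flag WISE AGN candidates with the colour cuts quoted in the text
    (Stern et al. 2012; Assef et al. 2018); W1 and W2 are the [3.4] and [4.6]
    magnitudes.  Sources must reach S/N >= snr_min in at least one passband."""
    out = cat.copy()
    snr_ok = out[["snr_W1", "snr_W2", "snr_W3", "snr_W4"]].max(axis=1) >= snr_min
    w1_w2 = out["W1"] - out["W2"]
    out["stern12_agn"] = snr_ok & (w1_w2 > 0.8) & (out["W2"] < 15.05)
    out["assef18_90pc"] = snr_ok & (w1_w2 > 0.5)    # ~90 per cent completeness cut
    out["assef18_75pc"] = snr_ok & (w1_w2 > 0.77)   # ~75 per cent completeness cut
    return out
```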
Variability analysis with the VVV photometry
Here, we performed the variability analysis for the objects associated with the Fermi-LAT sources. We obtained the Ks-passband light curves using the second version of the VVV Infrared Astrometric Catalogue (VIRAC2; see Smith et al. 2018 and Smith et al., in prep.), which provides PSF-based photometry. We selected the measurements with photometric flags equal to 0 (see the catalogue) in order to obtain reliable light curves. Their coordinates were cross-matched with VIRAC2 allowing differences in their positions of up to 1 arcsec. Twenty-seven good light curves have a five-parameter astrometric solution (a de facto 10-epoch selection), are not flagged as probable duplicates, are detected in more than 20% of the observations that cover the source, and have a unit weight error of less than 1.8. On the contrary, the rejected objects did not meet the above criteria because they are highly contaminated by nearby stars or are too faint to have reliable magnitudes.
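The 1-arcsec positional cross-match can be expressed compactly with astropy. The sketch below is a generic nearest-neighbour match and assumes plain arrays of RA/Dec in degrees rather than the actual VIRAC2 column layout.

```python
import numpy as np
from astropy import units as u
from astropy.coordinates import SkyCoord

def crossmatch_within(ra1, dec1, ra2, dec2, radius_arcsec=1.0):
    """Match each candidate (ra1, dec1) to its nearest neighbour in a reference
    catalogue (ra2, dec2) and keep pairs separated by less than radius_arcsec.
    Inputs are arrays of coordinates in degrees; returns matched index pairs."""
    cand = SkyCoord(ra=np.asarray(ra1) * u.deg, dec=np.asarray(dec1) * u.deg)
    ref = SkyCoord(ra=np.asarray(ra2) * u.deg, dec=np.asarray(dec2) * u.deg)
    idx, sep2d, _ = cand.match_to_catalog_sky(ref)
    keep = sep2d < radius_arcsec * u.arcsec
    return np.nonzero(keep)[0], idx[keep]
```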
In order to investigate the variability of these objects, we applied the methodology used in Pichel et al. (2020). We examined the fractional variability amplitude, σ_rms (Nandra et al. 1997; Edelson et al. 2002; Sandrinelli et al. 2014; Pichel et al. 2020), defined as

$$\sigma_{\mathrm{rms}} = \frac{1}{\langle F \rangle} \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left[ (F_i - \langle F \rangle)^2 - \sigma_i^2 \right]},$$

where N represents the number of flux values F_i with their uncertainties σ_i, and ⟨F⟩ denotes the average flux. This parameter represents the excess variability that cannot be solely attributed to flux errors. Also, we investigated the slope of the light curves, taking into consideration the result of Cioni et al. (2013) that more than 75% of the QSO in the VMC survey exhibit a slope of variation in the Ks passband larger than 10⁻⁴ mag/day. They defined the slope of the overall Ks variation in light curves that were sampled over a range of 300-600 days, 40-80 days, or shorter. In this analysis we followed the same procedure as Baravalle et al. (2023): we performed a linear fit of the Ks light curves, considering a range of days defined by the highest and lowest variations observed in the light curve. In all light curves, the range of days considered for this analysis varies from 1200 to ∼2300 days (Baravalle et al. 2023).
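The two variability statistics are straightforward to compute. The sketch below follows the definition of σ_rms given above and obtains the overall slope from a simple linear least-squares fit; the light-curve arrays are assumed to be already cleaned of flagged epochs.

```python
import numpy as np

def fractional_variability(flux, flux_err):
    """Fractional variability amplitude sigma_rms (Nandra et al. 1997): the
    excess variance beyond the measurement errors, normalised by the mean flux."""
    flux = np.asarray(flux, dtype=float)
    flux_err = np.asarray(flux_err, dtype=float)
    mean_flux = flux.mean()
    excess = np.mean((flux - mean_flux) ** 2 - flux_err ** 2)
    return np.sqrt(max(excess, 0.0)) / mean_flux   # clip noise-dominated curves at zero

def lightcurve_slope(mjd, ks_mag):
    """Overall slope of the Ks light curve in mag/day from a linear
    least-squares fit over the supplied epochs."""
    slope, _intercept = np.polyfit(np.asarray(mjd, dtype=float),
                                   np.asarray(ks_mag, dtype=float), deg=1)
    return slope
```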
RESULTS
On the basis of the methodology detailed above, the VVV ZYJHKs magnitudes, colours and Ks light curves of the AGN candidates are presented here. As explained in subsection 2.2.1, we have constructed colour-magnitude and colour-colour diagrams for each 4FGL source. For those 4FGL sources with candidate counterparts, the (J-Ks)-Ks colour-magnitude diagram and the (H-Ks)-(J-H) and (Y-J)-(J-Ks) colour-colour diagrams are shown in Figures 4 to 17. There, grey-scale contours correspond to the density of all the CASU objects found in the 4FGL regions, with size defined by the positional uncertainty of the Fermi-LAT source, including stellar and extragalactic sources. The regions preferentially populated by AGN candidates are: 0.5 < (J-Ks) < 2.5 mag; 0.5 < (H-Ks) < 2.0 mag; 0.4 < (J-H) < 2.0 mag and 0.2 < (Y-J) < 2.0 mag. The candidates were highlighted and represented by red circles for extended sources and by blue circles for objects with point-like morphology. Those AGN candidates that present variability are indicated by triangles, the colour depending on the origin of the sources: red for galaxy-like sources and blue for stellar-like objects. Full triangles are objects with a slope in the Ks passband higher than 10⁻⁴ mag/day and empty triangles have slopes lower than this value. Also, the regions delimited by the lines defined by Cioni et al. (2013) are shown.
After careful visual inspection, we eliminated faint and contaminated sources, leaving only those that were considered VVV candidates. Thus, 7 Fermi-LAT sources have only one VVV candidate: A13, B6, B12, C40, C46, C47 and C51. Some UGS have more than one VVV candidate: the Fermi-LAT source C53 presents 5 candidates; A9, 4 candidates; A12, 3 candidates; and C44, C48, C50 and C52, 2 candidates each. These VVV candidates are not located in the Southern disc and, therefore, there are no sources in common with the VVV NIRGC.
In Figure 19, we present the differential Ks light curves of the VVV sources. These curves represent the Ks magnitudes with the median subtracted, sampled over a period covering more than 2500 days. We note that the overall shape of the light curves is irregular, lacking any discernible periodic pattern. In some cases, we observe prominent fluctuations in brightness that resemble peaks, exhibiting statistical significance well above the value of the associated uncertainties. Table 4 presents the main results of the Ks variability of these sources, showing the mean magnitude, σ_rms and the slope of the linear fits with the range of days used. Some comments from the visual inspection of the objects are also included. Most of them are early-type galaxies or the bulges of galaxies, because the near-infrared is sensitive to detecting the oldest stellar population in a galaxy. We did not include in the analysis those objects with strong crowding contamination or faint magnitudes, as mentioned above. In general, most of the studied objects exhibit moderate variability, characterised by σ_rms values ranging from 12.5 to 32.1. These results are in agreement with previous studies of type-1 AGN, such as those by Nandra et al. (1997); Edelson et al. (2002); Baravalle et al. (2023). However, these values are lower than those reported for blazars (e.g., Sandrinelli et al. 2014; Pichel et al. 2020). Since type-1 AGN typically present lower variability amplitudes than blazars (e.g., Ulrich et al. 1997; Mao & Yi 2021), our results suggest that these objects are potential type-1 AGN, such as quasars or Seyfert 1 galaxies. Moreover, the observed light curve slopes are ⩾ 10⁻⁴ mag/day, comfortably lying within the limit established by Cioni et al. (2013) for quasars. On the other hand, there are four objects that present negligible variability, with very low values of σ_rms. These objects are VVV-J181300.69-314505.6, VVV-J173934.82-283746.5, VVV-J180027.63-291007.4 and VVV-J181803.69-333215.7, in the regions of the Fermi-LAT sources A12, B6, C40 and C50, respectively (see Fig. 19 and Table 4). As expected, these objects also exhibit significantly lower slope values, typically below 10⁻⁵ mag/day. Although luminosity variability is a common feature of active galactic nuclei, the absence of variability does not necessarily rule out the possibility of an object being an AGN. It is important to note that not all AGN exhibit the same degree of variability, and certain AGN may display very low or nearly negligible levels of variability (e.g., Ilić et al. 2017; Li et al. 2022; Pennock et al. 2022).
Beyond this, more than 85% of the objects studied here show moderate variability and, as mentioned above, these results suggest that these sources are type-1 AGN candidates. It has to be noted that this analysis is based only on photometric data. A spectroscopic study is necessary in order to investigate the nature and type of the AGN.
We also searched for WISE candidates coincident with the positions of the VVV candidates found before. We found no match between the VVV and WISE candidates, with the exception of the source VVV-J173934.82-283746.5 in the Fermi-LAT B6 region. This object has a source at an angular distance of 0.64 arcsec classified as the OH/IR star 359.54+01.29 (Sevenster et al. 1997).
The WISE results are not as clear as those in Pichel et al. (2020) and Baravalle et al. (2023). All 4 sources explored in Pichel et al. (2020) had VVV candidate counterparts, but only two of them had WISE ones. In Baravalle et al. (2023), the 4 active galaxies had VVV and WISE counterparts. The main difference between these two studies and the present one is that their VVV candidates were brighter in the Ks passband. Here, all the VVV candidates are in the range of 14.5 to 18 mag, with the exception of the candidate in the Fermi-LAT B12 region. Another difference is the high interstellar extinction towards the fields studied here and, in some cases, strong stellar contamination. In the mid-IR, the results here are noisier in general. Based on these results, we present candidates in the Fermi-LAT source regions both in the NIR and MIR using the VVV and WISE surveys, respectively. However, inside the Fermi-LAT source A8 appears a WISE source (J173612.07-342204.7) that satisfies all the criteria to be a blazar candidate using the WGS method. This region has a high interstellar extinction (A_Ks = 0.9297 mag) and the NIR CMD shows bright magnitudes without candidates. A second WISE blazar candidate lies in the region of the Fermi-LAT source B10; the mid-IR diagrams of both regions are shown in Figure 18, together with the QSO/AGN region defined by Jarrett et al. (2011). These WISE candidates do not have VVV counterparts; thus, no further analysis or cross-match can be done in this paper. Further analysis with IR spectroscopy is needed in order to establish the nature of the WISE sources.
We might note that most of the VVV candidates are found in the B region of the colour-colour diagram defined by Cioni et al. (2013).
Our sample of VVV candidates is centred at the position (0.6; 0.7) in the (J-H) versus (H-Ks) CCD, following Chen et al. (2005). For the 27 candidates listed in Table 4, we then searched for the closest object within a circle of 30 arcsec radius using the SIMBAD database and found no catalogued source, with the exception of the object in the region of the Fermi-LAT source B6 mentioned above. There have been no previous photometric or spectroscopic studies performed in these regions. Lefaucheur & Pita (2017) obtained a sample of 595 blazar candidates from the unassociated sources within the 3FGL catalogue (Acero & Ackermann 2015). They proceeded to train multivariate classifiers on samples derived from the Fermi-LAT catalogue, carefully selecting discriminant parameters. Within their blazar candidates, there are 30 objects in the region of the VVV survey, of which A5, B3, B4, C10, C27 and C49 are in our subsamples. They classified the Fermi-LAT source A5 as a BL Lac; however, there are no VVV candidates in this region because of the high interstellar extinction (A_Ks = 0.9213 mag). The Fermi-LAT source C10 was also classified as a BL Lac, and the near-IR CMD and CCD show that there are a point-like and a galaxy-like object in the region that we had defined as possible VVV candidates. This is a region of strong crowding contamination and the Ks light curves of these two objects were noisy and did not satisfy our criteria. For these reasons there are no other VVV or WISE candidates in common with these authors.
All the Fermi-LAT sources in the A subsample with AGN candidates are found in regions with smaller interstellar extinctions (A_Ks < 0.15 mag). In the B subsample, there are only two Fermi-LAT sources associated with VVV AGN candidates: B12 has an interstellar extinction lower than 0.10 mag, while B6 is in a region with high interstellar extinction. Hence, an interesting feature of the colour-magnitude diagram of B6 is that the Ks magnitudes are brighter than in the other diagrams (Figure 7). In the C subsample most of the cases have A_Ks < 0.10 mag, with the exception of C40, C46 and C47, with values from approximately 0.17 to 0.28 mag. For the other Fermi-LAT sources, lying in regions of higher interstellar extinction, we did not find any NIR or MIR candidates.
Considering that some UGS have multiple candidates, it is crucial to establish criteria for prioritising the selection of objects for follow-up observations. This selection process is based on additional criteria that include magnitude, distance to the Fermi source, variability, interstellar extinction and visual inspection. As a result, the priority candidate for the Fermi-LAT source A9 is VVV-J175851.46-411016.0, which is the brightest, closest and most variable source and has the lowest interstellar extinction. For the Fermi-LAT source A12, the priority candidate is VVV-J181258.71-314346.7, using the same criteria mentioned above. Within the C subsample, the priority candidates are VVV-J180826.32-352214.7 for C44, VVV-J181440.95-341915.4 for C48, VVV-J181751.39-333117.3 for C50, VVV-J182052.11-322058.3 for C52 and VVV-J182807.31-325038.0 for C53.
SUMMARY
In this work we present criteria for selecting AGN candidates as counterparts to Fermi-LAT sources, based on NIR and MIR photometry from the VVV and WISE surveys. We analysed a sample of 78 high energy γ-ray sources located at low Galactic latitudes, without any previous source associations at any wavelength and lying in the footprint of the VVV survey. To start with, we divided the sample into three subsamples, considering the interstellar extinctions and the semi-major axes of the Fermi-LAT positional uncertainties.
We analysed photometric data from the VVV and WISE surveys, following the methodology reported by Pichel et al. (2020) to search for blazars and by Baravalle et al. (2023) to identify AGN candidates. The following colour cuts were used to identify VVV AGN candidates associated with the UGS sample in the near-IR data: 0.5 < (J-Ks) < 2.5 mag; 0.5 < (H-Ks) < 2.0 mag; 0.4 < (J-H) < 2.0 mag and 0.2 < (Y-J) < 2.0 mag. These sources are located in specific regions of the NIR CCD, clearly separated from stars and other extragalactic sources. Upon visual inspection, we removed the contaminated sources, such as those with nearby bright stars or stellar associations.
We then selected 27 VVV AGN candidates within the positional uncertainty ellipses of 14 Fermi-LAT sources using the VVV survey. These objects satisfy the colour cuts and also visually look like galaxies or have point-like morphology. We have also explored the light curves of all sources reported in Table 4 and applied the fractional variability amplitude and the slope of variation in the Ks passband. In general, most of the candidates show variability with σ_rms > 12 and slopes in agreement with the limits defined by Cioni et al. (2013). These results suggest the presence of type-1 AGN. However, there are four objects with low variability (σ_rms < 8.0) and smaller slopes; these cannot be ruled out as AGN either. We also found 2 blazar candidates in the regions of 2 Fermi-LAT sources using WISE data. There is no match between the VVV and WISE candidates.
The combination of YJHKs colours and Ks variability criteria has been useful for AGN selection, including its use in identifying counterparts to Fermi-LAT γ-ray sources. Finally, we aim to perform NIR spectroscopic observations to confirm the extragalactic nature of the AGN candidates reported here.
Figure 1 .
Figure 1. Summary of the interstellar extinctions in the Ks passband and of the uncertainties in the positions of the 4FGL sources in the VVV region.
Figure 2 .
Figure 2. The distribution of the 78 UGS in the VVV region using different symbols for the A, B and C subsamples. The A_V iso-contours at 5, 10, 15, 20, 25 mag derived from the extinction maps of Schlafly & Finkbeiner (2011) are superposed.
Figure 3 .
Figure 3. 1′ × 1′ VVV colour-composite images of some cases belonging to our sample of sources. The orientation is shown in the bottom-right panel.
Figure 4 .
Figure 4. CMD and CCD for the field of Fermi-LAT source A9. Left, central and right panels report the (J-Ks)-Ks CMD, (H-Ks)-(J-H) and (Y-J)-(J-Ks) CCD using near-IR data from the VVV survey, respectively. The targets in red are those showing extended morphology in the images. The objects marked with circles do not have reliable variability curves; thus, the variability analysis was not performed on these targets. The objects indicated by filled triangles are those for which the variability analysis demonstrates their nature as variable sources. Grey lines defined by Cioni et al. (2013) are drawn on the YJKs CCD and labels of the regions defined by those authors are also indicated. Grey-scale contours correspond to the density of the NIR objects lying within the positional uncertainty region of the UGS.
Figure 5 .
Figure 5. As Fig. 4, but for Fermi source A12. The empty blue triangle represents an object with low or negligible variability and the blue colour indicates a point-like appearance.
Figure 7 .
Figure 7. As Fig. 4, but for Fermi source B6. The empty blue triangle represents the candidate with low or negligible variability and the blue colour indicates a point-like morphology.
Figure 8 .
Figure 8. As Fig. 4, but for Fermi source B12. The candidate with point-like morphology is indicated by a blue circle.
Figure 9 .
Figure 9. As Fig. 4, but for Fermi source C40. The empty blue triangle represents an object with low or negligible variability and the blue colour indicates a point-like appearance.
Figure 14 .
Figure 14. As Fig. 4, but for Fermi source C50. The empty red triangle represents an object with low or negligible variability and the red colour indicates an extended appearance.
Figure 18 .
Figure 18. Mid-IR colour-colour diagrams for the Fermi-LAT sources A8 (top) and B10 (bottom) using WISE data (black dots). The loci of the two blazar classes, BZB (BL Lac) and BZQ (FSRQ), are shown as dashed and dotted black lines, respectively. The dotted and dashed red horizontal lines represent the limits for AGN from Stern et al. (2012) and Assef et al. (2018), respectively. The solid red box denotes the defined region of QSO/AGN from Jarrett et al. (2011).
Figure 19 .
Figure 19. Ks differential light curves of the VVV sources for the A, B and C subsamples.

Table 1.
Fermi-LAT sources of our sample with low positional uncertainties (the A subsample). Column (1) lists the internal identification used in this work; columns (2) to (5), the 4FGL identification, the J2000 coordinates and the semi-major axis of the Fermi-LAT error ellipse, a, at the 95% confidence level in arcmin taken from the 4FGL; and columns (6) and (7), the VVV tile identification and the interstellar extinction in the Ks passband at the source position, respectively.
Table 2 .
Fermi-LAT sources of our sample with intermediate positional uncertainties (the B subsample). The column description is the same as in Table 1.
Table 3 .
Fermi-LAT sources of our sample with large positional uncertainties (the C subsample). The column description is the same as in Table 1.
|
v3-fos-license
|
2016-05-12T22:15:10.714Z
|
2015-04-01T00:00:00.000
|
9341531
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1371/journal.pgen.1004969",
"pdf_hash": "5278278d262893a2ead3ba738c28297c44e2acfe",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42448",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "d2773bfceabac5c4b8ac853287c0969f41e7e8ca",
"year": 2015
}
|
pes2o/s2orc
|
Simultaneous Discovery, Estimation and Prediction Analysis of Complex Traits Using a Bayesian Mixture Model
Gene discovery, estimation of heritability captured by SNP arrays, inference on genetic architecture and prediction analyses of complex traits are usually performed using different statistical models and methods, leading to inefficiency and loss of power. Here we use a Bayesian mixture model that simultaneously allows variant discovery, estimation of genetic variance explained by all variants and prediction of unobserved phenotypes in new samples. We apply the method to simulated data of quantitative traits and Wellcome Trust Case Control Consortium (WTCCC) data on disease and show that it provides accurate estimates of SNP-based heritability, produces unbiased estimators of risk in new samples, and that it can estimate genetic architecture by partitioning variation across hundreds to thousands of SNPs. We estimated that, depending on the trait, 2,633 to 9,411 SNPs explain all of the SNP-based heritability in the WTCCC diseases. The majority of those SNPs (>96%) had small effects, confirming a substantial polygenic component to common diseases. The proportion of the SNP-based variance explained by large effects (each SNP explaining 1% of the variance) varied markedly between diseases, ranging from almost zero for bipolar disorder to 72% for type 1 diabetes. Prediction analyses demonstrate that for diseases with major loci, such as type 1 diabetes and rheumatoid arthritis, Bayesian methods outperform profile scoring or mixed model approaches.
Introduction
Genome wide association studies (GWAS) have been used for three different purposes-to map genetic variants causing variation in a trait, to estimate the genetic variance explained by all the single nucleotide polymorphisms (SNPs) that have been genotyped, and to predict the genetic value or future phenotype of individuals. These analyses are usually performed using different statistical models and methods. To map causal variants usually the SNPs are analyzed one at a time, consequently the failure to account for the effects of other SNPs increases the error variance and thus decreases the power to detect true associations [1,2]. The effects of the SNPs are treated as fixed effects and, to account for the multiple testing, a stringent p-value is used, resulting in many false negatives but typically over-estimating the effects of SNPs declared significant [3]. For most traits the significantly associated SNPs only explain a fraction of the heritability, and thus have low predictive power, even when considered in aggregate [4].
To estimate the variance explained by all the SNPs together, all genotyped or imputed SNPs can be included in the model simultaneously with their effects treated as random variables all drawn from a normal distribution with zero mean and constant variance. This gives an unbiased estimate of the variance explained, but all the estimated SNP effects are non-zero [5].
The most accurate method to predict genetic value or phenotype based on the SNP genotypes is to fit all SNPs simultaneously treating the SNP effects as drawn from a prior distribution that matches the true distribution of SNP effects as closely as possible [4,6]. We do not know the true distribution of effect sizes but a mixture of normal distributions can approximate a wide variety of distributions by varying the mixing proportions [7]. Erbe et al. [8] used this prior and included one component of the mixture with zero variance. A similar model was proposed by Zhou et al. [9] but with a mixture of two normal distributions, one with a small variance and one with a larger variance.
The models used for prediction can also be used to map variants associated with phenotype and to estimate the total variance explained by the SNPs. Because they fit all SNPs simultaneously and account for LD between SNPs, they should have greater power to detect associations, produce fewer false negatives and give unbiased estimates of the larger SNP effects. They can also provide information about the genetic architecture of the trait from the hyper-parameters of the distribution of SNP effects.
Here we use a Bayesian mixture model (called BayesR [8]) to dissect genetic variation for disease in human populations and to construct more powerful risk predictors. We show how this method can shed light on the genetic architecture underlying complex diseases as well as demonstrating its ability to map SNPs associated with disease and estimate the genetic variance explained by the SNPs collectively. The approach was evaluated on simulated and real data of seven case-control traits from the Wellcome Trust Case Control Consortium. We assessed the power to correctly identify causal and associated variants, the accuracy of estimates of SNP-based heritability and the accuracy of predicting future outcomes. Results from BayesR are compared with a traditional single-SNP GWAS analysis, a linear mixed-effects modeling approach [5,10-12] and a Bayesian sparse linear mixed model [9].
Hierarchical Bayesian Mixture Model (BayesR)
In most GWAS studies the number of markers is very large, with p >> n. This requires some kind of variable selection, either by discarding unimportant predictors or by shrinking their effects to zero. We used a Bayesian mixture model and a priori assumed a mixture of four zero-mean normal distributions for the SNP effects (β), where the relative variance of each mixture component is fixed [8]:

$$p(\beta_j \mid \pi, \sigma^2_g) = \pi_1 N(0,\ 0 \times \sigma^2_g) + \pi_2 N(0,\ 10^{-4}\,\sigma^2_g) + \pi_3 N(0,\ 10^{-3}\,\sigma^2_g) + \pi_4 N(0,\ 10^{-2}\,\sigma^2_g).$$

Here, π are the mixture proportions, which are constrained to sum to unity, and σ²_g is the additive genetic variance explained by SNPs. Sparseness is included in the model by setting the effect and variance of the first mixture component to zero. Instead of fixing σ²_g at a pre-specified value [8], we estimate a hyper-parameter for the genetic variance from the data. We compare BayesR with traditional single-SNP GWAS analyses [13], a linear mixed-effects modeling approach (LMM) [5,10-12] and a Bayesian sparse linear mixed model (BSLMM) [9,14].
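To make the prior concrete, the following sketch draws SNP effects from this four-component mixture for given mixing proportions π and genetic variance σ²_g. It illustrates the prior only (not the Gibbs sampler BayesR uses to estimate π and σ²_g from the data), and the example values of π and σ²_g are arbitrary.

```python
import numpy as np

def draw_bayesr_effects(n_snps, pi, sigma2_g, seed=None):
    """Draw SNP effects from the BayesR prior: a four-component normal mixture
    with relative variances (0, 1e-4, 1e-3, 1e-2) x sigma2_g and mixing
    proportions pi (summing to one).  Component 0 yields exactly zero effects."""
    rng = np.random.default_rng(seed)
    rel_var = np.array([0.0, 1e-4, 1e-3, 1e-2])
    component = rng.choice(4, size=n_snps, p=pi)      # mixture membership per SNP
    sd = np.sqrt(rel_var[component] * sigma2_g)
    beta = rng.normal(0.0, sd)
    return beta, component

# Example with arbitrary values: a mostly-null architecture for 300,000 SNPs.
# beta, comp = draw_bayesr_effects(300_000, pi=[0.99, 0.009, 0.0009, 0.0001], sigma2_g=0.5)
```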
Results from Simulated Data using Real Genotypes
We used real genotype data of 287,854 SNPs measured on 3,924 individuals to simulate quantitative phenotypes with heritabilities equal to 0.2, 0.5, and 0.8. Causal effects were drawn from three groups of effect sizes, the first containing 10 SNPs with moderate effects, the second containing 310 SNPs with smaller effect, and a large group of 2,680 SNPs representing a polygenic component (S1 Fig.), where the definitions of moderate, small and polygenic effect size match those of the prior assumptions of BayesR. Note that the contribution of each mixture to heritability is not known a priori (S2 Fig.).
Identifying associated SNPs. Comparisons between methods are assessed on their ability to identify genomic regions of 250kb containing causal SNPs. This was done because the multi-SNP methods tend to split a QTL effect, even if large, across SNPs in LD with the QTL. Moreover, it may be improper to declare a non-causative SNP in LD with the causal variant a false positive. However, we loosely use the term causal variant for convenience. Fig. 1 shows that only the segments harboring the largest SNPs were accurately identified at a meaningful false positive rate. The ability to accurately locate causal variants decreased with decreasing effect size of the SNP. The power to map accurately the 2,680 polygenic SNPs was very low. BayesR yielded more true positive regions than the other methods across the three heritabilities. Qualitatively similar results were obtained using shorter and longer genome regions (S3 Fig.). Both Bayesian approaches outperformed single-SNP analysis at higher heritability (h 2 = 0.5 and 0.8). A likely explanation for the gain of the multi-SNP Bayesian methods is an increase in power to detect subsequent causal SNPs after the strongest associations have been accounted for.
SNP-based heritability. In general all methods gave unbiased estimates of the true heritability values. The mean estimates (± standard deviation) of the proportion of variance explained by typed SNPs (h²_g) for heritabilities of 0.2, 0.5 and 0.8 were 0.20 (±0.065), 0.52 (±0.067) and 0.80 (±0.055) for BayesR (Fig. 2A). Estimates of BSLMM and LMM were 14 and 32% less accurate (larger standard deviations) than BayesR. This may reflect the fact that the assumed effect size distributions of BayesR closely matched those of the true model. The analyzed simulated SNP sets included the causal variants; hence h²_g equals h². The true SNP-based heritability would be unknown when the causal SNPs were excluded from the panel due to incomplete LD between markers and causative variants.

Prediction accuracy. Each data set of the simulation was randomly split into a training sample containing 80% of individuals and a validation sample containing the remaining 20%. Prediction accuracy was measured with Pearson's correlation coefficient between observed and predicted phenotype in the validation sample. The mean (± standard deviation) correlation coefficient for BayesR was 0.13 (±0.041), 0.32 (±0.038) and 0.50 (±0.032) for simulated h² of 0.2, 0.5 and 0.8, respectively (Fig. 2B). BayesR and BSLMM yielded almost the same accuracies and their advantage over LMM and GPRS was relatively large. For all heritabilities LMM generated the lowest accuracies.
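The prediction-accuracy evaluation described above is easy to reproduce schematically: split individuals 80/20 at random, estimate SNP effects on the training portion with whichever method is being assessed, and correlate predicted with observed phenotypes in the validation portion. The fitter in the sketch below is a placeholder for any of the methods compared in the text.

```python
import numpy as np

def validation_correlation(X, y, fit_effects, train_frac=0.8, seed=None):
    """Randomly split individuals into training (80%) and validation (20%) sets,
    estimate SNP effects on the training set with a user-supplied fitter, and
    return the Pearson correlation between observed and predicted phenotypes
    in the validation set."""
    rng = np.random.default_rng(seed)
    n = len(y)
    order = rng.permutation(n)
    n_train = int(train_frac * n)
    train, valid = order[:n_train], order[n_train:]
    beta = fit_effects(X[train], y[train])   # e.g. posterior-mean effects from BayesR
    y_hat = X[valid] @ beta
    return np.corrcoef(y[valid], y_hat)[0, 1]
```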
Genetic architecture. A feature of BayesR is that it can be used to quantify how many SNPs affect a trait and their contribution to the total genetic variance (Fig. 3). We calculated the variance in each mixture component as the sum of the square of the sampled effect sizes of SNPs allocated to each component. Mean (± standard deviation) contribution to genetic variance of the components with SNP variance 10⁻⁴σ²_g, 10⁻³σ²_g and 10⁻²σ²_g was 43% (±14.0), 36% (±9.8) and 21% (±8.3) for h² = 0.2, 34% (±12.6), 45% (±11.9) and 21% (±6.7) for h² = 0.5 and 30% (±8.0), 49% (±7.6) and 21% (±4.0) for h² = 0.8. Note that the true underlying mixture is not identifiable (S2 Fig.), however the proportion of variance explained by each mixture component showed good correspondence to the simulated genetic architecture. Estimates were generally not very precise, which is partly due to the large sampling variance when simulating SNP effects. BSLMM provides an estimate of the relative contribution of SNPs with an effect above the polygenic component, and this estimate showed a strong increase with increasing heritability of the trait (Fig. 3).
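The partition of genetic variance across mixture components can be computed from the MCMC output as sketched below, summing the squared sampled effects of the SNPs assigned to each component in every draw and averaging the resulting shares; the array layout of the posterior draws is an assumption.

```python
import numpy as np

def component_variance_shares(beta_draws, comp_draws, n_components=4):
    """Posterior mean share of genetic variance per mixture component: in each
    MCMC draw, sum the squared sampled effects of the SNPs assigned to each
    component and normalise by the total.
    beta_draws, comp_draws: arrays of shape (n_draws, n_snps)."""
    shares = []
    for beta, comp in zip(beta_draws, comp_draws):
        per_comp = np.array([np.sum(beta[comp == k] ** 2)
                             for k in range(n_components)])
        total = per_comp.sum()
        shares.append(per_comp / total if total > 0 else per_comp)
    return np.mean(shares, axis=0)
```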
Sensitivity Analyses
In additional simulations, under models that ranged from very sparse to polygenic and using alternate parametric models for the effect-size distribution, we assessed how our prior assumption may affect parameters estimates and interpretation of results (S2 Text). To cover a wide range of architectures from very sparse to polygenic, we sampled 10, 100, 1,000, 10,000, and 20,000 causal SNPs either from a standard normal distribution or a gamma distribution with shape 0.44 and scale 1.66 [15,16]. In general estimates of heritability from all methods were robust across the wider range of settings (S1 Table). Heritability estimates of LMM were unbiased, even under scenarios where its modeling assumptions were not met. BayesR and BSLMM showed a small upward bias under very sparse scenarios and BayesR slightly underestimated heritability under highly polygenic models. BayesR estimates had the smallest variance in the very sparse setting (10 causative variants) despite prior specifications that did not closely correspond to the true model. Similar to the previous results using real genotype data, where the prior model closely matched the analysis model of BayesR, prediction accuracies from BayesR and BSLMM were highest and both methods performed almost the same across all the scenarios (S2 Table). LMM was the least accurate method with the exception of scenarios including 10,000 and 20,000 SNPs. BayesR and BSLMM outperformed GPRS, with the exception of the scenarios involving 10 causative SNPs. These results show that the mixture models are more powerful than GPRS, even in the case of LE markers where the single SNP method might be expected to do very well. Inferences of BayesR about the genetic architecture were consistent with the underlying model and provided insights into the genetic architecture (S4-S5 Figs.). Posterior inference of the BayesR model for the scenario including 10 causative SNPs, which is poorly supported by the BayesR prior, provided strong evidence to revise the prior model. As for the 287K data, BayesR and BSLMM outperformed LMM and GRPS in finding causal variants in all scenarios (S6 Fig.).
Analyses of WTCCC Data
In addition to simulated data we assessed the performance of BayesR for seven diseases of the Wellcome Trust Case Control Consortium (WTCCC) [17]. These data were previously used to estimate heritability [18,19] and for risk prediction [14,20-22].
SNP-based heritability. We report h²_g for the diseases in WTCCC on the liability scale (S3 Table), but make comparisons on the observed scale since the controls are common between traits, so that comparisons reflect the underlying genetic architecture in the case samples. For five of the seven traits (BD, CAD, CD, HT, T2D), estimates of h²_g were very similar between methods, with estimates from BayesR slightly lower than BSLMM and LMM (Fig. 4A). For RA and T1D, which have large associations with alleles in the major histocompatibility complex, h²_g from the Bayesian methods was much smaller than from LMM. Estimates of BayesR were less consistent (indicated by larger posterior standard deviations), particularly for traits with a large polygenic contribution to variance, such as BD and HT.
Accuracy and bias of prediction. We created 20 random 80/20 splits for each disease and assessed accuracy by computing the area under the curve (AUC) [23]. The predictive performance for all seven diseases is shown in Fig. 4. Although BayesR performed well for some diseases, prediction performance assessed in case/control data suffers from ascertainment bias [24], because the prevalence in the general population is much lower than the prevalence in the case/control study, where cases are substantially overrepresented. We therefore also report prediction performance of the methods while accounting for prevalence (S4 Table). BayesR and BSLMM performed equally well across the seven traits with a mean AUC of 0.56 and outperformed GPRS and LMM in diseases where the original study identified relatively strong associations (CD, RA, T1D) [17]. GPRS and LMM had comparable prediction accuracy for traits where the known risk loci have effects of small individual magnitude (HT, BD). Prediction accuracy of LMM increased with increasing heritability, but there was no direct relationship between estimates of h²_g and predictive performance for the other methods.
The regressions of phenotype on predicted value for GPRS were considerably larger than one ( Table 1), showing that the difference in the predictions of a pair of individuals is smaller than the difference in their phenotypes. Predictions from the other methods showed little or no bias. An unbiased predictor is necessary when genomic predictions are to be combined with different information sources (e.g. sex, smoking status, BMI etc.) for risk prediction.
Genetic architecture. A feature of BayesR is that it estimates the number of associated SNPs along with their variance explained. The posterior mean of the number of SNPs fitted in WTCCC varied considerably between traits (S5 Table). The number of SNPs was comparatively low for T1D, where 2,633 individual SNPs explained the total genetic variance. The largest number of SNPs was included in the model for BD (N = 9,411), of which more than 99% had very small effects (effect size 10⁻⁴ × σ²_g). The proportion of variance explained by each mixture component varied markedly across the seven diseases (Fig. 5). Large numbers of SNPs with small effects (10⁻⁴ × σ²_g) contributed the majority of the genetic variance explained for BD (94.3%), HT (87.6%), CAD (83.5%) and T2D (77.6%). A substantial proportion of the total variance was explained by a small number of SNPs with larger effect sizes (10⁻² × σ²_g) for T1D (71.8%), RA (29.0%) and CD (11.9%). As might be expected, prediction accuracy of BayesR was also the highest for these traits, and credible intervals indicate that genetic trait architecture is inferred with reasonable precision for most traits (S7 Fig.).
We also assessed the proportion of additive genetic variation contributed by individual chromosomes and the proportion of variance on each chromosome explained by SNPs with different effect sizes (Fig. 6). Estimates of the variance explained by each chromosome were largely related to the length of the chromosome, with the majority of variation consistent with a polygenic architecture. Differences in the contribution of single chromosomes to individual traits were mostly due to SNPs with large effects (10⁻² × σ²_g) and, to a lesser extent, to SNPs with smaller effects (10⁻³ × σ²_g). On the whole, regions and chromosomes that explained large proportions of the SNP-based variance coincided well with the regions that showed the strongest association signals in the original study (Table 3 and Fig. 4 of the WTCCC study [17]). One example is chromosome 9, which harbors SNPs with a large effect on CAD; the most significant SNP (rs1333049) was located within a 44 kb region spanned by 6 SNPs with a posterior inclusion probability of 1.2 (sum of the posterior inclusion probabilities of the 6 SNPs). The region accounted for 27.2% of the genetic variance of chromosome 9. We estimated that chromosome 6 contributed 67.2% of the genetic variance in T1D, which is larger than the 47-58% reported using LMM [11,18]. More than 96% of the variance explained by chromosome 6 in T1D was due to SNPs with large effects. Chromosome 6 accounted for 28.1% of the genetic variance in RA, which is slightly less than ~33% using LMM [18].
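A rough sketch of this chromosome-by-effect-class partition (assuming standardized genotypes and a single posterior draw; names are illustrative, not the authors' code) might look like:

```python
# Sketch: partition genetic variance by chromosome and mixture class from one
# posterior draw; rows of the returned matrix are chromosomes, columns are the
# four effect-size classes.
import numpy as np

def variance_by_chrom_and_class(beta_draw, component_draw, chrom, n_components=4):
    chroms = np.unique(chrom)
    out = np.zeros((len(chroms), n_components))
    for i, c in enumerate(chroms):
        on_c = (chrom == c)
        for k in range(n_components):
            out[i, k] = np.sum(beta_draw[on_c & (component_draw == k)] ** 2)
    return chroms, out
```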
Computational demand. Computing time is important, particularly with the tremendous number of markers in many human SNP data sets. The average running time required for each method to perform prediction analysis for BD and T1D (chosen as examples because of their different genetic architectures) is shown in Table 2. Computation time depends on a number of factors, including programming language and software environment, and for the sampling-based methods in particular on the number of iterations used. We did not investigate in detail how many iterations are sufficient and ran BSLMM with its default value of 1,000,000 sampling steps and BayesR for 50,000 iterations. We observed only minor differences in the posterior distributions between replicated chains and interpreted this as evidence that the algorithm converged (S8-S9 Figs.). The requirements of BayesR were several orders of magnitude higher than BSLMM when compared on a per-iteration basis; nevertheless, the run time of BayesR was competitive for the data sets considered here. LMM was computationally less demanding than the other methods. LD-based clumping accounted for most of the computational burden of GPRS and had to be repeated 10 times in our cross-validation scheme. Computing time for BayesR scales linearly with the number of SNPs, and to reduce the computational burden of BayesR we changed the per-iteration MCMC scheme as follows. For the first 5,000 cycles the effect of each SNP was sampled per iteration. After this we did not sample all SNPs in each MCMC iteration, but updated SNP effects (in random order) only until we had sampled 500 SNPs with non-zero effects. SNPs not updated in the current iteration kept their effect sizes from the previous MCMC cycle. The idea behind this is based on two observations. Firstly, SNPs with larger effects appear more quickly in the model. Secondly, most of the calculation time is spent on sampling small SNP effects in and out of the model to mimic the 'polygenic' component, but which individual SNP is retained in the model has minimal effect on the posterior. We found that the '500 SNPs' strategy generated similar results to the 'All SNPs' strategy (S6 Table), but that the computational burden was significantly reduced, by a factor of 3 to 6 (Table 2).
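The reduced update scheme can be sketched schematically as below; this is an illustration of the idea only, and `sample_snp_effect` is a placeholder for the single-SNP Gibbs update in the actual sampler, not part of the released software.

```python
# Schematic of the '500 SNPs' per-iteration update: visit SNPs in random order
# and stop once 500 non-zero effects have been sampled; unvisited SNPs keep
# their effects from the previous MCMC cycle.
import numpy as np

def reduced_update(beta, rng, sample_snp_effect, max_nonzero=500):
    n_nonzero = 0
    for j in rng.permutation(len(beta)):
        beta[j] = sample_snp_effect(j, beta)      # placeholder Gibbs update for SNP j
        if beta[j] != 0.0:
            n_nonzero += 1
            if n_nonzero >= max_nonzero:
                break
    return beta
```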
Discussion
We have presented a single model for analysis of GWAS that maps associated variants, estimates the genetic variance explained by the SNPs collectively, describes the genetic architecture of the trait and predicts phenotype from SNP genotypes. The framework we present applies a Bayesian hierarchical model to human complex traits based on the prior assumption that SNP effects come from a mixture of more than two normal distributions. The procedure clusters markers into groups with distinct genetic values, where each SNP explains 0.01, 0.1, or 1% of σ²_g, plus a group of SNPs with zero effect. Instead of fixing the variance component σ²_g to a pre-specified value as in Erbe et al. [8], we treat σ²_g as unknown and estimate it from the data. This is because the shrinkage of SNP effects is affected by σ²_g, and determining the amount of shrinkage a priori can have a negative impact on performance [9,16].
BayesR showed good performance in estimating the SNP-based heritability across a wide range of simulated genetic architectures (Fig. 2A, S1 Table), and estimates were similar to BSLMM and LMM for diseases of the WTCCC study (Fig. 4A). If the primary interest is to estimate SNP-based heritability, LMM is faster and approximately unbiased under different disease architectures [9,11,18]. BayesR can provide more accurate estimates under certain architectures, for example when effect sizes follow skewed distributions, which is the case for many human diseases [4]. For phenotype prediction, BayesR was as accurate as BSLMM, which outperformed various other approaches in the study of Zhou et al. [9]. Qualitatively, the main difference between the methods considered here is that the BayesR model is sparse, which seems intuitively appealing, as not every genotyped SNP is likely to be in LD with causative variants. For example, often in GWAS the primary focus is not on estimating the relative contribution of each genetic variant, but on whether or not a particular variant contributes at all. Sparseness and good performance make BayesR an attractive alternative to currently available methods.
The Bayesian framework incorporates model uncertainty by averaging over many different competing models [25], and this allows for more robust inferences about the genetic architecture. The posterior inclusion probability can be directly interpreted as the probability that a variant is a risk factor with a certain effect size [26], which is more intuitive to interpret than an association of zero or one based on a p-value from a single-SNP analysis. Our simulations showed that SNPs with high inclusion probabilities have a high probability of being causal or associated variants (Figs. 1, S3, S6), and that BayesR has increased power to identify SNPs with smaller effects that are currently difficult to detect in single-SNP GWAS [1,27,28].
Predictions of phenotypes from BayesR, BSLMM and LMM were unbiased (Table 1). Unbiasedness of a disease predictor is important for practical implementations [29,30], yet often ignored when developing GPRS derived from GWAS summary statistics.
We applied BayesR directly to the WTCCC data, treating the binary outcome coded 0/1 as the response in an ordinary linear regression. The predicted phenotypes can then be taken as the probability of being a case, and heritability estimates can be transformed to the liability-of-disease scale [11]. The model can be extended to binary or ordered categorical traits by fitting a liability model [31], but improvements are expected to be negligible [27,32].
By quantifying the contribution of SNPs and their effect sizes, BayesR can be used to make inferences about the underlying genetic architecture of complex phenotypes (Figs. 3, 5, 6, S4). In our analysis of WTCCC, we found that most of the SNPs had a zero effect (>96%), inconsistent with the 'infinitesimal model' [33], but that thousands each explain a small proportion of the total genetic variance, and these estimates suggest a substantial contribution of a polygenic component to these common diseases. However, inferences did vary between diseases, with fewer loci contributing to the genetic variance for T1D and RA than for the other traits. This difference is mainly a result of large effects associated with variants in the MHC for T1D and RA. Furthermore, the variance explained by larger SNPs (effect size 10⁻² × σ²_g) varied markedly between chromosomes and between diseases, ranging from 73% of h²_g for T1D to 0.6% for BD. Consistent with other studies, the variance explained by individual chromosomes was largely related to their length [34-36], although chromosomes of similar length showed large variability across diseases, which was due to SNPs with larger effects.
We caution against over-interpretation of our results as they relate to genetic architecture [37]. Inevitably, the specified mixture model, in which effect sizes come from four mixture distributions, is very simplistic. Nonetheless, since the WTCCC diseases all utilize the same control samples, the differences between diseases allow comparative interpretation, and the genetic architectures agree well with the findings in the original [17] and subsequent studies [11,18,22]. However, in practice the true effect size distribution is unknown. We used the same mixture prior as Erbe et al. [8], where it showed good mixing between SNPs, but alternative prior distributions may lead to better performance. Priors may be influential; however, by simulating a large number of different genetic architectures, we found that in general results were not very sensitive to our modeling assumptions and that inferences of BayesR about the genetic architecture were consistent with the underlying simulated genetic architecture (S4 Fig.). Using a distribution with variance 10⁻⁴ × σ²_g seemed a reasonable choice for the effect size of 'polygenic' SNPs (S5 Fig.). How much of the heavier tail of the distribution can be distinguished from zero effects depends on sample size. For much larger data sets, adding more classes, for example one with variance 10⁻⁵ × σ²_g, might help to interpret the data. In addition to the caveats relating to the specific mixture model, we emphasize that the inference drawn on SNP effects and genetic architecture is based on observed SNPs and not on the causal variants directly. The true but unknown pattern of correlation between unobserved causal variants and genotyped SNPs will impact the inference about genetic architecture. Nevertheless, the comparison across the seven diseases, for which the genotyped SNPs are the same, demonstrates large differences in SNP effects, variance explained and prediction accuracy, reflecting real differences in the distribution of effect sizes at causal variants.
Incorporating markers beyond the small number of risk variants identified at genome-wide significance has the potential to increase the predictive performance of risk models [4,38]. Our results on predicting disease risk in WTCCC are consistent with recent analyses [9,20-22] that demonstrated that the predictive ability of polygenic models is trait specific, depending on heritability and genetic architecture. Furthermore, our results extend beyond previous reports of the impact of genetic architecture on genetic risk prediction, most of which have relied on hypothetical effect-size distributions or used results from risk predictions to inform about genetic architecture [38,39]. Here we infer genetic architecture directly from entire GWAS data, which can contribute to our understanding of complex disease and our ability to assess the power of future GWAS depending on the underlying disease architecture. We observed that SNP-based heritability did not follow the same pattern as AUC; in particular, heritability was not a good indicator of prediction performance for BayesR and BSLMM. For traits where common SNPs account for a large proportion of the SNP-based heritability (T1D, RA, CD), predictive accuracy was much higher for the two Bayesian methods compared to LMM and GPRS.
BayesR has proved feasible in the WTCCC data set with ~300,000 markers, but much larger data sets are currently being collected. Computing time increases linearly with the number of SNPs; however, runtime for large SNP sets can be reduced by avoiding redundant computations through filtering of SNPs that are in perfect or high linkage disequilibrium with at least one other SNP. The savings can be quite substantial, ranging from 9-22% (r² = 1) to 34-58% (r² > 0.80) for the HapMap3 panel [40], depending on the ancestry of the population [41]. Computing performance can further be improved by running multiple MCMC chains instead of a single long chain. Moreover, computing time of the '500 SNPs' implementation does not increase linearly with the number of SNPs after the first 5,000 cycles, thus reducing computational burden even more for larger data sets. However, less arbitrary approaches should be developed.
For very large datasets Bayesian estimation using MCMC might be infeasible altogether, and fast alternative Bayesian estimation procedures are required [42,43]. On the other hand, the use of a simple Gibbs sampling scheme provides great flexibility in effect size distributions by selecting the number and the variances of the mixture components. We illustrated the flexibility of the method by partitioning the genetic variance into contributions of SNPs with different effect sizes by chromosome. This model can easily be extended to allow for different prior probabilities of the mixture distribution for each chromosome [44], to include dominant genetic variation [28], to partition variance attributable to SNPs by annotation [34], or to include prior biological knowledge in genomic analysis and prediction [45].
We found little difference between BayesR and BSLMM in prediction performance; however, differences seem likely once individual effect sizes can be estimated more accurately with increases in sample size. For instance, as sample size increases and genome sequence data are analyzed, causal variants explaining only 0.1% of genetic variance will be identified. An advantage of BayesR is that most SNPs have near-zero effect and so could be deleted from prediction of future phenotypes in practice. Improvements can also be expected when the prior-induced mixture distribution more closely captures the actual distribution of effect sizes. It has been shown in simulation studies [46] that models that include all genetic variants do not take full advantage of high-density marker data if the number of causal SNPs is small, while approaches with an implicit feature selection do.
In conclusion, we proposed and applied a flexible Bayesian mixture model that simultaneously estimates the effect sizes of all SNPs and the genetic variation captured by SNPs, while maximizing prediction accuracy. We demonstrate the ability of such a model to dissect genetic architecture and partition genetic variation. The method is highly flexible, can be applied to sequence data and can incorporate prior biological knowledge.
Statistical Framework
Phenotypes are related to markers with a standard linear regression model, y = 1_n μ + Xβ + ε, where y is an n-dimensional vector of phenotypes, 1_n is an n-dimensional vector of ones, μ is the general mean, and X is an n×p matrix of genotypes encoded as 0, 1 or 2 copies of a reference allele. The vector β is a p-dimensional vector of SNP effects, and ε is an n-dimensional vector of residuals with ε ~ N(0, Iσ²_e), where I is an n×n identity matrix.
BayesR
The BayesR model assumes that the SNP effects come from a finite mixture of K components, so that the density of each effect β_j, conditional on the component variances σ² = (σ²_1, …, σ²_K) and the mixture proportions π = (π_1, …, π_K), which are constrained to be positive and to sum to unity, is p(β_j | π, σ²) = Σ_k π_k N(β_j | 0, σ²_k), where N(β | 0, σ²_k) denotes the density function of the univariate normal distribution with mean 0 and variance σ²_k. The Bayesian approach requires the assignment of prior distributions to all unknowns in the model. We followed Erbe et al. [8] and a priori assumed a mixture of four zero-mean normal distributions, where the relative variance of each mixture component is fixed: σ² = (0, 10⁻⁴ × σ²_g, 10⁻³ × σ²_g, 10⁻² × σ²_g). Here, σ²_g is the additive genetic variance explained by SNPs. Sparseness is included in the model by setting the effect and variance of the first mixture component to zero. A key difference in our implementation of BayesR from previous applications [8] is that we estimate a hyper-parameter for σ²_g from the data, rather than fixing the marker variance at a pre-specified value. MCMC estimation of the unknown parameters (μ, π, β, σ²_g, σ²_e) used a Gibbs scheme to sample values from each unknown parameter's conditional posterior distribution. Details of the sampling procedure are outlined in S1 Text.
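As a minimal, hedged illustration of this four-component prior (not the released Fortran implementation, and with illustrative mixture proportions), drawing SNP effects from the mixture could look like:

```python
# Sketch of the BayesR effect-size prior: a point mass at zero plus three
# normals with variances 1e-4, 1e-3 and 1e-2 times sigma2_g. The mixture
# proportions `pi` below are made up for illustration.
import numpy as np

def sample_effects_from_prior(n_snps, sigma2_g, pi, rng):
    rel_var = np.array([0.0, 1e-4, 1e-3, 1e-2])           # relative variance of each class
    classes = rng.choice(4, size=n_snps, p=pi)             # mixture membership per SNP
    beta = rng.normal(0.0, 1.0, size=n_snps) * np.sqrt(rel_var[classes] * sigma2_g)
    return beta, classes                                   # class 0 yields exactly zero effects

rng = np.random.default_rng(1)
beta, classes = sample_effects_from_prior(287_854, sigma2_g=0.5,
                                          pi=[0.99, 0.007, 0.002, 0.001], rng=rng)
```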
Simulated Data
Simulations were used to assess the accuracy of estimates of model parameters and of inferences provided by BayesR. The first study represents a typical genome-wide association study and uses real genotype data to capture the correlation between SNPs. Moreover, as in a GWAS, most SNPs are not in LD with causative variants and the effect size distribution of causative variants is skewed toward smaller effects.
Here we used genotype data of 3,924 Australian individuals [5]. After quality control, imputation of missing genotypes at each locus and removal of SNPs with a minor allele frequency less than 1%, 287,854 measured SNPs remained. The effect sizes of causal SNPs were assumed to come from a series of three zero-mean normal distributions, with the number of SNPs in each class falling in inverse proportion to the size of the effect. First we randomly selected 3,000 SNPs to be causal. Large effect sizes were drawn for 10 SNPs by sampling from a normal distribution with variance σ² = 10⁻², moderate effect sizes were generated for 310 SNPs by sampling from a N(0, 10⁻³) distribution, and the effects of the remaining 2,680 SNPs were generated from a N(0, 10⁻⁴) distribution. Residual effects for each individual (e_i) were obtained by sampling from a normal distribution with mean 0 and with variance chosen to achieve heritabilities of 0.2, 0.5 or 0.8. The simulated phenotype for a single individual was then obtained as y_i = Σ_j w_ij β_j + e_i, where w_ij = (x_ij − 2p_j)/√(2p_j(1 − p_j)) is the centered and scaled genotype, x_ij is the number of copies of the reference allele (0, 1, 2) at SNP j for individual i, and p_j is the frequency of the reference allele in the sample. Sampling from this mixture distribution resulted in a fat-tailed distribution of effect sizes (S1 Fig.), where large, moderate and small effects contributed around 14%, 46% and 40% of the total genetic variance. Fifty replicates were analysed for each of the three heritabilities, and a different set of 3,000 SNPs was selected for each replicate. In each replicate the sampled 3,000 SNP effects were randomly assigned to the selected markers. Note that the contribution of each causal SNP to heritability depends on its frequency, so that the true number of SNPs in each mixture component of the BayesR model and the contribution of each mixture to heritability are not known a priori (S2 Fig.).
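A hedged sketch of this simulation scheme is given below; the genotype matrix X and the standardization follow the description above, while the random seed and the residual-variance calculation are implementation details of this illustration rather than the authors' code.

```python
# Illustration: simulate a phenotype with 3,000 causal SNPs whose effects come
# from three normal distributions, on centered and scaled genotypes, with the
# residual variance set to reach a target heritability h2.
import numpy as np

def simulate_phenotype(X, h2, rng):
    n, p = X.shape
    freq = X.mean(axis=0) / 2.0                                  # reference-allele frequencies (assumed 0 < freq < 1)
    W = (X - 2.0 * freq) / np.sqrt(2.0 * freq * (1.0 - freq))    # centered, scaled genotypes
    causal = rng.choice(p, size=3000, replace=False)
    beta = np.zeros(p)
    beta[causal[:10]] = rng.normal(0.0, np.sqrt(1e-2), 10)       # large effects
    beta[causal[10:320]] = rng.normal(0.0, np.sqrt(1e-3), 310)   # moderate effects
    beta[causal[320:]] = rng.normal(0.0, np.sqrt(1e-4), 2680)    # small effects
    g = W @ beta
    e = rng.normal(0.0, np.sqrt(g.var() * (1.0 - h2) / h2), n)   # residuals for target h2
    return g + e
```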
Sensitivity Analysis
In the simulation using real genotype data, phenotypes were generated under a model that very closely matched the prior specifications for BayesR. To investigate how the prior assumption may affect parameter estimates and interpretation of results, we performed additional simulations, including scenarios where we created mismatches between modeling assumptions and simulated genetic architectures. To avoid the problem of differentiating between causal variants and non-causal SNPs in LD with causal variants, we simulated 20,000 independent SNPs in a sample of 5,000 individuals. Genotypes at SNP j were generated by sampling from a binomial distribution with n = 2 trials and success probability p_j, where p_j was sampled from a uniform distribution on the interval [0.05, 0.5]. We simulated 10, 100, 1,000, 10,000, and 20,000 causal SNPs to cover a wide range of architectures from very sparse to polygenic. Effect sizes were sampled either from a standard normal distribution or a gamma distribution with shape 0.44 and scale 1.66 as in [15,16], and residual effects were added to achieve a heritability of 0.5. Sampling from a gamma distribution generates fewer large and more small effects than the standard normal [16].
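For illustration, the independent-SNP genotypes described here could be simulated as below (the uniform allele-frequency draw reflects the stated interval; everything else is an assumption of this sketch):

```python
# Sketch: unlinked 0/1/2 genotypes with allele frequencies drawn from
# U(0.05, 0.5); no LD structure by construction.
import numpy as np

def simulate_unlinked_genotypes(n_ind=5000, n_snp=20000, seed=0):
    rng = np.random.default_rng(seed)
    freq = rng.uniform(0.05, 0.5, size=n_snp)         # per-SNP allele frequency
    X = rng.binomial(2, freq, size=(n_ind, n_snp))    # Binomial(2, p_j) genotypes
    return X, freq
```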
WTCCC Data
We analyzed 7 traits of the Wellcome Trust Case Control Consortium (WTCCC) study [17]. Following previous analyses of the data [11,18], we performed strict QC on the SNP data using PLINK [13]. First, we removed individuals with > 2% missing genotypes. For each of the 7 case and the two control data sets we removed loci with a minor allele frequency < 0.01 and SNPs with missingness > 1%. After combining each case set and the two control sets into 7 trait case/control studies, SNPs significant at 5% for differential missingness between cases and controls and SNPs significant at 5% for Hardy-Weinberg equilibrium were removed. Relatedness testing was performed using a pruned set of SNPs with LD of r² < 0.05; pairs of subjects with estimated relatedness > 0.05 were identified and one member of each relative pair was removed at random. Principal components of the genomic relationship matrix were estimated with the same set of pruned SNPs using the software GCTA [10], and all phenotypes analyzed were the residuals of case-control status following linear regression on the first 20 principal components. After QC the data included 1,851 cases of bipolar disorder (BD),
Other Methods
Single-SNP GWAS analysis. Single SNP-trait association analyses were performed using a linear regression model in PLINK [13]. A commonly used method to build prediction models from single-SNP GWAS analyses is genomic risk profiling [38], where SNP effect sizes estimated in one population are used to build a multi-SNP prediction model to generate a genomic profile risk score (GPRS) for each individual in another population. Applying GPRS requires the choice of an appropriate p-value threshold used for selecting SNPs into the predictor. We used 10-fold cross-validation to derive the optimal p-value threshold for each replicate of the data used for prediction analyses. First, the training data (80% of the total sample for each replicate) were divided into K = 10 non-overlapping folds of equal size. Then a GWAS was performed using K−1 folds of data, and SNPs were pruned for independent associations using the "clump" procedure in PLINK, with a pairwise linkage disequilibrium cutoff of r² < 0.25 within a 500 kb window. Based on various p-value thresholds (0.001, 0.005, 0.01, 0.05, 0.10, 0.15, …, 1.0), an increasing number of SNPs were selected into the predictor. At each value of the threshold the accuracy of predicting the phenotypes in the left-out fold was recorded. This process was repeated K times so that every fold was left out once. The p-value threshold that yielded the highest average accuracy of prediction in the K test sets was then used for the prediction model after estimating SNP effects from the full training set.
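The cross-validated threshold search can be sketched as follows; `run_gwas` and `clump` are placeholders standing in for the PLINK association and clumping steps (they are not real PLINK API calls), so this is only an outline of the logic.

```python
# Schematic of choosing the GPRS p-value threshold by 10-fold cross-validation.
import numpy as np

def choose_threshold(y, X, run_gwas, clump, thresholds, k=10, seed=0):
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), k)
    acc = np.zeros(len(thresholds))
    for f in range(k):
        test = folds[f]
        train = np.concatenate([folds[i] for i in range(k) if i != f])
        pvals, betas = run_gwas(X[train], y[train])      # single-SNP regression (placeholder)
        keep = clump(pvals, r2=0.25, window_kb=500)      # LD-pruned SNP indices (placeholder)
        for t, thr in enumerate(thresholds):
            snps = [j for j in keep if pvals[j] < thr]
            if not snps:
                continue
            score = X[test][:, snps] @ betas[snps]       # profile score in the left-out fold
            acc[t] += np.corrcoef(y[test], score)[0, 1] / k
    return thresholds[int(np.argmax(acc))]
```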
Linear mixed model (LMM). We used the software GCTA [10] for linear mixed model analysis. LMM assumes that all SNP effects are drawn from the same normal distribution. In GCTA this is implemented through an equivalent model in which a genomic relationship matrix estimated from the SNPs describes the covariance between the genetic values of individuals [5]. The method is often referred to as GBLUP (genomic best linear unbiased prediction) when used to estimate breeding values of related individuals from marker data in plant and animal breeding, assuming that variances are known without error. However, it is less commonly used for prediction of unrelated individuals in humans [12]. We will refer to the method as LMM as in Zhou et al. [9], but note that its main motivation in human applications is estimation of individual SNP effects and not prediction of aggregate genomic values of individuals. For prediction we estimated genetic values by directly fitting the covariance between the genetic values of training and validation individuals. For the mapping of causal variants we used the --blup-snp option in GCTA to transform the BLUP solutions for individuals into BLUP solutions for SNPs.
Bayesian sparse linear mixed model (BSLMM). BSLMM [9] is a hybrid of the classical polygenic model and sparse regression models. It assumes that effects come from a mixture of two normal distributions, with each genetic variant having at least a small effect on phenotype (polygenic component) and only a fraction of these having an additional effect (sparse component). We fit BSLMM using the GEMMA software [14].
Identifying Causal SNPs
We compared the ability of BayesR, BSLMM, LMM and single-SNP analysis to identify causal variants. For the simulated 287K data we focused on the SNPs with large and moderate effect sizes sampled from N(0, 10⁻²) and N(0, 10⁻³), respectively. Although small effects together contributed ~40% of the genetic variance, the power to identify 'polygenes' with our sample size was expected to be effectively zero. Similar to Guan and Stephens [27], we computed a measure of evidence of association between a genome segment and phenotype. This was done because the multi-SNP methods have the tendency to dilute a QTL effect across SNPs in LD with the QTL. For single-SNP analyses we used the minimum of the p-values of the SNPs within a region as evidence of association. The sum of the absolute effect sizes of SNPs within a region was used for LMM. The GEMMA software that implements BSLMM outputs the posterior probability of a SNP having an effect above the polygenic background, and we summed these probabilities over the SNPs within a segment. BayesR provides separate inclusion probabilities for an individual SNP to fall in each mixture component. We used the sum of the posterior inclusion probabilities that SNPs are allocated to the effect size classes 10⁻² × σ²_g and 10⁻³ × σ²_g as the evidence measure. In BayesR the polygenic component is 'mimicked' by SNPs assigned to the mixture class with small effect size (10⁻⁴ × σ²_g) and was therefore not included in the calculation. We divided the genome into non-overlapping segments of ~250 kb. For each method we selected a series of cutoff values for the evidence measure and considered all segments containing a causative variant that exceeded the cutoff value as true positives and all other regions exceeding the cutoff value as false positives. We then plotted the true positive rate against the false positive rate averaged over two different starting positions for the first window (0, 125 kb). In the simulations using uncorrelated SNPs we assessed the methods on their ability to identify individual SNPs rather than regions. We used similar measures of evidence of association, except for BayesR, where we used the posterior probability of the SNP being included in the model (i.e., 1 − the posterior inclusion probability of the zero-effect class).
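A rough sketch of this segment-level evaluation is given below; it is illustrative only, uses a summed evidence score such as the inclusion probabilities described above, and assumes positions from a single chromosome.

```python
# Sketch: score ~250 kb windows with a method-specific evidence measure and
# compute true/false positive rates over a grid of cutoffs.
import numpy as np

def segment_roc(pos, evidence, causal_pos, window=250_000, n_cutoffs=50):
    edges = np.arange(pos.min(), pos.max() + window, window)
    seg = np.digitize(pos, edges)
    seg_ids = np.unique(seg)
    seg_score = np.array([evidence[seg == s].sum() for s in seg_ids])
    seg_true = np.array([np.any(np.isin(pos[seg == s], causal_pos)) for s in seg_ids])
    cutoffs = np.quantile(seg_score, np.linspace(0.0, 1.0, n_cutoffs))
    tpr = np.array([(seg_score[seg_true] > c).mean() for c in cutoffs])
    fpr = np.array([(seg_score[~seg_true] > c).mean() for c in cutoffs])
    return fpr, tpr
```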
Accuracy and Bias of Prediction
We assessed predictive performance in the simulated data and the WTCCC data. In the simulated data, each replicate was randomly split into a training sample containing 80% of individuals and a validation sample containing the remaining 20%. For the WTCCC data we generated 20 random 80/20 splits for each trait. We used Pearson's product-moment correlation as the measure of predictive ability in the simulated data. The accuracy of risk prediction in WTCCC was assessed by the area under the curve (AUC) [23]. We also report the slope of the regression of phenotypes on the predictions. A slope different from one indicates bias in the prediction. A slope of unity from a regression of phenotype on predictor implies that the predictor is calibrated correctly on the scale of absolute risk, which matters in genomic medicine applications, in particular when the genetic predictor is combined with non-genetic factors (e.g. gender, smoking status, BMI etc.) for risk prediction.
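Both metrics are straightforward to compute; the sketch below (illustrative, ignoring ties in the AUC rank formula) shows the AUC via its Mann-Whitney form and the calibration slope from regressing phenotype on prediction.

```python
# Sketch of the two prediction metrics used here.
import numpy as np

def auc(y_binary, score):
    order = np.argsort(score)
    ranks = np.empty(len(score))
    ranks[order] = np.arange(1, len(score) + 1)
    n1 = y_binary.sum()
    n0 = len(y_binary) - n1
    u = ranks[y_binary == 1].sum() - n1 * (n1 + 1) / 2    # Mann-Whitney U statistic
    return u / (n0 * n1)

def calibration_slope(y, y_hat):
    y_hat_c = y_hat - y_hat.mean()
    # slope of the regression of phenotype on prediction; 1 indicates no bias
    return np.dot(y_hat_c, y - y.mean()) / np.dot(y_hat_c, y_hat_c)
```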
Implementation
For BayesR, BSLMM and LMM we centered and scaled each column of the genotype matrix to have mean zero and unit variance in all analyses. The data were analyzed using our BayesR software implemented in Fortran. The software is available at http://www.cnsgenomics.com/software/. Prior assumptions for BayesR were as described above (see also S1 Text). For all analyses a chain length of 50,000 was used, with the first 20,000 samples as burn-in. Posterior estimates of parameters are based on 3,000 samples, drawn as every 10th sample after burn-in. GEMMA was run with its default setting of 1,000,000 sampling steps, using the first 100,000 as burn-in. The only default parameter we changed was lowering the minor allele frequency threshold to 0.001, to ensure that no SNP was deleted from the model when 80% of the data was used for training.
Web Resources
[Supporting information table: Posterior means from BayesR using the full ('All SNPs') and the reduced ('500 SNPs') MCMC scheme. (PDF)]
|
v3-fos-license
|
2022-11-10T17:00:53.644Z
|
2022-11-01T00:00:00.000
|
253443323
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2076-2615/12/21/3065/pdf?version=1667838148",
"pdf_hash": "e9a311cd536b706c0f887cac5d12fa95c9b1e881",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42450",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"sha1": "11f1a061f637422272c5408aac42efe6758a6adf",
"year": 2022
}
|
pes2o/s2orc
|
The Impact of Thyme and Oregano Essential Oils Dietary Supplementation on Broiler Health, Growth Performance, and Prevalence of Growth-Related Breast Muscle Abnormalities
Simple Summary In recent years, there has been growing interest in the use of thyme and oregano essential oils in feed formulations to promote growth in chicken broilers. Thyme and oregano essential oils are considered promising ingredients to replace antibiotics as growth promoters. The aim of this study was to evaluate the impact of thyme and oregano essential oils on growth performance, broiler health, and the incidence of muscle abnormalities at different slaughter ages. This study showed that the addition of thyme and oregano essential oils, individually or in combination, significantly increased body weight compared to the control group. Thyme and oregano essential oils improved the feed conversion ratio, indicating that more meat was produced per unit of feed (total feed intake itself did not change in our results). Muscle abnormalities increased with the addition of thyme and oregano essential oils to broiler diets, which could be due to the increase in the growth rate. In conclusion, the inclusion of thyme and oregano oils in broiler chicken feed resulted in an improvement in the growth performance of broiler chickens. Abstract The objective of this study was to investigate the effects of thyme and oregano essential oils (as growth promoters), individually and in combination, on the health, growth performance, and prevalence of muscle abnormalities in broiler chickens. Six hundred day-old Cobb 500 hybrid chickens were randomized into four dietary treatment groups with three replicates each. Chicks in the control group (C) received a basal diet, while the experimental treatment groups received basal diets containing 350 mg/kg of thyme oil (T1), 350 mg/kg of oregano oil (T2), and 350 mg/kg of thyme and oregano oil (T3). Growth performance parameters were evaluated at 14, 28, and 42 days. The broilers in treatments T1 and T2 had significantly higher body weights than the control group. The feed conversion ratio was the lowest in chicks that received oregano oil, followed by those fed thyme oil. The overall prevalence of growth-related breast muscle abnormalities (including white striping and white striping combined with wooden breast) in groups receiving essential oils (T1, T2, and T3) was significantly higher than in the control group (C). The thyme and oregano oil diets showed no significant differences in antibody titers against Newcastle disease or interferon-γ (IFN-γ) serum levels. In conclusion, thyme and oregano oils had a positive impact on the growth performance of broiler chickens but increased the incidence of growth-related breast muscle abnormalities.
Introduction
Health concerns and regulatory restrictions on the use of antibiotics motivated the researchers to evaluate several alternatives to antibiotics. It was found that the use of different combinations of additives (such as medium-chain fatty acids, short-chain fatty acids, oregano essential oil, and sweet basil essential oil) exhibited positive effects on the growth performance of broilers [1]. Extracts of medicinal herbs (aromatic herbs) have received increasing attention from both researchers and producers as potential alternatives to conventional antibiotic growth promoters in broiler rations [2]. The beneficial effects of these essential oils as well as plant oils are related to their suitable chemical properties and functional groups, whose mechanisms of action remain to be explained [3,4]. Thyme and oregano essential oils have been extensively studied as feed supplements in broiler rations. However, varying results have been reported on their effects on overall broiler production performance [1,5]. There was no agreement between previous studies about the effects of thyme or oregano essential oils on feed intake, body weight gain, and feed conversion in broilers when these oils were used separately [6][7][8].
Extracts of thyme (Thymus vulgaris) and oregano (Origanum vulgare L.) are rich in several functional compounds such as carvacrol, thymol, lutein, and zeaxanthin, which play an important role in broiler health and growth performance [8,9]. The inclusion of oregano essential oil in broiler feed exhibited a protective effect against necrotic enteritis (NE) caused by Clostridium perfringens [9,10]. Some studies reported positive effects on the performance parameters of broiler chicks [8,11,12], while other studies showed no effect on broiler performance parameters [13,14]. In contrast to these studies, others reported negative effects of supplemental thyme or oregano oils in rations on broiler growth [6,7,15]. The use of thyme with prebiotics, such as mannan-oligosaccharides, in the feed formulation showed positive effects on the growth performance of broilers [16]. A few reports showed positive effects on the meat characteristics of carcasses when essential oils were added to the broiler rations [12,17]. These authors attributed the inconsistent results to differences in the doses of the essential oils used, environmental factors, the durations of the experiments, and health status of the chicks used.
Currently, poultry breeders and the meat industry are concerned about the occurrence of growth-related breast muscle abnormalities such as white striping (WS) and wooden breast (WB) [18]. In this context, several studies indicated that breast meat affected by these disorders had lower quality characteristics than normal breast meat [19][20][21][22][23]. Overall, the incidence rates of these abnormalities are alarming and appear to be unsustainable for the poultry industry [24]. It was found that the incidence of muscle abnormalities was higher in high-breast hybrids than standard-breast hybrids [25]. Moreover, the incidence of muscle abnormalities was higher in males than in females [26]. Incidence rates varied between studies. It was found that the incidence of WS was about 12% [25], while other researchers found that the incidence of WS reached 50% [27]. Another study showed that the incidence of WS was 75% in high-breast-yield hybrids and 74% in standard-breast-yield hybrids [28].
Mudalal et al. [29] examined the effect of a natural herbal extract on the occurrence of muscle abnormalities such as WS and WB. The results showed that the herbal extract reduced the occurrence of WS and WS combined with WB.
In particular, Newcastle disease (ND) is considered one of the most serious diseases affecting broiler flocks worldwide, causing severe losses in the poultry sector [33]. Biosecurity and vaccination strategies are needed to control this disease [34]. Improving the immunization strategy of ND vaccines and host protection can be enhanced by complementary approaches, such as the use of herbal extracts from medicinal natural products [35]. There is growing evidence that the coadministration of herbal extracts with the vaccine showed increases in cytokine production and the antibody responses of immune cells [30].
To our knowledge, there are few studies that investigated the effects of thyme and oregano oils as a mixture on the health, growth performance, and prevalence of muscle abnormalities of broilers reared under commercial conditions. Therefore, the objective of this study was to examine the possible effects of thyme and oregano oils and a combination of both oils on the performance parameters, health status, and meat characteristics of broiler chicks as well as on the prevalence of muscle abnormalities from 1 day to 42 days of age.
Experimental Design
In this study, 600 one-day-old Cobb 500 hybrid broiler chicks were randomly divided into four groups of 150 chicks each, and each group was replicated three times. The chicks from the first treatment group received a basal ration (starter and grower) as a control group (C) (Table 1). The rations of the second treatment group (T1) were supplemented with 350 mg/kg of thyme essential oil. The rations of the third treatment group (T2) were supplemented with oregano essential oil at a concentration of 350 mg/kg. The rations of the fourth treatment group (T3) were supplemented with 350 mg/kg of thyme and oregano essential oils in equal proportions. In formulating each experimental ration, the essential oils were first mixed with the corresponding oil stock, and the mixture was then homogenized. The rations were mixed in two batches (the starters and the growers) and stored in airtight bags at room temperature for a short time before being fed to the chicks. The chicks were housed on a deep litter (fresh wood shavings) in an open-sided broiler house. Commercial protocols were used to rear the experimental chicks. The broiler house temperature was manipulated and closely monitored to avoid fluctuations, starting at 32 °C on day 1 and decreasing by 2 °C every week thereafter. The chicks were exposed to 24 h of lighting for the first 4 days and then 23 h of lighting and 1 h of darkness until the termination of the experiment. Chicks had access to feed and water around the clock. Body weight and feed intake were determined on days 14, 28, and 42. Mortality was recorded daily. The feed conversion ratio was calculated as feed intake (g) per mean body weight (g) for each replicate of the treatment groups. The feed intake was calculated on a weekly basis, taking into account differences in feed weight. In addition, the weight of each broiler was recorded weekly.
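As a small illustration of the performance indices defined here (the numbers below are invented, not study data), the feed conversion ratio can be computed per bird as feed intake divided by mean body weight:

```python
# Illustrative feed conversion ratio calculation (made-up numbers).
feed_intake_g = 4300.0        # cumulative feed intake per bird (g)
mean_body_weight_g = 2600.0   # mean body weight at the same age (g)
fcr = feed_intake_g / mean_body_weight_g
print(round(fcr, 2))          # grams of feed per gram of body weight
```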
Breast Weights
Seven broilers from each replicate were slaughtered at 42 days of age using a manual operation technique (n = 21/group). Breasts were weighed using a balance with a sensitivity of 0.01 g.
Assessment of Incidence of Growth-Related Breast Muscle Abnormalities
The incidence of growth-related breast muscle abnormalities was assessed at approximately 8 h postmortem. Muscle abnormalities were classified into three levels (normal, WS, and WB combined with WS) based on previously described criteria [27,36]. Breast fillets that exhibited no white striations or hardened areas were considered normal (N). Breast fillets that had white striations of varying thickness (thin to thick striations) were considered to be white-striped fillets (WS). Finally, breast fillets that had pale ridge-like bulges and diffuse hardened areas (namely WB) in combination with white striations were labeled as WS/WB.
The color trait (CIE L* = lightness, a* = redness, and b* = yellowness) of raw breast meat was measured in triplicate using a Chroma Meter CR-410 (Konica Minolta, Japan), and the skin-side surface of each fillet was considered a measuring point.
Newcastle Disease Vaccine Response
The freeze-dried live Newcastle disease (ND) vaccine (LaSota strain, SPF-origin vaccine, Biovac®, Cape Town, South Africa) was administered via drinking water when the chicks were 12 days old, and this was repeated when the chicks were 22 days old. Blood samples were collected during the 1st, 3rd, and 5th weeks from the wing vein (n = 24). Each blood sample was left to coagulate at room temperature and was then centrifuged at 3000 rpm for 5 min.
Hemagglutination Inhibition (HI)
The collected sera were subjected to the hemagglutination inhibition (HI) test, and the level of the anti-NDV antibody titer was determined. The HI tests were performed in microplates using two-fold dilutions of serum, 1% PBS-washed chicken red blood cells, and four hemagglutinating units of vaccinal LaSota NDV (Biovac®, Cape Town, South Africa), following the method of Allan and Gough [37]. Titers were expressed as log2 values of the highest dilution that caused inhibition of hemagglutination. All tested serum samples were pretreated at 56 °C for 30 min to inactivate nonspecific agglutinins.
ELISA Interferon Assay
The interferon concentration was determined by an immunoenzymatic assay (ELISA). At three time points (eight birds in each group at 7, 14, and 35 days), the serum level of interferon-γ (IFN-γ) was determined using ELISA kits, following the manufacturer's instructions (Elabscience Co., Wuhan, China). Eight standards of 0, 15.6, 31.2, 62.5, 125, 250, 500, and 1000 pg/mL were added to the wells of the ELISA plate. Absorbance was measured at a wavelength of 450 nm. The interferon concentration was calculated using the standard curve.
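Reading sample concentrations off the standard curve is a simple interpolation step; the sketch below uses a log-linear interpolation with made-up absorbance values for the standards (a 4-parameter logistic fit, as often recommended for ELISA kits, could be substituted).

```python
# Illustrative standard-curve interpolation for the IFN-gamma ELISA.
import numpy as np

standards_pg_ml = np.array([15.6, 31.2, 62.5, 125, 250, 500, 1000])     # non-zero standards
standards_od = np.array([0.08, 0.15, 0.28, 0.52, 0.95, 1.60, 2.40])     # hypothetical OD450 readings

def od_to_concentration(od_sample):
    # interpolate sample OD against log-concentration of the standards
    return float(np.exp(np.interp(od_sample, standards_od, np.log(standards_pg_ml))))

print(round(od_to_concentration(0.70), 1))   # e.g. a sample with OD450 = 0.70
```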
Statistical Analysis
The effects of the thyme and oregano oils on the growth performance, feed conversion ratio, and the incidence of muscle abnormalities were assessed using an ANOVA (GLM procedure in SAS Statistical Analysis Software, version 9.1, 2002). Duncan's test was employed to separate means when statistical differences were present (p < 0.05). Pearson's correlation was used to test the relationships between pairs of continuous variables (i.e., the feed conversion ratio, carcass, and visceral organ variables).
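For readers without SAS, an equivalent one-way ANOVA for a single trait can be reproduced in Python as a stand-in (the replicate means below are invented; Duncan's test is not available in SciPy, so a generic post-hoc test such as Tukey's HSD would be a common substitute).

```python
# Illustrative one-way ANOVA of body weight across the four dietary treatments.
from scipy import stats

body_weight_g = {
    "C":  [2450, 2480, 2465],   # made-up replicate means
    "T1": [2550, 2575, 2560],
    "T2": [2620, 2640, 2610],
    "T3": [2470, 2490, 2485],
}
f_stat, p_value = stats.f_oneway(*body_weight_g.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```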
Results
The effects of thyme and oregano oils on the performance indices of broilers at different slaughter ages are shown in Table 2 (means and standard deviations, n = 150/group; different letters within a row indicate significant differences, p < 0.05). Our results showed that the inclusion of thyme and/or oregano oils in feed did not exhibit any effect on feed intake. In general, there were significant differences in body weight between treatments at different slaughter ages (14, 28, and 42 days). Birds in treatment T2 (with oregano) exhibited the highest body weights and the lowest feed conversion ratios at different slaughter ages when compared to the other groups. There were no significant differences between treatments C and T3 in these parameters. The birds in treatment T1 had higher body weights and lower feed conversion ratios than the birds of the control group (C) at different slaughter ages (14, 28, and 42 d).
The incidences of growth-related breast muscle abnormalities (normal, WS, and WS combined with the WB condition) in all treatments are shown in Figure 1. The results showed that the control treatment had the highest percentage of normal cases (70%) compared with the other treatments. Treatments T1 and T3 had quite similar percentages of normal cases, while treatment T2 had 42.9% normal cases, which was higher than treatments T1 and T3. The incidence of WS was the lowest (5%) in the control treatment compared to the other treatments. Treatment T2 exhibited the highest percentage of WS cases (33.3%), while treatments T1 and T3 had 30% and 9.1% WS cases, respectively. For WS occurring together with the WB abnormality, treatment T3 had the most cases (59.1%) compared with the other treatments. The control treatment and treatment T2 had quite similar percentages of the WB condition. Figure 1. Percentages of normal, white striping, and white striping plus wooden breast meat abnormalities of broilers supplemented with herb extract (HE) (n = breasts/group). The basal diet (control, C) was similar to regular broiler starter diets, while the experimental treatments of the T1, T2, and T3 birds included the same diet as in the control group, but supplemented with herb extracts: thyme essential oil at 350 mg/kg (T1), oregano essential oil at 350 mg/kg (T2), and equal proportions of thyme and oregano essential oils at 350 mg/kg (T3).
The effects of thyme and oregano extracts on color traits (L*, a*, and b*), pH, and breast weight are shown in Table 3. In general, there were no significant differences between treatments in the color traits (L*, a*, and b*), pH, or breast weight. The effects of muscle abnormalities (normal, WS, and WS combined with WB) on the color traits (L*, a*, and b*), pH, and breast weight are shown in Table 4. Muscle abnormalities did not affect the color traits (L*, a*, and b*). Meat affected by the WB abnormality exhibited a higher breast weight (213.22 vs. 188.97 g, p < 0.05) in comparison to normal meat, while white-striped meat exhibited intermediate values. Table 3. The effects of the inclusion of thyme and oregano extracts on color traits (L*, a*, and b*), pH, and breast weight for raw chicken breast (means and standard deviations, n = 21/group; CIE L* = lightness, a* = redness, b* = yellowness).
Dietary supplementation with thyme or oregano essential oils, alone or in a mixture, had no significant (p < 0.05) positive effects on the broilers' humoral or cellular immune reactions to NDV (Figure 2). No significant effects of the treatments were found on the weekly and cumulative NDV antibody titers and IFN-γ levels of chicks during the experimental period (Figure 2). Table 4. The effects of muscle abnormalities (normal, white striping (WS), and white striping combined with the wooden breast condition (WS and WB)) on the color traits (L*, a*, and b*), pH, and breast weight (means and standard deviations, n = 21/group; WS levels classified as normal, moderate, or severe according to Kuttappan et al. [27]; CIE L* = lightness, a* = redness, b* = yellowness).
Discussion
Thyme or oregano essential oils, when used as growth promoters, have been reported to improve body weight gain and feed conversion when added to broiler rations [7,8,17]. In the present study, essential oils of thyme or oregano at a dosage of 350 mg/kg significantly increased the average body weight at 14, 28, and 42 days of age. A similar trend was observed in the feed conversion ratio. The results of the present study were in disagreement with the results of some previous studies that revealed that thyme or oregano oils did not affect body weight gain and feed efficiency [8,11,17]. It has also been suggested that dietary supplementation with oregano or thyme oils may exert positive effects on growth parameters when relatively high doses are used [38]. However, other studies concluded that incremental doses of 100 to 1000 mg/kg or 300 to 1200 mg/kg of oregano oils did not always improve production performance [6,15,39]. These contrasting observations could be explained by differences in the concentrations and chemical compositions of the oils used, the lengths of the experimental periods, the numbers of chicks used, and management factors. In the present study, the variation in these factors was minimized to some extent so that the differences in the performance parameters could only be attributed to the supplemental oils.
Saleh et al. [39] reported that the feed intake of chicks that received thyme essential oil (100 to 200 mg/kg) was higher than that of chicks in a control treatment. These findings were in disagreement with the results of the present study. In contrast, Wade et al. [8] reported that supplementing broiler diets with varying amounts of thyme oil had no effect on feed intake.
Regarding the effect of herbal extract addition on the incidence of growth-related breast muscle abnormalities, our results were partially in agreement with previous studies. Mudalal et al. [29] found that the incidence of WS was 19.5-39.2% and that WS combined with WB was in the range of 67-76.5% at a slaughtering age of 41 days. Previous studies showed that the incidence of WS was 25.7-32.3% [20]. Cruz et al. [40] found that the prevalence of WS and WB abnormalities ranged from 32.3 to 89.2%. Mudalal [41] found that the total prevalence of WS in turkey breast was 61.3%. Mudalal and Zaazaa [23] showed that the incidence of muscle abnormalities was highly affected by slaughter age, where it was about 45% at a slaughter age of 34 days and 100% at a slaughter age of 48 days.
The overall results showed that the addition of thyme and oregano extracts to broiler diets increased the incidence of these abnormalities. The overall prevalence of muscle abnormalities (WS and WS combined with WB) was higher in the treated groups (T1, T2, and T3) than in the control group (65%, 57.1%, 68.2% vs. 30%), respectively. These results may be attributed to an increase in the growth rate and the live weight of broilers at slaughter (Table 2). Previous studies have shown that an increase in growth rate was associated with a higher prevalence of muscle abnormalities [19,28,42,43].
The addition of thyme and oregano extracts had no effect on the color traits (L*, a*, and b*), pH, or breast weight. The muscle abnormality category (normal, WS, or WS combined with WB) had no effect on the color traits (L*, a*, and b*) or pH, but it did affect breast weight. Zambonelli et al. [44] found that WS combined with WB did not affect the a* and b* values, while the L* values were lower than in normal meat. Another study found that WS alone or combined with WB did not affect the color traits (L*, a*, and b*) [45]. Even though there was an apparent increase in pH due to the presence of muscle abnormalities, it was not significant. In this context, Tijare et al. [20] found that the WS abnormality did not affect pH values, while Soglia et al. [19] showed that meat affected by both abnormalities (WS and WB) exhibited a higher pH than normal meat.
Meat affected by the WB abnormality exhibited a higher breast weight (213.2 vs. 189.0 g, p < 0.05) compared to normal meat, while white-striped meat exhibited intermediate values. Similar results were obtained by Tasoniero et al. [46], where WB exhibited significantly higher breast weight than normal meat while white-striped meat exhibited moderate values. In addition, Malila et al. [47] found that meat affected by the WB abnormality had a higher breast weight than normal meat.
Dietary supplementation with thyme or oregano essential oils alone or in a mixture had no significant (p < 0.05) positive effects on the humoral or cellular immune reactions of broilers to NDV in the treated groups (Figure 2). No significant effects of the treatments were detected in the weekly and cumulative NDV-Ab titers and IFN-γ levels of the chicks during the experimental period. Our results were also in agreement with previous studies [30,48] that used thyme in the feed and drinking water of broilers and found no significant differences in antibody titers against NDV compared to the control group. In contrast, our results contradict previous reports in which thyme essential oil supplementation (135 mg/kg of feed) increased the humoral immune response against NDV compared to the control group [39]. Since thyme has been reported to have antibacterial and antifungal activities and the main components of thyme are thymol and carvacrol, which are reported to have strong antioxidant properties, an increase in the immune responses of the chicks was expected [48,49]. Although the dietary treatments had no significant effects on the immune-related parameters measured in this study, no deleterious effects were observed from the addition of thyme, oregano, or a combination to the diet. This could be due to the quantity of the additives used in our study. The results also showed that broilers whose diets were supplemented with thyme and oregano or a mixture of both showed no change in the production of IFN-γ proinflammatory cytokines compared with the control group. No significant differences were observed in the relative expression levels of IFN-γ. This is consistent with results published by Hassan and Awad [50], who claimed that thyme supplementation did not alter relative messenger RNA (mRNA) transcription levels for IFN-γ and other cytokines. Moreover, thymol inhibited the phosphorylation of NF-κB and decreased the production of IL-6, TNF-α, iNOS, and COX-2 in LPS-stimulated mouse epithelial cells [51]. These findings support the previously mentioned results and indicate that the anti-inflammatory effects of thyme and oregano make them suitable for use in animal production. On the other hand, it was found that oregano oil combined with a Macleaya cordata oral solution improved serum immunological characteristics [52].
Conclusions
In conclusion, the addition of oregano oil was the most effective in improving the growth performance of broiler chickens and performed better than thyme oil. The inclusion of thyme and oregano essential oils together had no positive impact on broiler health. While the essential oils of oregano and thyme improved the feed conversion ratio, the incidence of muscle abnormalities increased, and this may be attributed to the increase in the growth rate. Therefore, it is important to consider the impact of these muscle abnormalities on meat quality when developing any growth promotion program.
|
v3-fos-license
|
2021-10-30T15:13:55.217Z
|
2021-04-03T00:00:00.000
|
240190104
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://ejournal.iain-tulungagung.ac.id/index.php/nisbah/article/view/3912/1556",
"pdf_hash": "66a18547ec5aa4f0f4c175ec932139cd52edfaa1",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42451",
"s2fieldsofstudy": [
"Economics"
],
"sha1": "6ab739daa9573fa573ee098de1e09deefa38ddfe",
"year": 2021
}
|
pes2o/s2orc
|
DETERMINANT OF ISLAMIC FINANCIAL INCLUSION IN DIGITAL ERA: CROSS-PROVINCE ANALYSIS
In recent years, the Islamic financial sector has become one of the most vital sectors in the Islamic economic system in Indonesia. Therefore, more attention needs to be paid to the measurement of Islamic financial inclusion and to related policy making, especially in facing the digitalization of the economy, because digitalization can be a momentum that presents opportunities as well as threats to Islamic finance. This study attempts to measure Islamic financial inclusion at the provincial level in Indonesia through the dimensions of accessibility, availability and utilization, and analyzes the impact of digitalization on Islamic financial inclusion. Measurements are made using the Sarma Index, while the analysis of the impact of digitalization on Islamic financial inclusion employs a fixed effect model on balanced panel data. The measurement results show that developed provinces tend to have higher levels of Islamic financial inclusion than developing provinces. Furthermore, provinces with Muslim majority populations have higher levels of Islamic financial inclusion than provinces with Muslim minority populations. In the panel data analysis, it was found that internet penetration has a negative and significant effect on Islamic financial inclusion, which suggests that the majority of people in Indonesia still use the internet mainly to access entertainment content and that internet use has not been optimized for accessing financial services. Nevertheless, the presence of Islamic fintech platforms has a significant positive effect on Islamic financial inclusion in Indonesia. The level of cell phone usage has no significant effect on Islamic financial inclusion, and the average length of schooling has a significant negative effect on Islamic financial inclusion.
The above conditions can be caused by disproportionate development in Indonesia, which is overly centered in the Province of the Special Capital Region of Jakarta. This results in development inequality and subsequently leads to under-development of the Islamic financial sector in provinces that are less affected by national development. Apart from that, the geographical context of Indonesia, which is archipelagic in nature, may also be a barrier to Islamic financial inclusion, as these geographical conditions set Indonesia apart from the comparable countries in the chart above. Research on 148 countries using Global Findex data also shows that high-income countries tend to have more inclusive financial services; it was also found that 50% of the sampled adults had not been reached by financial services due to high costs, distance, and a lack of documentation 16.
In addition to the above factors, individual variables such as education quality, gender, age and economic welfare also have a positive relationship with financial inclusion 17. Beyond individual factors, the macroeconomic conditions of a country also affect its level of financial inclusion, in that financial inclusion is positively correlated with economic growth. Even the financial system stability of a country is related to its level of financial inclusion 18. After measuring the contribution of each dimension (di), it is assumed that each dimension has equal priority, so each is given the same weight (wi), namely 1/3, so that the total weight is 1. Next, the calculation is carried out as follows:
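The formula itself does not survive in this copy of the text. The sketch below reproduces the index of financial inclusion as commonly defined by Sarma (2012), which the paper states it adopts, so the exact functional form should be treated as an assumption; the dimension values, bounds, and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def dimension_index(actual, minimum, maximum, weight=1/3):
    """Weighted, min-max normalised contribution of one dimension (d_i)."""
    return weight * (actual - minimum) / (maximum - minimum)

def sarma_ifi(d, w):
    """Index of financial inclusion following Sarma (2012): the average of the
    normalised distance from the worst point (origin) and the inverse
    normalised distance from the ideal point w."""
    d, w = np.asarray(d, float), np.asarray(w, float)
    x1 = np.linalg.norm(d) / np.linalg.norm(w)
    x2 = 1 - np.linalg.norm(w - d) / np.linalg.norm(w)
    return 0.5 * (x1 + x2)

# Illustrative values for one province-year (accessibility, availability, utilization)
w = [1/3, 1/3, 1/3]                          # equal weights, total = 1
d = [dimension_index(a, lo, hi, wi)
     for (a, lo, hi), wi in zip([(55, 0, 100), (12, 0, 40), (0.6, 0, 1)], w)]
print(round(sarma_ifi(d, w), 3))
```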
Results and Discussion
After calculating the three dimensions of Islamic financial inclusion in 33 provinces in Indonesia from 2014 to 2019, using calculations that adopt indicators from Sarma (2012), the level of Islamic financial inclusion in each province in Indonesia is obtained as shown in Table 3. DKI Jakarta Province has the highest level of Islamic financial inclusion, followed by Aceh and West Nusa Tenggara. This is reasonable because DKI Jakarta is the center of government and the most advanced in its development, so the community there is broadly included in Islamic financial services. Based on Table 4, it can be concluded that the best model for the panel data analysis in this study is the Fixed Effect Model. A descriptive analysis is carried out first before estimating the model; Table 5 presents the descriptive analysis of each variable in the model (source: author's calculation, 2020). The descriptive analysis shows a total of 33 provinces (n) and 5 units of time (years) in the panel data estimation, giving a total of 165 observations with 4 independent variables.
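As a minimal sketch of how a province fixed-effects estimation of this kind could be run, the snippet below uses the least-squares-dummy-variable form, which on a balanced panel is equivalent to the within (fixed effect) estimator. The data file and column names (province, year, ifi, internet, fintech, cellphone, schooling) are hypothetical placeholders, not the authors' code or data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format panel: one row per province-year
df = pd.read_csv("islamic_inclusion_panel.csv")  # assumed file layout
# Assumed columns: province, year, ifi (Sarma index), internet, fintech,
# cellphone, schooling

# Province dummies absorb time-invariant provincial heterogeneity, which on a
# balanced panel reproduces the fixed effect (within) estimator.
fe_model = smf.ols(
    "ifi ~ internet + fintech + cellphone + schooling + C(province)",
    data=df,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

print(fe_model.summary().tables[1])  # coefficient table
```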
The estimation results of the fixed effect model can be seen in Table 6 below. Although internet penetration has a negative and significant effect, the presence of Islamic financial technology (fintech) applications has a positive effect that is statistically significant at the 1% level. This makes sense because, based on fintech statistics from the Financial Services Authority, people use fintech to make investments or take online loans. Investing through fintech can provide a higher return, and taking loans is also easy, so people still need banking services in order to interact with fintech applications, for example to make transfers or withdraw funds.
The next digitalization variable is cell phone ownership, which has a positive but not statistically significant effect. This is reasonable because a higher rate of cell phone penetration makes access to information easier and wider, so that it is easier for people to obtain information on the availability of Islamic financial services. The last variable, the control variable of average length of schooling, has a negative effect on the level of Islamic financial inclusion in the provinces of Indonesia at the 5% significance level. This could happen because the longer a person is educated, the higher the chance of getting a better job, and good jobs are generally found in the Jakarta area. Currently there is a phenomenon of centralized economic growth in Jakarta, so it is natural that people in the regions choose to migrate to the capital to seek better income, but this actually has a negative impact on Islamic financial inclusion in their home regions. The constant in the model is also positive and significant at the 1% level, so it can be concluded that there are other variables outside the model that contribute to the level of Islamic financial inclusion in every province in Indonesia.
Conclusion
The level of Islamic financial inclusion in each province in Indonesia varies widely from year to year, but tends to follow a downward trend. Islamic financial services are increasingly inclusive in provinces with a good level of economic development. In addition, provinces with Muslim majority populations also have high levels of Islamic financial inclusion, in contrast to provinces where Muslim communities are a minority, which tend to have low levels of Islamic financial inclusion. The level of Islamic financial inclusion is measured through three dimensions, namely accessibility, availability and utilization. From these measurements, it can be concluded that the availability of Islamic financial institutions, through their branch offices, is already good compared to utilization by the community through the provision of financing and to accessibility as measured by the amount of third-party funds. Therefore, one of the priority ways to increase Islamic financial inclusion at this time is by increasing the amount of third-party funds, without neglecting the utilization factor available to the public through the service channels of Islamic financial institutions.
In the current digital era, the digitization variable does not have a direct impact on the level of Islamic financial inclusion, but the existence of sharia fintech service platforms has a positive and significant impact. The presence of fintech has proven able to increase the level of Islamic financial inclusion.
|
v3-fos-license
|
2019-01-22T22:34:52.955Z
|
2019-01-01T00:00:00.000
|
58580967
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/jia2.25225",
"pdf_hash": "8dc0f10f9d53542055151ca59f4eecc87d05cbbb",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42452",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "8dc0f10f9d53542055151ca59f4eecc87d05cbbb",
"year": 2019
}
|
pes2o/s2orc
|
Beyond HIV prevention: everyday life priorities and demand for PrEP among Ugandan HIV serodiscordant couples
Abstract Introduction Pre‐exposure prophylaxis (PrEP) to prevent HIV infection is being rolled out in Africa. The uptake of PrEP to date has varied across populations and locations. We seek to understand the drivers of demand for PrEP through analysis of qualitative data collected in conjunction with a PrEP demonstration project involving East African HIV serodiscordant couples. Our goal was to inform demand creation by understanding what PrEP means – beyond HIV prevention – for the lives of users. Methods The Partners Demonstration Project evaluated an integrated strategy of PrEP and antiretroviral therapy (ART) delivery in which time‐limited PrEP served as a “bridge” to long‐term ART. Uninfected partners in HIV serodiscordant couples were offered PrEP at baseline and encouraged to discontinue once infected partners had taken ART for six months. We conducted 274 open‐ended interviews with 93 couples at two Ugandan research sites. Interviews took place one month after enrolment and at later points in the follow‐up period. Topics included are as follows: (1) discovery of serodiscordance; (2) decisions to accept/decline PrEP and/or ART; (3) PrEP and ART initiation; (4) experiences of using PrEP and ART; (5) PrEP discontinuation; (6) impact of PrEP and ART on the partnered relationship. Interviews were audio‐recorded and transcribed. We used an inductive, content analytic approach to characterize meanings of PrEP stemming from its effectiveness for HIV prevention. Relevant content was represented as descriptive categories. Results Discovery of HIV serodiscordance resulted in fear of HIV transmission for couples, which led to loss of sexual intimacy in committed relationships, and to abandonment of plans for children. As a result, partners became alienated from each other. PrEP countered the threat to the relationship by reducing fear and reinstating hopes of having children together. Condom use worked against the re‐establishment of intimacy and closeness. By increasing couples’ sense of protection against HIV infection and raising the prospect of a return to “live sex” (sex without condoms), PrEP was perceived by couples as solving the problem of serodiscordance and preserving committed relationships. Conclusions The most effective demand creation strategies for PrEP may be those that address the everyday life priorities of potential users in addition to HIV prevention. Clinical Trial Number NCT02775929
| INTRODUCTION
Pre-exposure prophylaxis (PrEP) has proven highly effective in preventing HIV infection when taken regularly [1][2][3]. Moreover, PrEP can be delivered safely, with high uptake and adherence by users [4]. A number of sub-Saharan African countries have acted on these findings to begin making PrEP publicly available. South Africa launched an initiative to provide PrEP to female sex workers in early 2016 [5]. The Strategic Plan for 2017 to 2022 expands access to include men who have sex with men (MSM), injection drug users and youth [6]. In May 2017, the Government of Kenya launched an initiative to provide PrEP as part of combination HIV prevention. Currently, PrEP is being made available to serodiscordant couples, sex workers, adolescent girls/young women and other populations at high risk for HIV infection in a variety of delivery settings [7,8]. The Ugandan government is offering PrEP to key populations at a number of accredited health facilities around the country [9]. Several other countries in East, West and southern Africa have PrEP implementation projects [10].
Information currently available on PrEP uptake in Africa suggests a varied initial response. As of July 2018, the number of PrEP initiations in Kenya stood at two-thirds of national targets, in Zimbabwe at 50% of targets, and in South Africa at 41% of targets [11]. In South Africa's step-wise population-based approach to PrEP scale-up, 13% of sex workers offered PrEP in the initial stage (starting June 2016) initiated, in contrast to 54% of MSM in the second stage (starting April 2017), and 6% of university students in the third stage (October 2017) [12]. Reports of challenges encountered and lessons learned from implementation initiatives are now appearing.
Retention is developing into a major challenge, with individuals initiating or expressing interest in PrEP choosing not to continue due to medication side effects, fear of stigma, negative attitudes from clinical staff and other considerations [13,14]. Underestimation of risk may impede uptake, insofar as at-risk individuals perceive their HIV risk to be low [15]. Lack of awareness continues to be cited as a barrier to PrEP uptake [16].
In sum, limited initial demand is emerging as an important influence on the uptake of PrEP in Africa. Activities aimed at generating interest in new products and technologies by linking them to the priorities of prospective users are known as demand creation [17]. Creating demand for PrEP by linking it to user priorities requires understanding what those priorities are. User priorities may extend beyond HIV prevention and may differ across population groups. This paper addresses PrEP demand creation for East African serodiscordant couples by examining their life priorities. We approach this using qualitative data to describe what PrEP as an effective HIV prevention tool meant to a group of Ugandan serodiscordant couples using it as part of the Partners Demonstration Project.
| Study design and setting
This was a qualitative study carried out in conjunction with the Partners Demonstration Project. The Partners Demonstration Project (Clinicaltrials.gov NCT02775929) was an open-label evaluation of integrated delivery of PrEP and antiretroviral therapy (ART) for higher-risk HIV serodiscordant couples in Kenya and Uganda [4,18]. A validated, empiric risk scoring tool was utilized to recruit couples at higher risk of HIV transmission, who would benefit from the integrated strategy of PrEP and ART delivery [19].
The integrated delivery strategy offered time-limited PrEP to uninfected partners in serodiscordant couples as a "bridge" to long-term ART in infected partners. Uninfected partners were offered PrEP at baseline and encouraged to discontinue once infected partners had taken ART for six months. Counsellors also encouraged condom use to prevent sexually transmitted infections and unintended pregnancies [20]. At the outset of the project, only individuals with CD4 counts ≤350 were eligible for ART initiation. Ugandan national treatment guidelines were revised in 2016 to include any HIV-infected person in a serodiscordant relationship [9].
| Sampling and recruitment
Purposeful sampling was used to identify participants in the qualitative study [21]. We sought to purposefully sample couples with varying experiences of PrEP and ART. We included couples in which uninfected partners accepted and declined PrEP at enrolment, and couples in which the infected partner was eligible and ineligible for ART (e.g. based on CD4 < 350 prior to 2016). Couples in these categories were referred to the qualitative study by project staff. Research assistants approached these couples during follow-up visits to describe the qualitative study and invite participation. Ninety-three couples accepted and were enrolled.
| Data collection
Qualitative data collection for this study took place at the Partners Demonstration Project's two Ugandan sites: the Infectious Diseases Institute -Kasangati, in Kampala; and Kabwohe Clinical Research Center, in the rural southwest. Multiple open-ended interviews were carried out with couples participating in the qualitative study.
Interviews took place approximately one month after enrolment in the Partners Demonstration Project, and at later points in the follow-up period. Examples of interview topics included: (1) discovery of serodiscordance; (2) decisions to accept/decline PrEP and/or ART; (3) experiences of PrEP and ART initiation; (4) experiences of using PrEP and ART; (5) PrEP discontinuation; and (6) the impact of PrEP and ART on the partnered relationship. Partners took part in initial interviews together, to allow insight into relationship dynamics. Subsequent interviews were a mix of individual and joint sessions, depending on the topics to be discussed. Two hundred and seventy-four interviews were completed. One hundred and forty-eight were interviews with both members of the couple; 126 were individual interviews.
Interviews were conducted by trained Ugandan research assistants in local languages (Luganda, Runyankore), using interview guides. Each interview type had a different guide, tailored to the experience being investigated. In the initial joint interviews, the interviewer took notes on relationship dynamics, guided by a predesignated list of relationship characteristics.
Interviews were conducted in private settings in locations selected by interviewees. Participants provided written consent for interviews, which were audio-recorded and lasted about an hour. Audio-recordings were transcribed into English by the research assistants. Transcripts were reviewed for content and technique in weekly feedback phone calls and emails with a supervisor to ensure data quality. Besides the transcripts, the research assistants prepared "debriefs" summarizing interview content. Interview data were collected from November 2013 through December 2016.
| Data analysis
An inductive, content analytic approach was used to analyse the qualitative data [22]. Transcripts were initially reviewed as they were produced, to provide an overall sense of the content. A coding scheme was developed from this process; the dataset was coded using Atlas.ti qualitative data management software. For the analysis reported here, the primary and the senior author reviewed coded data that spoke to meanings stemming from the effectiveness of PrEP for preventing HIV transmission. Where indicated by the review of coded data, selected complete transcripts were re-read.
This process led to the preliminary specification of concepts addressing the research question. Coded and transcript data were then repeatedly reviewed to assign examples to the preliminary concepts. The addition of examples served to refine and elaborate the concepts, transforming them into descriptive categories. Statements summarizing the content of each category were added, along with evidence in the form of illustrative quotes from interview transcripts. Finally, the categories were linked to "tell the story" of the meanings of PrEP that emerged from the analysis.
| Participant characteristics
Couples eligible for the Partners Demonstration Project were ≥18 years of age, sexually active and reported intending to remain together.
HIV-infected and uninfected partners in the qualitative study were in their early thirties. Approximately half (46%) of uninfected partners were female. Median time since discovering serodiscordant status was two months at baseline (Range: 1 to 12). Median time living together was three years (Range: 1 to 9). Almost all couples (N = 91, 98%) reported being married to each other. Fifty-three percent of couples (N = 49) had children together at baseline.
Among qualitative study participants, 88% (N = 82) of uninfected partners initiated PrEP at Partners Demonstration Project enrolment. PrEP initiation increased to 92% (N = 86) during the follow-up period. Sixty-six percent (N = 61) of infected partners among qualitative study participants were eligible for ART at enrolment. Sixty-seven percent (N = 40) of eligible individuals initiated ART at enrolment; all initiated ART during the follow-up period. Twenty-one (23%) couples in the qualitative sample reported ending their relationship after enrolling in the Partners Demonstration Project (Table 1).
| Discovery of HIV serodiscordance threatened partnered relationships
Discovery of HIV serodiscordance resulted in fear of HIV transmission for couples. This in turn led to the loss of sexual interest and sexual intimacy between partners, distancing them from each other. Serodiscordance also suggested infidelity, creating anger and distrust, and exacerbating alienation.
Couples responded to the discovery of serodiscordance in different ways. Some took steps to reduce transmission risk, by decreasing the frequency of sexual intercourse, abstaining from sex altogether, starting to use condoms, or making efforts to use them more frequently. These risk reduction steps further eroded intimacy, creating additional distance between partners (Table 2, A, 1).
Couples also responded to HIV serodiscordance by abandoning or postponing plans for having children. Loss of family building as a shared goal meant losing a reason to be together. Reasons for changing plans centred on fear of not being able to support children into adulthood if the HIV-infected partner died or became incapacitated, or if the HIV-negative partner became infected (Table 2, A, 2).
As the bond between them weakened, some couples considered separation. Their relationships were not able to withstand the cumulative stress of serodiscordance on top of economic and other challenges. These couples saw separation as the most reliable means of ensuring the uninfected partner remained free of HIV, or found coping with risk reduction measures unacceptably burdensome (Table 2, B, 1).
| PrEP countered threats to relationships by reducing fear, and reinstating hopes and plans for family building
As indicated above, couples receiving PrEP and ART as part of the integrated strategy were encouraged to combine condoms with antiretrovirals for maximum protection against unwanted pregnancies as well as sexually transmitted infections [20]. While they understood this, couples also tended to continue to think of condoms as a means of preventing HIV. They described "feeling safer" as a result of adding PrEP as an HIV prevention method to the method(s) they were already using. Couples often characterized PrEP as "back-up" to condoms, protecting them if condoms broke or failed for another reason. Insofar as multiple protection methods reduced fear of HIV, the threat to the relationship also decreased, and alienated partners once again grew closer to each other. A reawakening of sexual desire was often part of this new closeness (Table 2, B, 2). Also, couples came to accept PrEP as a safe and simple alternative to artificial insemination for safe conception. With increasing PrEP experience, viral suppression in the HIV-infected partner, and support from project staff, couples learned to time sex without a condom to coincide with peak fertility periods, conceiving, as one woman put it, "like human beings do." In this way, the hopes of HIV serodiscordant couples to have children together were restored through PrEP; the restoration of hope provided a reason for remaining in the relationship (Table 2, C, 2).
| Couples struggled to combine PrEP with condom use, as they experienced condoms as working against the re-establishment of intimacy and closeness made possible through PrEP
Qualitative study couples described working hard to follow the advice of staff and integrate condoms and PrEP in their sex lives. But they struggled, since they also experienced condoms as working against newly re-established intimacy and closeness.
Couples complained that condoms interfered with sexual pleasure and performance, causing them to once again lose interest in sexual activity. They found condoms to be especially problematic in a committed relationship, in that they suggest sex with outside partners. The introduction of condoms into a committed relationship by one or another partner could be offensive, connoting a lack of trust and suspicion of unfaithfulness ( Table 2, C, 1).
Some couples adopted a compromise position between competing desires for HIV prevention and sexual satisfaction: adhering to condoms whenever possible, while also periodically indulging the urge not to use them. Others found themselves able to adjust to condom use over time (Table 2, C, 2).
Couples struggling with condom use found comfort in the hope that PrEP would eventually eliminate the need for them.
They looked forward to a time when increasing recognition of the effectiveness of PrEP would make barrier methods for HIV prevention unnecessary, allowing for what they termed "live sex" (Table 2, C, 3).
A return to "live sex" made possible through PrEP promised increased intimacy and closeness. Live sex was considered better sex, increasing sexual pleasure in the relationship. Moreover, an uninfected partner remaining free of HIV as a result of PrEP meant that partner would be available to provide for the HIV-infected partner, should his or her health deteriorate. Some uninfected partners saw PrEP use as a way of sharing the burden of HIV prevention in the relationship. Many qualitative study participants spoke of PrEP as a solution to the "problem" of serodiscordance, a way of avoiding HIV transmission while remaining in the relationship (Table 2, C, 4).
| DISCUSSION
This qualitative analysis sought to characterize meanings of PrEP beyond HIV prevention among Ugandan serodiscordant couples participating in the Partners Demonstration Project. Our findings reveal the primary meaning of PrEP for these couples to be its role in reversing the alienation and discord introduced into committed relationships by the discovery of serodiscordance. The roots of this alienation are complex, beginning with fear of HIV infection, and expanding to include larger life disappointments, such as feeling unable to fulfil personal goals and cultural expectations for family building, and experiencing the erosion of intimacy and trust that comes with condom use. For couples participating in this qualitative study, PrEP reversed alienation by reducing fear, making safe conception possible without recourse to "artificial" methods, and awakening the hope for increased satisfaction and closeness through a return to "live sex. " These effects combined to strengthen and restore threatened relationships.
HIV serodiscordant couples participating in this qualitative study reported PrEP strengthened relationships by reducing fear of HIV transmission and increasing sexual intimacy. These themes have also been reported in other couples-focused analyses [23][24][25][26][27][28], and among MSM PrEP users, most of whom are not in a known HIV serodiscordant relationship [29,30]. In this analysis, we draw out the larger significance of couples' views, to consider their implications for future PrEP demand creation initiatives. There has been considerable debate over whether access to PrEP and ART would result in the abandonment of condoms for prevention of HIV and other sexually transmitted infections [31][32][33][34]. A growing body of research suggests this is not necessarily the case [23,30,35,36,37]. When condoms are not used, there may be several reasons: pursuit of pleasure, desire to be free of barriers and hope for children. In some circumstances, intimacy and relationship strengthening may take precedence over prevention of infection [38,39].
The desire for "live sex" (sex without condoms) was strong in this sample of serodiscordant couples, and the hope that PrEP might open the door to "live sex" in the relationship was seen as an important benefit. In the meantime, couples tried hard to follow the recommendation of staff to use condoms consistently. Some couples acknowledged engaging in sex without condoms, but characterized this as the exception in an overall pattern of condom use, stemming from the desire to conceive a child, or to "treat" themselves to a more pleasurable sexual experience.
PrEP served as a "bridge" to ART in the integrated strategy. Uninfected partners in serodiscordant couples took PrEP until their infected partners had used ART for six months. Whereas overall, PrEP users "felt safer" as a result of taking the medication, they were less confident of the protection afforded by their partner's ART. The concept of "treatment as prevention" was understood, but not widely accepted by PrEP users participating in this qualitative study. As a result, ART did not have the same meaning for couples, or exert the same impact on the serodiscordant relationship [40].
The question arises as to whether PrEP may have different meanings for male and female users. Insofar as women may face greater difficulty in negotiating condom use, they may disproportionately benefit from and appreciate a method of HIV prevention that allows them more agency and control. Men, in contrast, may interpret clinic visits and daily medication as part of "women's domains, " and feel more burdened by PrEP use as a result [41]. Characterization of gender differences in meanings of PrEP for serodiscordant couples was not a focus of the analysis reported here.
Our results contribute to the emerging critique of PrEP demand creation strategies that are focused narrowly on risk reduction [38,[42][43][44]. Reducing HIV risk is an important argument to make for PrEP uptake, but adding messaging that reflects what users describe as important additional benefits may increase interest and demand for PrEP.
This study and others suggest that serodiscordant couples see HIV transmission risk reduction through PrEP as the means to larger and more inherently appealing ends: freedom from fear during sex, reinstatement of plans for children, a return to "live sex," and ultimately, the preservation of a committed relationship. The importance of relief from fear as a benefit of PrEP use has also been noted in qualitative research with male PrEP users participating in iPrEx [30]. The cultural as well as personal significance of producing children, and the stigma of infertility, have been described for Ugandan serodiscordant couples [45]. Options for sex without condoms might be included in counselling and education sessions with couples considering PrEP use. Such sessions would make clear the relative roles of PrEP and condoms in preventing HIV and other sexually transmitted infections, while defining decisions about condom use as the choice and responsibility of couples themselves. Couples' own characterizations of the role of PrEP in preserving relationships could be shared in descriptive materials. Messaging that maps PrEP onto the everyday life priorities of potential users in an "optimistic" way may ultimately prove more effective for demand creation than framing messaging content in terms of HIV prevention alone.
Meaningful efforts to inform PrEP demand creation will be grounded in a recognition of the varying approaches to implementation being adopted across Africa. For example, in Uganda, scale-up efforts were initially led by academic researchers and advocates, who lobbied the government for access to PrEP through the public health system. Guidelines for providing PrEP were developed by a multi-stakeholder working group and approved in July 2017. The current (2017 to 2018) programme for distribution is spearheaded by the AIDS Control Programme (ACP) of the Ministry of Health, which has worked to create demand through training for healthcare providers, increasing capacity for HIV testing in clinics, and instituting a clinic accreditation programme. Scale-up is taking place in public health clinics across the country; the number of clinics distributing PrEP is being increased each year. As of October 2018, reported PrEP initiations totalled 9000 to 9500 [11].
In Kenya, a public messaging campaign directly targeting prospective PrEP users plays a prominent role in scale-up. The adoption of positive rather than "fear-based" messaging is a core principle of the campaign. Positive messaging highlights PrEP's contribution to happiness and wellbeing, rather than its role in reducing the risk of acquiring a potentially life-threatening disease. Messaging materials are distributed widely outside as well as inside the healthcare system [46,47].
Fitting PrEP demand creation strategies to the life priorities and meanings of PrEP for users is a principle that spans specific population groups. However, it requires understanding the life experiences and goals of group members, from their own points of view. Research identifying the nurturing of intimate relationships as a priority for couples, heterosexual and MSM alike, is a first step. Similar inquiries focusing on other key populations may help to effectively address suboptimal PrEP uptake and/or retention in those groups [48,49].
The opportunity to investigate couples' direct experiences with PrEP through multiple interviews conducted both jointly and individually is an important strength of this study, as they add to the validity of the findings and the level of detail presented. However, we acknowledge that these data reflect the perspectives of Ugandan serodiscordant couples who had recently learned of and mutually disclosed their HIV serodiscordant status, who in most cases defined themselves as couples (rather than as having separated), and who were participating in a PrEP demonstration project. The experiences of couples who are not research participants, have long lived with serodiscordance, do not remain together, and/or whose nationalities and cultural backgrounds differ may not be the same. The similarities observed for Kenyan serodiscordant couples [23][24][25][26] suggest the patterns described here are not characteristic only of Ugandans, however. Finally, the possibility that the qualitative interview data may be subject to social desirability bias, in which interviewees provide responses they believe to be "correct," or "what the interviewer wants to hear," must be acknowledged.
| CONCLUSION
Because of its effectiveness in preventing HIV transmission, PrEP represented a solution to the problem of serodiscordance for Ugandan couples. Decreased fear during sex, renewed hopes for family building, and the prospect of eventual "live sex" were intermediate benefits serving this larger end. The most effective PrEP demand creation strategies may be those that meaningfully address the everyday life priorities of potential users, as well as HIV prevention. Understanding the meanings of PrEP for potential users can inform demand creation for PrEP scale-up.
AUTHORS' CONTRIBUTIONS
NCW and MAW designed the qualitative research. ENJ and NCW analysed the data for this report, and wrote and revised the manuscript. MAW and EEP provided general supervision for the data collection process in Uganda, contributed to the data analytic process and reviewed and commented on drafts of the manuscript. TRM, ETK and SBA supervised data collection at the qualitative study sites. JMB and CLC provided feedback on emerging findings from the qualitative study and reviewed and provided comments on the manuscript. All authors critically reviewed and approved the final version.
ACKNOWLEDGEMENTS
We are grateful to the couples who contributed to this study by taking part in interviews. Justine Abenaitwe, Robert Baijuka, Brenda Kamusiime, Jackie Karuhanga, Vicent Kasiita, Grace Kakoola Nalukwago, Florence Nambi and John Bosco Tumuhairwe collected the qualitative data. Katherine K. Thomas and Lara Kidoguchi provided data on characteristics of qualitative study participants. Charles Brown contributed technical information on the current availability of PrEP in Uganda. We thank the Honorable Elioda Tumwesigye for his guidance and support. Research and clinical staff at the Infectious Diseases Institute -Kasangati and the Kabwohe Clinical Research Center provided general support for the qualitative research.
FUNDING
This work was funded by the US National Institutes of Health (R01 MH101027, Norma C. Ware, PI). The Partners Demonstration Project was funded by the National Institute of Mental Health of the US National Institutes of Health (grant R01 MH095507), the Bill & Melinda Gates Foundation (grant OPP1056051), and through the generous support of the American people through the US Agency for International Development (cooperative agreement AID-OAA-A-12-00023). Gilead Sciences donated the PrEP medication but had no role in data collection or analysis. The results and interpretation presented here do not necessarily reflect the views of the study funders.
|
v3-fos-license
|
2019-09-04T16:30:45.070Z
|
2019-09-03T00:00:00.000
|
201815237
|
{
"extfieldsofstudy": [
"Medicine",
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11095-019-2683-7.pdf",
"pdf_hash": "fcd4454b8b1ff2b835ff2cb141c0562b7dceb961",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42453",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"sha1": "fcd4454b8b1ff2b835ff2cb141c0562b7dceb961",
"year": 2019
}
|
pes2o/s2orc
|
Multicomponent Conjugates of Anticancer Drugs and Monoclonal Antibody with PAMAM Dendrimers to Increase Efficacy of HER-2 Positive Breast Cancer Therapy
Purpose Conjugation of nanocarriers with antibodies that bind to specific membrane receptors that are overexpressed in cancer cells enables targeted delivery. In the present study, we developed and synthesised two PAMAM dendrimer-trastuzumab conjugates that carried docetaxel or paclitaxel, specifically targeted to cells which overexpress HER-2. Methods 1H NMR, 13C NMR, FTIR and RP-HPLC were used to analyse the characteristics of the products and assess their purity. The toxicity of the PAMAM-trastuzumab, PAMAM-doc-trastuzumab and PAMAM-ptx-trastuzumab conjugates was determined using the MTT assay and compared with free trastuzumab, docetaxel and paclitaxel toward HER-2-positive (SKBR-3) and -negative (MCF-7) human breast cancer cell lines. The cellular uptake and internal localisation were studied using flow cytometry and confocal microscopy, respectively. Results The PAMAM-drug-trastuzumab conjugates in particular showed extremely high toxicity toward the HER-2-positive SKBR-3 cells and very low toxicity towards the HER-2-negative MCF-7 cells. As expected, the HER-2-positive SKBR-3 cell line accumulated trastuzumab from both conjugates rapidly; surprisingly, however, a large amount of the PAMAM-ptx-trastuzumab conjugate was also observed in the HER-2-negative MCF-7 cells. Confocal microscopy confirmed the intracellular localisation of the analysed compounds. The key result of fluorescent imaging was the identification of strong selective binding of the PAMAM-doc-trastuzumab conjugate with HER-2-positive SKBR-3 cells only. Conclusions Our results confirm the high selectivity of PAMAM-doc-trastuzumab and PAMAM-ptx-trastuzumab conjugates for HER-2-positive cells, and demonstrate the utility of trastuzumab as a targeting agent. Therefore, the analysed conjugates present a promising approach for improving the efficacy of targeted delivery of anticancer drugs such as docetaxel or paclitaxel. Electronic supplementary material The online version of this article (10.1007/s11095-019-2683-7) contains supplementary material, which is available to authorized users.
INTRODUCTION
Since their development, researchers have recognised the potential of nanocarriers as drug delivery systems. There are two strategies by which drug delivery can be achieved with nanosystems: passive delivery, which exploits the enhanced permeability and retention effect (EPR effect) to increase the penetration of nanocarriers into solid tumours; and active delivery, which is achieved by covalent conjugation of the nanocarrier to a ligand or antibody which can bind to a specific receptor that is overexpressed in cancer cells. Previous work dedicated to anticancer drug development has focused on achieving targeted delivery to the tumour, reducing adverse effects and increasing antitumour efficacy (1).
Trastuzumab is a recombinant, humanised IgG1 monoclonal antibody that selectively binds to human epidermal growth factor receptor 2 (EGFR2). Through binding to subdomain IV of the extracellular domain of overexpressed human epidermal growth factor receptor-2 (HER-2) receptors, trastuzumab blocks the receptor and inhibits excessive proliferation of HER-2-positive cancer cells. Previous research has indicated that this inhibition of proliferation is a result of cell cycle arrest in the G1 phase (2). Therefore, the combination of trastuzumab and a taxane is a first-line therapy for the treatment of various cancers including lung, ovarian and metastatic breast cancer (MBC). In HER-2-positive MBC patients, docetaxel (100 mg/m2) given every three weeks in combination with trastuzumab (at a 4 mg/kg loading dose followed by 2 mg/kg once weekly) resulted in a high overall response rate (ORR), overall survival (OS), response duration, time to progression and time to treatment failure (TTF), and the toxicity of the drug combination was only slightly increased compared with docetaxel alone (3). Similarly, paclitaxel, which is the preferred single chemotherapeutic agent for recurrent or metastatic breast cancer according to National Comprehensive Cancer Network (NCCN) guidelines (4), resulted in improved progression-free survival (PFS) in HER-2-positive patients when administered at a dosage of 80 mg/m2 weekly in combination with a trastuzumab monoclonal antibody at 4 mg/kg (loading dose, followed by weekly administration of 2 mg/kg), compared with paclitaxel alone (5).
The use of dendrimers as carriers of anticancer drugs or monoclonal antibodies is well known (6). Teow et al. demonstrated the use of a third generation (G3) polyamidoamine (PAMAM) dendrimer as a drug carrier which increased the permeability of the poorly soluble drug, paclitaxel. Cytotoxicity studies have shown that the conjugation of lauryl chains and paclitaxel to G3 dendrimers significantly (p < 0.05) increased the cytotoxicity of the drug toward the human Caco-2 cell line as well as primary cultures of porcine brain endothelial cells (PBECs). The conjugate showed an approximate 12-fold increase in permeability across both the apical and basolateral cell monolayers compared with paclitaxel alone (7). Paclitaxel has also been conjugated to hydroxyl-terminated PAMAM G4 dendrimers and bispolyethylene glycol (bisPEG) polymer to achieve enhancement of drug solubility and anticancer activity. The cytotoxicity of the PAMAM dendrimer-succinic acid-paclitaxel conjugate towards A2780 human ovarian carcinoma cells was increased 10-fold compared with the free, nonconjugated drug (8). The in vitro studies of Miyano et al. have confirmed the efficacy of the monoclonal antibody conjugated to the dendrimer. The G6 PAMAM dendrimer was modified with two amino acids, lysine and glutamic acid (KG6E), and then trastuzumab and the fluorescent dye AlexaFluor 488 were attached. The results confirmed that the KG6E-trastuzumab conjugate specifically bound to SKBR-3 (HER-2-positive) cells in a dose-dependent manner, with low binding affinity for MCF-7 (HER-2-negative) cells. In addition, the conjugate was significantly internalised by SKBR-3 cells and subsequently trafficked to the lysosomes (9).
We believe that we have developed an innovative delivery system which combines both strategies. In the present study, trastuzumab was used in a PAMAM-drug-trastuzumab conjugate carrying paclitaxel (ptx) or docetaxel (doc) in order to specifically target HER-2-positive SKBR-3 cells. Over 20% of breast cancers exhibit overexpression of human epidermal growth factor receptor-2 (HER-2) (10); therefore, this receptor may represent an attractive target for nanoparticles loaded with anticancer drugs. Moreover, dendrimer conjugation significantly changes the biodistribution of low molecular weight drugs, affording the opportunity to achieve disease-specific targeting while reducing delivery to sites of toxicity (11). Our aim was to create a conjugate which will release the drugs when exposed to low pH, such as at the site of a solid tumour, by breaking the pH-sensitive linker between the monoclonal antibody and the PAMAM-drug conjugate. This will then enable passive delivery of paclitaxel or docetaxel.
For this purpose, we analysed the cytotoxicity of PAMAM-drug-trastuzumab conjugates in HER-2-positive (SKBR-3) and -negative (MCF-7) human breast cancer cells. The internalisation efficiencies and cellular trafficking were determined in order to evaluate the potential application of PAMAM dendrimers as HER-2-targeted, fluorescently labelled drug carriers. Our results indicate that PAMAM-drug-trastuzumab conjugates have increased toxicity toward HER-2-positive human breast cancer cells compared with the free drug or the PAMAM-trastuzumab conjugate. Therefore, our new conjugates could represent potential candidates for HER-2-expressing tumour targeting, and may pave the way for improvements in the effectiveness of therapy for this condition, which is the most common cancer in women.
Synthesis of PAMAM Docetaxel/Paclitaxel Conjugate
The linking of the drug to the dendrimer was done using a two-step covalent method (patent pending P.420273).
Briefly, 12.5 μmol of drug (docetaxel or paclitaxel) was dissolved in 3 ml of anhydrous DMSO at 25°C and a 3-fold molar excess of N-(3-dimethylaminopropyl)-N′-ethylcarbodiimide hydrochloride (EDC) was added. The mixture was stirred for 0.5 h. Then 25 μmol of succinic acid was slowly added to the solution while stirring was maintained. The reaction was then stirred for 24 h. Khandare et al. showed that the use of EDC and other carbodiimides can also be appropriate for the synthesis of biomolecules and their functionalization with other molecules; therefore, we chose the EDC/carbodiimide system instead of succinic anhydride (12). The choice of anhydrous DMSO as the synthesis solvent was determined by the high solubility of paclitaxel and docetaxel, as well as the other reagents, in it. Using anhydrous DMSO, we performed the reactions in an inert gas atmosphere, as previously published (13,14).
The resulting drug-SA conjugate was stirred for 3 h at room temperature in the presence of 20 μmol of EDC. Then 1 mL of PAMAM G4 solution in methanol was evaporated to remove the methanol, and the residue was added to the reaction mixture and stirred at room temperature for 2 days.
The PAMAM G4-drug conjugate was purified by ultrafiltration on an Amicon Ultra-3 K (molecular weight cut-off, MWCO = 3 kDa). 1H NMR, 13C NMR and FTIR were used to analyze the purity of the products and to ascertain the level of PAMAM and docetaxel/paclitaxel conjugation. 1H NMR and 13C NMR spectra were recorded on Bruker Avance III DRX-600 and 500 MHz spectrometers, using deuterated DMSO-d6 as the solvent. The FTIR spectra were collected with an ATI Mattson FTIR spectrometer, and samples were measured as thin films in KBr crystals. The analytical data can be found in the supplementary material.
Activation of Trastuzumab. SMCC was dissolved in a small volume of DMF and diluted by adding 0.1 M PBS (phosphate-buffered saline), pH 7.6, containing 5 mM EDTA, to obtain 1 mg/ml. The solution was added to trastuzumab. The mixture was incubated for 1 h at room temperature (RT). In the next step, the product was purified and buffer-exchanged into PBS, pH 7.0, using an Amicon Ultra-30 K column (MWCO = 30 kDa).
Introduction of Thiol Groups onto the PAMAM G4 Dendrimer Surface. Traut's reagent (2-iminothiolane) converts primary amines into thiols in the pH range 7-10; however, its half-life in solution decreases as the pH increases. Modification with Traut's reagent is very efficient and occurs rapidly at slightly basic pH. To introduce thiol groups onto the G4 dendrimer surface, the primary amine groups were reacted with a 10:1 molar excess of Traut's reagent in 0.1 M PBS buffer, pH 8.0, at room temperature under N2 for 1 h. Thiolated PAMAM G4 was purified and buffer-exchanged into PBS, pH 7.0, by ultrafiltration on an Amicon Ultra-3 K column.
The Reaction of the Modified PAMAM G4 Dendrimer with the Activated Trastuzumab. Derivatized trastuzumab was reacted with thiolated PAMAM G4 dendrimer at a 1:12 molar ratio. The reaction was conducted in PBS, pH 7.0, at 25°C for 24 h. Finally, the PAMAM-trastuzumab conjugate was purified from the excess of thiolated PAMAM G4 on an Amicon Ultra-30 K (MWCO 30 kDa). The final stoichiometric ratio for the PAMAM-drug-trastuzumab conjugate was 1:1:1.
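As a quick arithmetic aid for the 1:12 antibody:dendrimer ratio described above, the snippet below converts molar amounts into masses. The molecular weights are nominal literature values for trastuzumab (~148 kDa) and amine-terminated PAMAM G4 (~14.2 kDa), and the batch size is illustrative; none of these figures is reported in the paper.

```python
MW_TRASTUZUMAB = 148_000   # g/mol, nominal literature value (assumption)
MW_PAMAM_G4    = 14_215    # g/mol, amine-terminated PAMAM G4 (assumption)

def mass_mg(n_umol: float, mw: float) -> float:
    """Mass in mg corresponding to n_umol micromoles of a species of MW g/mol."""
    return n_umol * 1e-6 * mw * 1e3

n_antibody_umol = 0.05                    # illustrative batch size
n_dendrimer_umol = 12 * n_antibody_umol   # 1:12 antibody:dendrimer molar ratio

print(f"trastuzumab: {mass_mg(n_antibody_umol, MW_TRASTUZUMAB):.2f} mg")
print(f"PAMAM G4:    {mass_mg(n_dendrimer_umol, MW_PAMAM_G4):.2f} mg")
```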
Reverse-phase high-performance liquid chromatography (RP-HPLC) was used to analyze the purity of the products and to ascertain the level of PAMAM and trastuzumab conjugation. Solvents used for HPLC analysis were of HPLC grade: iPrOH, MeOH and MeCN were from Sigma-Aldrich, trifluoroacetic acid was from J.T.Baker (9470), and Milli-Q water was used. All experiments were performed on two FPLC/HPLC systems: (1) an AKTA Purifier two-pump system equipped with UV-900 monitoring, pH and conductivity probes and a Frac-920 fraction collector; analysis on the AKTA was performed at room temperature (25°C); and (2) a Shimadzu Prominence UFLC system equipped with LC-20 AD isocratic pumps, an RF-20A fluorescence detector, an SPD-M20A diode array detector for UV-Vis monitoring and a CTO-20ASvp column oven set at 75°C. Initially a SOURCE uRPC C2/C18 ST 4.6/100 column was used, but it appeared to be too hydrophobic for dendrimer and antibody analysis; therefore, a Jupiter 4u Proteo 90A 2.0/100 column was used for all presented results.
Synthesis of FITC Labeled Docetaxel/Paclitaxel and PAMAM-ptx/PAMAM-Doc Conjugate
The linking of FITC to the dendrimer was done using a two-step covalent method. Briefly, 6 μmol of drug (docetaxel or paclitaxel) was dissolved in 2 ml of anhydrous DMSO at 25°C and a 3-fold molar excess of EDC was added. The mixture was stirred for 3 h. Then 7 μmol of FITC was slowly added to the drug solution while stirring was maintained. The reaction mixture was stirred for 24 h. To obtain the PAMAM-drug-trastuzumab conjugate, 5.6 μmol of PAMAM G4 was added to the resulting products (FITC-labeled docetaxel or FITC-labeled paclitaxel) and stirred at room temperature for 2 days. Then trastuzumab was reacted and the product was purified as described earlier. The final stoichiometric ratio for the drug:FITC conjugate was 1:1 and for the PAMAM-drug-trastuzumab-FITC conjugate 1:1:1:1.
The FITC-containing conjugates were purified by ultrafiltration on an Amicon Ultra-3 K (molecular weight cut-off, MWCO = 3 kDa). 1H NMR and 13C NMR were used to analyze the purity of the products and to ascertain the level of FITC conjugation. 1H NMR and 13C NMR spectra were recorded on Bruker Avance III DRX-600 and 500 MHz spectrometers, using deuterated DMSO-d6 as the solvent.
Cell Culture. The HER-2-negative human breast adenocarcinoma (MCF-7) cell line was grown in DMEM medium supplemented with GlutaMAX and 10% (v/v) fetal bovine serum (FBS). The HER-2-positive human breast adenocarcinoma (SKBR-3) cell line was grown in McCoy's 5A medium supplemented with GlutaMAX and 10% (v/v) FBS. Cells were cultured in T-75 culture flasks in a humidified atmosphere containing 5.0% CO2 at 37°C and subcultured every 2 or 3 days. Cells were harvested and used in experiments after reaching 80-90% confluence. The number of viable cells was determined by the trypan blue exclusion assay using a Countess Automated Cell Counter (Invitrogen). Cells were seeded in flat-bottom 96-well plates at a density of 2.0 × 10⁴ cells/well in 100 μL of the appropriate medium. After seeding, plates were incubated for 24 h in a humidified atmosphere containing 5.0% CO2 at 37°C to allow the cells to attach to the plates.
Determination of Cytotoxicity
The influence of the PAMAM dendrimer conjugates and of free docetaxel or paclitaxel on cell viability was determined with the use of the MTT assay. Briefly, different concentrations of all compounds were added to 96-well plates containing MCF-7 and SKBR-3 cells at a density of 2.0 × 10^4 cells/well in the appropriate medium. Cells were incubated with the compounds for 24 h at 37°C in a humidified atmosphere containing 5.0% CO2. After the incubation, cells were washed with phosphate buffered saline (PBS). Next, 50 μL of a 0.5 mg/mL solution of MTT in PBS was added to each well and the cells were further incubated under normal culture conditions for 4 h. After this incubation the residual MTT solution was removed and the formazan precipitate obtained was dissolved in DMSO (100 μL/well). The conversion of the tetrazolium salt (MTT) to colored formazan by mitochondrial and cytosolic dehydrogenases is a marker of cell viability. Before the absorbance measurement, plates were shaken for 1 min and the absorbance at 570 nm was measured using a PowerWave HT Microplate Spectrophotometer (BioTek, USA).
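The viability profiles reported later (Fig. 9) are derived from these formazan absorbance readings. As a purely illustrative sketch, not the authors' own script, viability is commonly expressed relative to the untreated control; the function name and the absorbance values below are assumptions:

def viability_percent(abs_treated, abs_control, abs_blank=0.0):
    # Cell viability (%) relative to the untreated control, from A570 readings.
    return (abs_treated - abs_blank) / (abs_control - abs_blank) * 100.0

# Hypothetical readings: treated well 0.42, untreated control 0.65, blank 0.05
print(round(viability_percent(0.42, 0.65, 0.05), 1))  # ~61.7 % viability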
Determination of Hemolysis
The influence of the PAMAM dendrimer conjugates on hemolysis was determined with the spectrophotometric method used previously (15). Briefly, human blood from healthy adult donors was obtained from a local blood bank. Blood was centrifuged for 10 min at 400 g to remove serum and the buffy coat. Next, erythrocytes were washed four times with ten volumes of PBS buffer (pH = 7.4), followed by centrifugation for 10 min at 400 g. Erythrocytes were suspended in PBS buffer, the hematocrit was measured and the erythrocyte suspension was diluted to a hematocrit of 2%. The erythrocyte suspension was mixed with solutions of the analyzed compounds in the same buffer to obtain a final concentration of 1 μM. Samples were incubated for 24 h and 48 h at 37°C. Next, samples were centrifuged at 400 g for 10 min. The supernatant was removed and its absorbance at 540 nm was measured. For the positive and negative controls, erythrocyte suspensions in distilled water and in PBS were used, respectively. The amount of hemolysis was calculated as: hemolysis (%) = (Asample - Anegative control) / (Apositive control - Anegative control) × 100, where A denotes the absorbance of the supernatant at 540 nm. For comparison, the same experiment was performed for free docetaxel and paclitaxel.
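The relation implied by the controls described above (PBS = 0 % lysis, distilled water = 100 % lysis) can be written as a small helper; this is a hedged sketch with invented absorbance values, not the authors' code:

def hemolysis_percent(abs_sample, abs_negative, abs_positive):
    # Hemolysis (%) from supernatant absorbance at 540 nm.
    # Negative control: erythrocytes in PBS; positive control: erythrocytes in distilled water.
    return (abs_sample - abs_negative) / (abs_positive - abs_negative) * 100.0

# Illustrative readings only
print(round(hemolysis_percent(0.060, 0.035, 1.050), 2))  # ~2.46 % hemolysis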
Cellular Uptake Detection
In vitro uptake studies were carried out using FITC-labeled docetaxel or paclitaxel and the PAMAM-doc-trastuzumab or PAMAM-ptx-trastuzumab conjugate. Compounds were added at a final concentration of 0.1 μM to 12-well plates containing MCF-7 and SKBR-3 cells at a density of 1.5 × 10^4 cells/well. In this study cells were incubated with the compounds for a specific time in the range from 1 h to 48 h in a humidified atmosphere containing 5.0% CO2 at 37°C. After the appropriate incubation, cells were washed with PBS, suspended in 500 μL of medium and immediately analyzed with a Becton Dickinson LSR II flow cytometer (BD Biosciences, USA) using a blue laser (488 nm) and a PE bandpass filter (575/26 nm).
Confocal Microscopy
Confocal microscopy images were obtained with an SP-8 confocal inverted microscope equipped with a 405 nm laser (Leica, DE). Cells at a density of 1 × 10^4 cells/well (SKBR-3) and 0.75 × 10^4 cells/well (MCF-7) were seeded on 96-well glass-bottom plates and incubated with 0.1 μM FITC-labeled docetaxel or paclitaxel or the PAMAM-doc-trastuzumab or PAMAM-ptx-trastuzumab conjugate for 24 h at 37°C in a humidified atmosphere containing 5.0% CO2. After the incubation, cells were cooled on ice and washed once with cold phosphate buffered saline (PBS) to inhibit endocytosis. Cells were imaged to visualize the fluorescence of FITC-labeled docetaxel or paclitaxel in the green channel (excitation 488 nm, emission 520 nm) and in transmitted light.
Statistical Analysis
Data were expressed as mean ± SD. Analysis of variance (ANOVA) with the Tukey post hoc test was used for comparison of the results. All statistics were calculated using the Statistica software (StatSoft, Tulsa, USA), and p values <0.05 were considered significant.
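The original analysis was run in Statistica; as a rough equivalent sketch in Python (the group values below are placeholders, not data from this study), a one-way ANOVA followed by a Tukey post hoc test could look like this:

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder viability values (%) for three hypothetical treatment groups
control = np.array([98.2, 97.5, 99.1, 96.8])
free_drug = np.array([71.4, 69.9, 73.2, 70.5])
conjugate = np.array([45.1, 47.8, 44.3, 46.0])

f_stat, p_value = f_oneway(control, free_drug, conjugate)  # one-way ANOVA
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate([control, free_drug, conjugate])
groups = ["control"] * 4 + ["free drug"] * 4 + ["conjugate"] * 4
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # Tukey post hoc comparison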
Synthesis and Characterisation of the Conjugates
We have developed an innovative delivery system consisting of three components, each of which plays a different role. Trastuzumab provides specificity against human epidermal growth factor receptor 2 (HER-2), which is overexpressed in various cancers including breast cancer; the taxanes (docetaxel and paclitaxel) provide cytotoxic effects; and the PAMAM dendrimer protects the whole conjugate in the circulatory system and, when linked with an anticancer drug via a pH-sensitive linker, provides specific drug release in the tumour environment. Yabbarov et al. confirmed the dependence of drug release on decreasing pH (16). They observed that pH-dependent linkages are hydrolysed in the environment of the tumour to release the drug, which enables controlled administration of the active substance at the chosen site by exploiting the natural properties of tumour cells: high metabolism and acidic pH. We decided to link docetaxel or paclitaxel to the PAMAM dendrimer using succinic acid (SA) (17). Figure 1 illustrates the steps of the synthesis of the PAMAM-drug-trastuzumab conjugate.
The chemical structures of PAMAM-doc and PAMAM-ptx were characterised by 1H NMR and 13C NMR analysis and FTIR spectroscopy (analytical data can be found in the supplementary material).
For clarity of the 1H-NMR and 13C-NMR spectra, Fig. 2 presents the structures of the drugs (a. paclitaxel, b. docetaxel) with the carbon atoms numbered.
The structures of paclitaxel-FITC, PAMAM-ptx-FITC, docetaxel-FITC and PAMAM-doc-FITC were confirmed by their 1H-NMR spectra, recorded on a 500 MHz Bruker AVANCE instrument at 310 K. Chemical shifts are reported in ppm downfield from TMS, with DMSO-d6 as the solvent. In the spectrum of the FITC-labeled drug (Figure 3), aromatic signals for FITC (protons adjacent to the phenol group) appear at 6.55-6.70 ppm and 7.81-7.90 ppm, and the -NHCS- signal appears at 10.68 ppm. Moreover, the H2' proton peak in the 1H NMR spectrum was shifted to 5.86 ppm. In the 13C NMR spectrum (Fig. 4, lower panel) the chemical shift of C2' moved to 80.89 ppm. Figure 5 (upper panel) presents the 1H-NMR spectrum of PAMAM-ptx-FITC (DMSO-d6, 300 MHz, ppm). Proton signals of the PAMAM dendrimer appear at 2.17 ppm (-CH2-C(O)-NH), 2.38 ppm (-CH2-N-), 2.55-2.60 ppm (-N-CH2-), 3.04-3.13 ppm (-CH2-NH2 and -C(O)NH-CH2) and 7.92 ppm (-CONH). Aromatic signals of paclitaxel and FITC appear at δ ≈ 7.14-7.65 ppm. Moreover, the H2' proton peak of paclitaxel was shifted to 5.81 ppm, with the NH signal at 8.48 ppm. The number of paclitaxel molecules conjugated to the PAMAM dendrimer was calculated using the proton integration method. Unfortunately, it is very difficult to confirm the conjugation by the 13C NMR spectrum (Fig. 5, lower panel) because of the low sample concentration and the large difference in molar mass between the drug and the PAMAM dendrimer; we can only presume that the signal at 84.30 ppm arises from the bond between the drug and the dendrimer. For the docetaxel conjugate, the H2' proton peak was shifted to 5.84 ppm, with the NH signal at 8.46 ppm, and the number of docetaxel molecules conjugated to the PAMAM dendrimer was likewise calculated by proton integration. Also in this case the 13C NMR spectrum (Fig. 6, lower panel), owing to the low sample concentration and the large molar mass difference between the drug and the PAMAM dendrimer, allows us only to presume that the signal at 84.68 ppm arises from the bond between docetaxel and the PAMAM dendrimer.
In the next step, PAMAM-doc and PAMAM-ptx conjugates were conjugated to the monoclonal antibody (trastuzumab). To accomplish this, we used a succinimidyl 4-(N-maleimidomethyl)cyclohexane-1-carboxylate (SMCC) linker, which provides a convenient crosslinking agent for amino and thiol groups. The NHS esters react with primary amines to form stable amide bonds and the maleimide part reacts with sulfhydryl groups to form stable thioethers. To carry out the crosslinking reaction, we modified the amine groups of the PAMAM dendrimer into thiols using 2-iminothiolane (Traut's reagent). Modification with Traut's reagent is very efficient and rapid at a slightly basic pH (18).
The conjugates were characterised using FPLC/HPLC analysis. Reverse-phase high performance liquid chromatography (RP-HPLC) was used to analyse the purity of the products and to ascertain the degree of PAMAM and trastuzumab conjugation. Initially, a water/acetonitrile elution system was used, but improved performance was achieved with a modified buffer system: A: 0.1% TFA in water; B: 70% iPrOH, 20% MeCN, 0.1% TFA in water. Elution was typically performed using a gradient of 0-80% B over 30 min, followed by 80-100% B in 5 min, then 100% B for 10 min and finally 100-0% B in 5 min (Fig. 7a). Samples were typically injected as 20-100 μg of material suspended in 100 μL of buffer A. The PAMAM dendrimer has been reported to absorb at 214 nm, so absorbance at this wavelength, along with 280 nm (to detect protein), was recorded. We also monitored absorbance at 254 nm to detect any potential contamination. Additionally, a Shimadzu diode array system provided UV profiles (recorded at 200-600 nm). This allowed the purity of the PAMAM dendrimer to be ascertained; its absorbance at 220 nm has also been reported previously. The analytical data can be found in the supplementary material.
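For illustration only, the elution program described above can be encoded as a piecewise-linear profile; the helper function and the query time are assumptions and simply restate the gradient given in the text, not the instrument method file:

# RP-HPLC gradient: 0-80 % B over 30 min, 80-100 % B in 5 min,
# 100 % B held for 10 min, then 100-0 % B in 5 min.
GRADIENT = [(0.0, 0.0), (30.0, 0.80), (35.0, 1.00), (45.0, 1.00), (50.0, 0.00)]

def fraction_b(t_min):
    # Linear interpolation of the buffer B fraction at time t (minutes).
    for (t0, b0), (t1, b1) in zip(GRADIENT, GRADIENT[1:]):
        if t0 <= t_min <= t1:
            return b0 + (b1 - b0) * (t_min - t0) / (t1 - t0)
    return GRADIENT[-1][1]

print(f"{fraction_b(15.0) * 100:.0f} % B at 15 min")  # 40 % B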
In the second step, analysis of trastuzumab was carried out. Analysis was performed on a UFLC system (composed of two LC-20ADXP isocratic pumps, a CTO-20AS column oven with diode array UV-Vis monitoring) operated at 75°C. The elution system was as before (A: 0.1% TFA in water, B: 70% i PrOH, 20% MeCN, 0.1% TFA in water). The system was run at a gradient of 0-80% B over 30 min to elute the main product (monitored at 280 nm), which appeared at 18.3 min. The UV profile of the main signal showed absorbance at 277 nm, as is expected for proteins (Fig. 7b).
Finally, the PAMAM-doc-trastuzumab and PAMAM-ptx-trastuzumab conjugates were analysed. Analysis was carried out as before at 75°C, using buffers A (0.1% TFA in water) and B (70% iPrOH, 20% MeCN, 0.1% TFA in water) and a gradient of 0-80% B over 30 min, following injection of 100 μg of sample; the chromatography profiles were monitored at 280 nm. The UV profile of the signal at 21 min shows three peaks (Fig. 8). As expected, absorbance signals were observed at 276, 504 and 540 nm, which are characteristic of proteins, docetaxel or paclitaxel, respectively.
In Vitro Studies
There are reported examples of PAMAM dendrimers conjugated with various anticancer drugs (16,17), but few involve the use of a monoclonal antibody as a disease-specific targeting agent (18)(19)(20). In this study, the PAMAM dendrimer was conjugated to trastuzumab, and a pH-dependent linker was used for drug conjugation because linker-bonded conjugates are stable in extracellular media. This ensures low drug release, but lability when the conjugates enter the lysosomes, which allows release of the drug to elicit its antitumour activity (11,21). In the present study, the number of drug molecules per PAMAM dendrimer molecule was calculated to be 1.0. The biocompatibility of the PAMAM-doc-trastuzumab and PAMAM-ptx-trastuzumab conjugates was evaluated using the MTT assay to measure the cytotoxicity in two different breast cancer cell lines: HER-2-negative human breast adenocarcinoma (MCF-7) and HER-2-positive human breast adenocarcinoma (SKBR-3). Measurements were made after 24 and 48 h of incubation and after a 24-h incubation with the drug, removal of the drug, and another 24-h incubation without drug (24-24 h). The addition of this incubation variant allows assessment of cell damage and mortality after the drug has been removed from the system. Figure 9 shows the cell viability profiles that were obtained for each conjugate compared with the free drug for both cell lines. The cytotoxicity of docetaxel and paclitaxel was dose-dependent, with IC50 values of around 23.7 and 7.8 μM (respectively) after a 24-h incubation in the MCF-7 cell line, and 10.7 and 7.3 μM (respectively) after a 24-h incubation in the SKBR-3 cell line.
In contrast, trastuzumab itself exhibited very low toxicity even toward SKBR-3 cells, with cell viability observed to be over 85% following exposure to a 20 μM concentration of the drug. However, conjugation of the antibody with the PAMAM dendrimer improved the cytotoxic effect (results published earlier (14)). The observed effect was more evident for SKBR-3 cells than MCF-7 cells, owing to selective binding of the conjugate to cells that overexpress HER-2. These results are in good agreement with a previous study by Miyano et al., which showed that when the glutamate-modified sixth-generation lysine dendrimer (KG6E) was conjugated with trastuzumab, binding to the HER-2 receptor was more specific and exhibited a higher cellular internalisation rate compared with the free monoclonal antibody (9). Other studies have confirmed the lack of antiproliferative activity of free trastuzumab toward different HER-2-positive cell lines (22). Importantly, the addition of taxanes to the PAMAM-trastuzumab conjugate enhanced the therapeutic effect and selectivity of the conjugates in comparison with the free drugs. This was particularly obvious after 48 h of incubation. The results obtained for SKBR-3 cells (Table I) indicate that both PAMAM-drug-trastuzumab conjugates showed increased selectivity and therapeutic effects compared with the free drugs, but the most remarkable result of the present study is the selectivity that was observed between cell lines. The PAMAM-doc-trastuzumab conjugate in particular showed extremely high toxicity toward the HER-2-positive SKBR-3 cells and very low toxicity toward the HER-2-negative MCF-7 cells.
Our results confirm the uniqueness of the PAMAM-drug-trastuzumab conjugates and reveal a synergistic effect: an increase in toxic efficiency toward HER-2-positive cells (SKBR-3) and a decrease in toxic efficiency toward HER-2-negative cells (MCF-7). These results raise the possibility of a significant dose reduction while maintaining the therapeutic effect and selectivity, which could protect against the adverse effects caused by administration of docetaxel or paclitaxel. This finding is in agreement with a previous study by Rodallec et al., which reported an association between cytotoxicity, cellular uptake and the level of HER-2 expression. Immunoliposomes containing docetaxel encapsulated in a stealth liposome and engrafted with trastuzumab showed higher antiproliferative efficacies and more efficient drug delivery compared with the standard combination of docetaxel and trastuzumab (22). Furthermore, Kulhari et al. confirmed the effectiveness of dendrimer-conjugated monoclonal antibodies and anticancer drugs. Even at very low concentrations (7.8 ng/mL), the trastuzumab-dendrimer-docetaxel conjugate showed significantly higher cytotoxicity against HER-2-positive MDA-MB-453 cells than the dendrimer-docetaxel conjugate, with no significant difference in cytotoxicity observed toward HER-2-negative MDA-MB-231 cells (19). These results demonstrate that trastuzumab can specifically target and successfully deliver docetaxel to HER-2-positive cells.
Many clinical trials have shown that intravenous injection is probably the most convenient way to deliver drugs conjugated with dendrimers (23). Unfortunately, lysis of red blood cells very often excludes intravenous delivery of dendrimer conjugates. PAMAM dendrimers with exposed terminal cationic surface groups possess hemotoxic properties because they are able to disrupt the cell membrane of erythrocytes after adhesion to the cell surface by electrostatic attraction and formation of holes in the membrane (24). Modification of the dendrimer surface groups is one of the methods used to reduce dendrimer toxicity (15). Therefore, to assess the biocompatibility of all the analysed compounds we evaluated their hemotoxicity. The ability of the PAMAM-drug-trastuzumab conjugates to cause hemolysis was compared with the hemolytic activity of the free drugs (Fig. 10). Paclitaxel and docetaxel are known to possess only minor hemolytic properties, in contrast to the amino-terminated PAMAM dendrimer of generation 4. Under the applied experimental conditions the free drugs caused 1-2% hemolysis after a 24-h incubation. The PAMAM-doc-trastuzumab and PAMAM-ptx-trastuzumab conjugates evoked 2.4 and 2.5% hemolysis, respectively. After 48 h of incubation with the above-mentioned conjugates, less than 10% and 12% hemolysis was observed, respectively. In conclusion, the PAMAM-drug-trastuzumab conjugates possess higher hemotoxicity than the free drugs, but it is very likely that this level will be lower in the presence of plasma proteins. Klajnert et al. showed that the presence of HSA at the same concentration as under physiological conditions significantly reduced the amount of hemolysis caused by PAMAM dendrimers (25). Moreover, the conjugates are not expected to circulate in blood for as long as 48 h.
In this study, FITC was used to label the free drugs and the dendrimer conjugated to the anticancer drugs via a pH-dependent linker. The molar ratio of FITC molecules to PAMAM in the conjugate was 1:1. This method of conjugation ensured stability of the conjugates in the extracellular media and high intracellular drug release catalysed by lysosomal enzymes, which resulted in increased antitumour activity. In order to analyse the cellular uptake of free docetaxel, paclitaxel and the PAMAM-drug-trastuzumab conjugates by flow cytometry, cells were incubated with the compounds at a final concentration of 0.1 μM for 1 to 48 h (Fig. 11). It is likely that the linker-bonded conjugate was less stable in the more acidic environment of the cancer cells, which resulted in earlier paclitaxel release. Importantly, the cellular uptakes of free paclitaxel and docetaxel, even after 48 h of incubation, were significantly lower than those of the PAMAM-drug-trastuzumab conjugates. This is in good agreement with our results obtained by the MTT assay, as well as with those of previous studies; for example, Miyano et al. suggested that trastuzumab conjugated to the KG6E dendrimer shows HER-2-specific binding and a consequently high rate of cellular internalisation (9). Other reports have demonstrated the rapid internalisation of trastuzumab-PAMAM (26) and trastuzumab-PLGA (27) nanoparticle conjugates into HER-2-positive breast cancer cells in comparison with free trastuzumab. Confocal microscopy was used to confirm the intracellular localisation of free paclitaxel, docetaxel and the PAMAM-ptx-trastuzumab and PAMAM-doc-trastuzumab conjugates. Incubation of HER-2-positive SKBR-3 and HER-2-negative MCF-7 cells with 0.1 μM of the FITC-modified compounds was carried out for 24 h (Fig. 12). Both free drugs were internally localised in both cell lines to some extent; however, paclitaxel was found to accumulate in the nucleus region, in contrast to docetaxel, which was located in the cytosol. Both conjugates were more concentrated in the nuclei of HER-2-positive than of HER-2-negative cells. Our results have a number of similarities with the findings of Ma et al. (20). In their study, trastuzumab was covalently linked to a PAMAM dendrimer via a bifunctional PEG linker, and was internalised more efficiently by HER-2-positive BT474 cells than by HER-2-negative MCF-7 cells. Moreover, colocalisation experiments indicated that the trastuzumab-PAMAM conjugate was located in the cytoplasm (20). In other studies, trastuzumab conjugated with the KG6E dendrimer bound selectively to SKBR-3 cells, rather than to MCF-7 cells, although the conjugate was internalised to the lysosomes (9). Rodallec et al. confirmed the cellular uptake of docetaxel-trastuzumab stealth immunoliposomes (ANC-1) in different HER-2-positive cell lines, and found that ANC-1 was primarily localised around the cell nuclei. However, ANC-1 showed increased accumulation in SKBR-3 cells compared with the MDA-MB-453 or MDA-MB-231 cell lines (22).
Covalent attachment of the humanised monoclonal antibody trastuzumab to a G5 PAMAM dendrimer containing the drug methotrexate (to form the G5-Fl-HN-MTX conjugate) has been used in the treatment of skin, lung and breast cancer (18). Colocalisation experiments carried out in the HER-2-expressing MCA207 cell line indicated that G5-Fl-HN-MTX was localised in the late endosomes and lysosomes within 1 h of exposure, but the most surprising result was the long residence time (48 h) of the conjugate in the lysosomes. This may result in the reduced cytotoxicity observed in the case of the G5-Fl-HN-MTX conjugate. Our conjugates are free of such a disadvantage, as demonstrated by the results of the MTT assay. The steric hindrance caused by covalent conjugation of the antibody to the G5 PAMAM dendrimer may prevent intracellular esterase enzymes from releasing the drug, so that the conjugate is unable to exert its cytotoxic activity during its extended retention in the lysosomes. It was for this reason that we decided to use a pH-dependent linker that allows the conjugate to disintegrate in the acidic environment of the cancer cell. In the literature there are many examples of improved drug delivery as a promising strategy to optimise the effectiveness of anticancer drugs while reducing the toxicity associated with treatment. This study is a first step towards enhancing our knowledge about the design of selective conjugates which can be successfully used for targeted therapy.
CONCLUSION
Preclinical studies have demonstrated that HER-2 overexpression occurs in over 20% of breast carcinomas and is associated with resistance to anticancer drugs such as paclitaxel and docetaxel (28). Such studies have also reported additive and synergistic interactions between trastuzumab and taxanes (29). The present study describes the successful synthesis and characterisation of the HER-2-targeted conjugates PAMAM-doc-trastuzumab and PAMAM-ptx-trastuzumab. Analysis of the cytotoxicity, cellular uptake and internalisation of the conjugates indicates that they represent promising carriers for HER-2-expressing, tumour-selective delivery. The observed selectivity is achieved not only through the inclusion of trastuzumab, which binds and blocks HER-2, but also through the selection of a pH-sensitive linker that breaks in the tumour environment to allow release of the PAMAM-drug conjugate. Both conjugates show potential as drug delivery systems that enhance the therapeutic index and reduce the required dosage of anticancer drugs. In our opinion these conjugates might be superior for in vivo application owing to their increased toxicity toward HER-2-positive breast cancer cells, achieved through specific targeting of tumour cells.
ACKNOWLEDGMENTS AND DISCLOSURES
This work was sponsored by the National Science Centre (Project: "Nanoparticle conjugates with the monoclonal antibody - a new opening in target tumor therapy", UMO-2015/19/N/NZ3/02942).
AUTHOR CONTRIBUTIONS
The manuscript was written through contributions of all authors. All authors have given approval to the final version of the manuscript.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
|
v3-fos-license
|
2023-10-01T15:15:24.190Z
|
2023-09-19T00:00:00.000
|
263276546
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://dergipark.org.tr/en/download/article-file/3315103",
"pdf_hash": "04538c4acef0737b247d3fe8f4e2b97c6ea2ea6e",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42454",
"s2fieldsofstudy": [
"Education",
"Medicine"
],
"sha1": "349fa209a969272ecdc10bd32299c867fddc059a",
"year": 2023
}
|
pes2o/s2orc
|
A comparative analysis of the effects of drop set and traditional resistance training on anaerobic power in young men
Drop set is a popular time-efficient resistance training method. This study aimed to compare the impact of drop-set (DS) training versus traditional resistance training (TRT), with total training volume equalized, on the Wingate Anaerobic Test. Twenty-four sports science students were assigned to either the DS (n=12) or the TRT (n=12) protocol according to their 1 RM values, and they trained twice a week for 6 weeks. The 1 RM test was conducted only at the beginning of the study, while the Wingate anaerobic power test was administered at baseline and after the intervention period. The study demonstrated a significant main effect of time for peak power (p < 0.001), and a between-group interaction effect was observed for peak power (p < 0.05). The DS group exhibited slightly higher peak power values compared to TRT (p < 0.05; 15% increase for DS, 13% for TRT; ES: 0.50 and 0.36, respectively), while both groups displayed significantly increased values from pre- to post-testing (p < 0.001). Based on our findings, it can be inferred that DS training leads to slightly greater enhancements in anaerobic power when compared to TRT. Additionally, the study confirmed that a 6-week (12 sessions in total) resistance training program utilizing a load of 70% of 1 RM was sufficient to enhance anaerobic performance in young active men.
Introduction
Resistance training (RT) is frequently recommended as an intervention strategy to augment muscular adaptations, including increases in muscle strength, size, and local muscular endurance. The available evidence suggests that the optimization of these adaptations necessitates the manipulation of resistance training variables (Kraemer & Ratamess, 2004; ACSM, 2009). The comprehensive investigation of variables, including intensity and volume of effort, exercise order, number of performed repetitions and sets, tempo of movement, duration of rest periods between sets and exercises, and training status, has been pursued diligently to optimize muscle adaptations (Bird et al., 2005; Ralston et al., 2018). Fundamental constituents of resistance training, namely volume and load, directly influence the development of muscular adaptations (Schoenfeld et al., 2015; Schoenfeld et al., 2017; Schoenfeld et al., 2019). Empirical evidence suggests that modifications in training load can exert significant effects on the acute metabolic, hormonal, neural, and cardiovascular responses to training (Kraemer & Ratamess, 2004). An optimized strength increase is observed when employing a low repetition scheme involving heavy loads, ranging from 1 to 5 repetitions per set, at loads between 80% and 100% of the individual's 1RM (Schoenfeld et al., 2021). Athletes endeavor to manipulate resistance training (RT) load through advanced techniques such as blood flow restriction (Loenneke et al., 2012), the rest-pause method (Prestes et al., 2019), cluster sets (Haff et al., 2008; Tufano et al., 2016), and drop sets (Sødal et al., 2023), seeking an additional stimulus to overcome performance plateaus, maximize muscular strength and mitigate training monotony (Krzysztofik et al., 2019).
The implementation of drop sets is among the most prevalent time-efficient training modalities employed to promote muscle strength and hypertrophy. This technique involves executing sets to concentric muscle failure at a specific load and subsequently reducing the load immediately to initiate the subsequent set, taken to either concentric or voluntary muscle failure (Sødal et al., 2023). The execution protocol of drop sets (DS) lacks a well-defined consensus in the current literature and remains a subject of diverse interpretations within the weightlifting community. Since DS may increase time under tension, metabolite accumulation, cell swelling, and training volume, it may be a superior option for hypertrophy. Conversely, it has been hypothesized that DS may not be optimal for strength gains (Coleman et al., 2022), but there is no compelling evidence to accept this assumption. Although the use of DS as a training approach has gained widespread popularity, its effectiveness still must be established through rigorously controlled research studies. Several investigations have been conducted on this subject, yielding conflicting findings (Varovic et al., 2012; Enes et al., 2021; Fink et al., 2018; Ozaki et al., 2017; Angleri et al., 2017). To date, no study has been conducted to compare the anaerobic power outcomes between the DS method and traditional RT under conditions where the training volumes are equated. The primary objective of this study was to undertake a comparative assessment of the effects of the DS and TRT methods, implemented over a 6-week intervention period, on anaerobic power outputs.
Methods
Resistance Training

The study's training phase extended over a period of 6 weeks, during which participants engaged in 2 training sessions per week on non-consecutive days, for a total of 12 sessions. The training regimen comprised two machine-based exercises, namely leg press and leg extension (Jimsa Fitness Equipments, Eskişehir, 2000). These exercises were specifically selected to target the quadriceps muscles, particularly in the context of drop sets. A 45-degree leg press device was utilized, and participants received explicit instructions to avoid surpassing a 90-degree knee angle during the eccentric phase of the movement. Additionally, they were rigorously cautioned against lifting their heels off the machine's foot platform. The TRT group trained 4 sets of leg presses and 4 sets of leg extensions, while the DS group trained 3 sets of leg presses and 7 sets of leg extensions.

The load utilized for the TRT group corresponded to 70% of their one-repetition maximum (1 RM) for 8-12 reps, with a resting period of 3 minutes between sets. The tempo employed for both the concentric and eccentric phases of the exercises was set at 1:2, and the exercise order was leg press followed by leg extension. The DS group performed 3 sets of leg press exercises at 70% of their 1 RM within a repetition range of 8-12. Subsequently, upon finishing the third set, they commenced leg extensions with a weight reduction of 20%, aiming to reach muscular failure. Upon achieving concentric failure, the load was once more decreased by 20%, and participants were instructed to continue the exercise until reaching failure again. Regarding the second exercise, namely the leg extension, the DS group completed three sets with a load equivalent to 70% of their 1 RM, adhering to a repetition range of 8-12. The participants then executed two consecutive drop sets until reaching failure, with a reduction of 20% in weight for each successive drop set (Figure 1, resistance training protocol). During each RT session, the number of repetitions and the load used for each set were meticulously recorded; the training volume was determined by multiplying the load (expressed as a percentage of 1 RM) by the number of repetitions, and the total training volume for the 12 sessions remained similar across the groups. RT sessions were conducted under the supervision of competent and certified personal trainers to guarantee the precise execution of the exercises.

1 RM Testing

The 1 RM test was conducted to determine the training load for the participants. Participants underwent a warm-up comprising a 5-minute stationary bicycle ride, followed by a 1-minute rest period. Subsequently, they were familiarized with the resistance machines through 8-10 repetitions with a light load (50% of the predicted 1 RM). Following a 2-minute rest, participants lifted a load approximately equivalent to 80% of their estimated 1 RM through the complete range of motion. The weight was then incrementally increased after each successful attempt until reaching failure. Rest intervals of 2-3 minutes were provided between attempts, and the one-repetition maximum (1RM) was achieved within 5 attempts. The order of exercises during 1RM testing was leg press followed by leg extension. However, for the leg extension exercise a 10-repetition maximum (10RM) test was employed to mitigate the risk of injury, as it is not advisable to conduct 1RM testing for single-joint exercises. The 1 RM was estimated using the Brzycki equation, which relies on a 10-repetition maximum: 1 RM = weight ÷ (1.0278 - (0.0278 × number of repetitions)).
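A minimal sketch of this load calculation, assuming a hypothetical 10 RM of 60 kg on the leg extension (the numbers and function names are illustrative, not data from the study):

def brzycki_1rm(weight_kg, reps):
    # Brzycki estimate of the 1 RM from a submaximal set: weight / (1.0278 - 0.0278 * reps)
    return weight_kg / (1.0278 - 0.0278 * reps)

def training_load(one_rm_kg, fraction=0.70):
    # Load corresponding to a given fraction of 1 RM (70 % of 1 RM was used in this study)
    return one_rm_kg * fraction

est_1rm = brzycki_1rm(60.0, 10)  # ~80.0 kg estimated 1 RM
print(round(est_1rm, 1), round(training_load(est_1rm), 1))  # estimated 1 RM and the 70 % working load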
Wingate Testing
The Wingate test was performed using a friction-loaded cycle ergometer (Monark 894 E model, Sweden) connected to a microcomputer for data interfacing. The seat height and handlebars were individually adjusted to suit each subject. The Wingate test encompassed a 30-second all-out sprint against a constant resistance relative to body weight (7.5% of body weight), as proposed by Ayalon et al. (1974). Prior to testing, all participants engaged in a 10-minute warm-up session on the cycle ergometer with a resistance corresponding to 2% of their body weight. The cycling cadence was set between 70 and 80 revolutions per minute (rpm). After the warm-up, a 1-minute rest period was provided. Participants were informed that the weight basket would drop automatically when a cycling cadence of 100 rpm was achieved. When a consistent pedal rate of 60 rpm was reached, a "3-2-1-go!" countdown was announced, and participants were encouraged to pedal maximally. Subjects were verbally encouraged throughout the test to refrain from pacing and to maintain a maximal effort consistently. Upon completion of the test, the weight basket was raised, and participants continued pedaling without any additional weight for 5 minutes to facilitate a cooldown period. The computer calculated and stored the power output every second during the test, and the data were collected through the software. All performance tests were conducted at least 72 hours after the last training session, and all participants were strictly instructed to abstain from engaging in intense physical activity and from consuming diuretics or stimulants such as coke, coffee, and tea for a minimum of 24 hours before the tests. Environmental variables, such as humidity and temperature, were controlled and maintained at stable levels throughout all test sessions, ensuring uniform diurnal conditions.
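As a hedged illustration of how the reported Wingate variables can be derived from the per-second power record (the exact definitions used by the ergometer software are not given in the text, so the ones below are common conventions and the helper names are assumptions):

def wingate_metrics(power_w):
    # power_w: one power value (W) per second over the 30-s test.
    # PP = highest value, AP = 30-s mean, PD = relative decline from peak to lowest power.
    pp = max(power_w)
    ap = sum(power_w) / len(power_w)
    pd = (pp - min(power_w)) / pp * 100.0
    return pp, ap, pd

def braking_load_kg(body_mass_kg):
    # Resistance set to 7.5 % of body mass, as in the protocol above.
    return 0.075 * body_mass_kg

print(braking_load_kg(75.0))  # 5.625 kg of braking load for a hypothetical 75 kg participant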
Statistical Analyses
Data normality was confirmed using the Shapiro-Wilk test, and variance homogeneity was assessed with Levene's test. Statistical analyses were performed using IBM SPSS Statistics software for Windows (version 22.0), and the data were presented as mean and standard deviation (SD) values. The percentage change was calculated using the equation: %change = (post-test - pre-test) / pre-test * 100. A two-way, repeated-measures ANOVA was employed to examine the interaction between time (pre- and post-intervention) and condition (experimental and control). Additionally, a t-test was conducted to assess potential differences in total training volume between the two conditions. When a statistically significant difference was observed over time within or between the groups, a syntax model based on Bonferroni adjustment was utilized to determine the source of the difference. The use of paired or independent t-tests was avoided to minimize the probability of committing a type I error. Eta squared (η2) values were also obtained. Effect sizes were interpreted as small, medium, and large if they corresponded to partial eta-squared values of 0.01, 0.06, and 0.14, respectively (Richardson, 2011). The statistical significance level was predetermined at p < 0.05.
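The percentage-change formula and the effect-size thresholds cited above translate directly into code; the example numbers are invented for illustration only:

def percent_change(pre, post):
    # %change = (post-test - pre-test) / pre-test * 100
    return (post - pre) / pre * 100.0

def eta_squared_label(eta2):
    # Interpretation thresholds for partial eta squared following Richardson (2011)
    if eta2 >= 0.14:
        return "large"
    if eta2 >= 0.06:
        return "medium"
    if eta2 >= 0.01:
        return "small"
    return "negligible"

print(round(percent_change(620.0, 713.0), 1))  # 15.0 (% increase, illustrative peak power values)
print(eta_squared_label(0.08))                 # medium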
Results
A two-way repeated-measures ANOVA showed that both the DS and TRT groups significantly improved Wingate anaerobic test performance (p<0.001). The DS group showed slightly greater improvement in peak power than the TRT group, although the difference was minor (p<0.040; effect size: 0.50 vs. 0.36). Both groups also increased AP, with no significant difference between them (p>0.05). PD was not significantly changed for TRT, but there was a significant increase in PD for the DS group, which may be acceptable given the higher PP outputs of DS (Table 2 and Figure 2).
Discussion
The major finding of this study was that resistance training performed twice a week with a load of 70% of 1 RM can significantly improve the anaerobic power of young men. The drop set method was also effective in improving the anaerobic performance of young men, and marginally better than TRT. These results can be partly explained by the total training volume being equated between groups. The total training volume (mean) was 14124.13±1540.49 for the TRT group and 14309.11±2353.66 for the DS group, respectively (total lifted weight summed over the 12 training sessions and divided by 12). The review by Figueiredo et al. (2018) confirmed that training volume is the most effective variable in resistance training for muscle size and health outcomes, not for strength, because exercise load seems to be the predominant variable modifying muscle strength compared to other variables (Borde et al., 2015). However, it has been highlighted that higher volume may result in higher strength gains when evaluating different resistance training protocols utilizing the same load (Peterson et al., 2005; Peterson et al., 2004; Rhea et al., 2003; Enes et al., 2021). Likewise, a similar fact might hold for anaerobic power, given the demonstrated positive association between anaerobic power and muscle strength and muscle morphology (Arslan, 2005; Alemdaroglu, 2012; Lee et al., 2021). It was found that strength-trained individuals exhibited notably higher average anaerobic power levels compared to their non-trained counterparts (Slade et al., 2002). The adaptations resulting from resistance training may contribute to enhanced muscular activation during anaerobic power assessments. Enhanced power output resulting from neural adaptation may be attained through several mechanisms, including heightened recruitment of motor units, improved synchronization of motor unit firing, increased synergistic activation of other muscle groups, or reduced activation of antagonistic muscle groups (Slade et al., 2002).
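The volume equating that this explanation leans on follows the set-by-set bookkeeping described in the methods. A small sketch (the loads, expressed here in kg rather than as a percentage of 1 RM, and the repetition counts are hypothetical):

def session_volume(sets):
    # sets: list of (load, repetitions) pairs; volume = sum of load x repetitions
    return sum(load * reps for load, reps in sets)

def mean_session_volume(sessions):
    # Mean volume across the 12 recorded sessions, as reported for each group
    return sum(session_volume(s) for s in sessions) / len(sessions)

# One hypothetical TRT session: 4 x 10 leg presses at 120 kg and 4 x 10 leg extensions at 55 kg
example_session = [(120.0, 10)] * 4 + [(55.0, 10)] * 4
print(session_volume(example_session))  # 7000.0 for this invented session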
In contrast to our findings, Fink et al. (2018) reported an increase in triceps push-down 12RM strength of 16.1% for the DS group and 25.2% for the TRT group. However, it is important to note that these differences did not reach statistical significance (effect size: 0.88 vs. 1.34). The difference in outcomes can be attributed, at least in part, to differences in the study design employed by Fink et al. (2018) compared with our own investigation. Specifically, in their study, the DS group underwent training with 12 RM for only one set, while the TRT group engaged in 3 sets. On the other hand, our study involved a more intensive training protocol, with the DS group completing 6 sets at 70% of their 1 RM, and the TRT group performing 8 sets. These findings are consistent with earlier studies, which have indicated that improvements in muscular strength are dependent upon the magnitude of the training load (Schoenfeld et al., 2015; Ogasawara et al., 2013). Ozaki et al. (2017) investigated three resistance exercise conditions: high-load (HL), low-load (LL), and a single high-load set with additional drop sets (SDS). Significant strength gains were observed in the HL and SDS conditions, while the LL group showed no improvement in strength. These findings offer valuable insights into selecting an appropriate initial load for effective drop-set practices. Indeed, the initial load in resistance training can play a crucial role, similar to post-activation potentiation (PAP), in influencing strength gains. By selecting a high-load initial set, the phenomenon of PAP can be harnessed, resulting in enhanced muscle performance during subsequent drop sets. This strategic approach holds promise for optimizing strength adaptations and further improving resistance exercise outcomes (Petisco et al., 2019).
Conclusion
In conclusion, this study demonstrates the effectiveness of the drop set (DS) method in enhancing anaerobic power among young men. While it produces effects similar to traditional resistance training, the DS method shows a slight advantage. Owing to its time efficiency, athletes and coaches can incorporate DS into their conditioning process for variation and potential performance gains. Existing research on the DS method predominantly focuses on hypertrophic adaptations, attributing its effects to its ability to increase metabolic and mechanical stress. However, further investigations are warranted to fully elucidate its impact on muscular adaptations. Additional studies are essential to comprehensively understand the benefits of the DS method in resistance training programs.
Table 1. Descriptive statistics of the groups (Mean±SD).

Table 2. Wingate anaerobic test performance changes from pre- to post-test (Mean±SD). PP: Peak power; AP: Average power; PD: Power drop; a: significant difference within groups from pre- to post-test (p<0.001); b: significant difference between groups for the post-tests (p<0.05).

Authors' Contribution: Study Design: KK, FT; Data Collection: KK, FT; Statistical Analysis: KK; Manuscript Preparation: KK, FT.
|
v3-fos-license
|
2018-12-05T15:52:34.699Z
|
2016-04-13T00:00:00.000
|
56704939
|
{
"extfieldsofstudy": [
"Psychology"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://ccsenet.org/journal/index.php/ies/article/download/56873/31585",
"pdf_hash": "0f4cbfbb9e241b9eff540c41e07673f1767ef8a7",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42455",
"s2fieldsofstudy": [
"Education"
],
"sha1": "0f4cbfbb9e241b9eff540c41e07673f1767ef8a7",
"year": 2016
}
|
pes2o/s2orc
|
Improving Technological Pedagogical Content Knowledge (TPACK) of Pre-Service English Language Teachers
Developing as teachers and optimizing learning experiences for future students is the ultimate goal in technology use in teacher education programs. This study aims to explore the effectiveness of a five-week workshop and training sessions on Technological Pedagogical Content Knowledge (TPACK) of pre-service English language teachers. The participants are 59 pre-service English language teachers enrolled in an ELT Methodology Course at a state university. The data is gathered through the TPACK Scale developed by Solak and Çakır (2014) and journal entries of pre-service English language teachers before and after the procedure. The results indicate a statistically significant improvement in TPACK scores of both male and female pre-service English language teachers. The journal entries clearly indicate an increase in several possible applications or websites that can be used in the classroom with more effective and to the point objectives. The pre-service English language teachers have also displayed better performance in manufacturing and tailoring language learning/teaching materials with specific goals.
Introduction
Today children are growing up with technology; it is an indispensable part of their lives. However, it is a known fact that teachers' technology-related knowledge, skills and competencies fall short when compared with those of their technology-native students (Belland, 2009; Yalin, Karadeniz, & Şahin, 2007; Lim & Khine, 2006). This means that there is a lack not only in teachers' technology use but also in their integration of technology into their pedagogical applications.
The mere availability of more technology in schools today does not automatically guarantee better use and effectiveness. Teachers should be trained to make the best of educational technologies to support students' learning. This specific knowledge needed to optimize technology to support students' learning of the subject is termed technological pedagogical content knowledge (TPACK) (Mishra & Koehler, 2006). Given the fact that several governments are investing in educational technologies, teacher education programs should equip their graduates accordingly. Technology can be used to improve student learning, support students and parents, make the school more engaging and relevant for the learners, provide equal opportunities for disadvantaged students, and allow for and support teacher professional development (Zuker, 2008).
Teaching and learning is defined as a complex activity that draws on many kinds of knowledge (Mishra & Koehler, 2006, p. 1020). In the past, the teaching and learning environment was defined as the intersection of two main domains: pedagogical and content knowledge (PCK). The idea was first proposed by Shulman (1986). PCK refers to the unique form of professional knowledge that teachers possess in making the content knowledge accessible to students through pedagogical methods (Chai, Koh, & Tsai, 2013). Today, with the effects of educational technologies, Shulman's idea has been built on by adding a new technology component. With the emergence of technological, pedagogical and content knowledge (TPACK), technology-supported courses have gained priority. In very broad terms TPACK can be defined as a framework which synthesizes digital technologies into classroom teaching and learning. The core components of TPACK are content knowledge (CK), pedagogical knowledge (PK), and technological knowledge (TK). These three basic forms of knowledge have overlapping parts, namely pedagogical content knowledge (PCK), technological content knowledge (TCK), and technological pedagogical knowledge (TPK). In research on language teacher education in the country, the technological pedagogical content knowledge of classroom English language teachers was also found to be low. Öz (2015) assessed pre-service English as a foreign language teachers' technological pedagogical content knowledge. The findings revealed a highly developed knowledge of TPACK. Gender differences were found to be significant with respect to the Technological Knowledge and Pedagogical Knowledge dimensions, with females proportionally having higher TPACK development. In a similar study Solak and Çakır (2014) examined pre-service EFL teachers' TPACK competencies in Turkey in terms of gender and academic achievement. The results of the research suggest that males' technological knowledge was higher than females', while females were better than males in pedagogical knowledge. Moreover, no significant difference was found between the TPACK mean and academic achievement.
The study by Tai (2013) focused on the effects of TPACK-in-Action workshops on English classrooms. The study used an observation instrument based on the TPACK framework (Mishra & Koehler, 2006) to investigate the impact that TPACK-in-Action workshops had on English teachers in Taiwan. Findings showed that the TPACK-in-Action CALL workshops had a strong and positive impact on elementary English teachers in Taiwan.
Though many teachers do not ignore the possible benefits of using digital resources to help students' academic achievement, several studies indicate teachers may be reluctant to use or integrate technology to support their classes (Conlon & Simpson, 2003; Cuban, 2001; Watson, 2001). Among several reasons, not knowing how to effectively use technology can be cited as a major cause. Knowledge about technology is complex and dynamic.
The ever-changing nature of technology requires constant updating. Staying current might be time-consuming for teachers, yet it is inevitable that teachers acquire TPACK. To this end, a constructivist approach is thought to be effective, in that knowledge is constructed through interactive experiences with the world and others. A perspective assuming experience as a necessary condition for the acquisition of knowledge might imply that training and workshop sessions help pre-service teachers acquire and improve TPACK.
To this end, the present study employed a five-week training and workshop program on TPACK for pre-service English language teachers in a state university as a part of their course requirement. The following research questions guided the study: 1) Will the TPACK of pre-service English language teachers improve as a result of TPACK training and workshops?
2) Does the TPACK of pre-service English language teachers differ according to gender?
3) What are the opinions of pre-service English language teachers about TPACK training and workshops?
Research Design
The study employed a mixed design involving both qualitative and quantitative research methods. Quantitative data were gathered through the TPACK Scale developed and validated by Solak and Çakır (2014). The pre-service English language teachers also kept journals prior to and after the training and workshops. Content analysis of the journals provided the qualitative data.
Participants and Setting
The participants of the study were 59 pre-service teachers in an English Language Teacher Training Program at a state university. The participants were in their third year of training, the year in which methodology courses (the courses about pedagogy and language teaching) are most concentrated. Other information about the participants is presented in Table 1. The gender ratio was typical of teacher training programs in Turkey. The average participant was a pre-service English language teacher possessing a personal computer, using social media and spending between 1 and 7 hours on the net. That none of the participants was a computer novice was important, because it might be assumed that already-acquired familiarity with digital technologies and computer literacy eliminated the barriers resulting from a lack of computer skills.
Data Collection and Analysis
The pre-service English language teachers took the TPACK inventory before and after the TPACK training and workshops. The TPACK scale was developed by Solak and Çakır (2014). The data were analyzed using the SPSS 21 program. The Kolmogorov-Smirnov test confirmed normal distribution (.200), and paired-samples t-test results and descriptive statistics are presented.
The participants were also asked to keep journal entries before and after the TPACK training and workshops.
The pre-service English language teachers were asked to write about how ICT skills, their pedagogical knowledge and their ELT knowledge could be integrated to teach English. They were also asked to describe actual educational technology activities to teach English effectively, both before and after the training and workshops. The journal entries were analyzed through the Constant Comparison Method suggested by Miles and Huberman (1994), which yields themes and patterns as a result of sorting, coding and connecting pieces of data. Two raters sorted, coded and identified the categories separately to ensure the reliability of the qualitative analysis. For inter-rater reliability, the formula suggested by Tawney and Gast (1984) was used: the number of agreements divided by the sum of agreements and disagreements, multiplied by 100. The inter-rater reliability of the qualitative data in the study was found to be .90, which indicated a high consensus on the coding and categorisation of the data (Gwet, 2014). The paired-samples t-test results indicated a significant difference (p = 0.025) between the pre- and post-test results. The total means showed that the pre-service English language teachers improved their total scores in the post-test. It can be concluded that the TPACK training and workshops served the purpose.
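The agreement calculation can be made explicit; the counts below are hypothetical and only illustrate the Tawney and Gast (1984) formula quoted above:

def interrater_agreement(agreements, disagreements):
    # Point-by-point agreement: agreements / (agreements + disagreements) * 100
    return agreements / (agreements + disagreements) * 100.0

# Example: two raters agreeing on 45 of 50 coded journal segments
print(interrater_agreement(45, 5))  # 90.0, i.e. the .90 level of agreement reported above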
Analysis of the TPACK Scale
A more detailed analysis allowed comparison of the subcategories of the scale in terms of the participants' mean scores in the pre- and post-applications of the scale; the findings are presented below. As for the sub-categories of the scale, the pre-service English language teachers' performance indicated statistically significant differences in Content Knowledge, Pedagogical Knowledge and Pedagogical Content Knowledge. The mean scores for each section revealed an improvement in the post-scores. It may be concluded that the training and workshops improved the content and pedagogical knowledge of the pre-service English language teachers.
In order to investigate the gender factor, male and female pre-service English language teachers' total scores on the TPACK scale were compared. The scores of male and female pre-service English language teachers presented a statistically significant difference between the pre- and post-applications of the TPACK Scale. Both groups increased their mean scores; however, the increase for the female pre-service English language teachers was considerably higher, even though their scores were lower at the beginning.
Analysis of the Journal Entries
The qualitative analysis of the journals before the TPACK training yielded the results below.

Table 5. Journal entries before the TPACK training and workshops (columns: software and applications; purpose).

Before the TPACK training and workshops the pre-service English language teachers listed several applications, websites and pieces of hardware as new technology to be used in the classroom. They emphasized the potential of new technologies to foster motivation and interest in the lessons, stating that these applications could spice up their classrooms. Most of those cited were well-known applications and websites such as Facebook, Twitter, Skype and Youtube. However, as for the purpose, they could only mention very limited and superficial uses such as improving listening, pronunciation and vocabulary; they failed to explain how learning objectives could be achieved. An extract from a pre-service English language teacher says: "I know technology is a part of our lives in this era. The smart boards, smart phones, computers and etc. are in our lives. I know I should use them in education too. But how? I don't have sufficient knowledge and skills about it". Similarly, although the pre-service English language teachers were aware of the individual differences of students, such as multiple intelligences and learning styles, they could not provide the procedures for how to cater for different students by integrating technology, content knowledge and appropriate pedagogy.
A similar qualitative analysis was carried out after the TPACK training and workshops and the findings are below.

Table 6. Journal entries after the TPACK training and workshops (columns: software and applications; purpose).

After the TPACK training and workshops, the journal entries of the pre-service English language teachers revealed a much larger list of online applications, software and websites. Most of these applications and websites were related to educational purposes, and the range of possible purposes for using technology had increased a great deal. Moreover, the pre-service English language teachers' awareness about how to optimize learning conditions had expanded. They could clearly specify how these new technologies could be used in the classroom to increase motivation and to achieve intended learning outcomes. The journals revealed descriptions of several actual classroom procedures and materials developed by the pre-service English language teachers. One example described by one of the pre-service English language teachers is: "we created an animated conversation between four kids using Goanimate. They were from different cultures: British, Indian, Chinese and African. Then we used Glogster to prepare posters about their cultures and gathered a lot of information. Finally we used Wordle to teach some vocabulary". As a result of the materials development workshops, the pre-service English language teachers gained knowledge and skills in integrating technology with their content knowledge and pedagogical knowledge effectively. The time, thought and effort spent appeared to pay off, as the pre-service English language teachers could describe much more specifically what could be done. They listed a wide range of possible uses of technology to achieve learning outcomes in the classroom. These included the skills, grammar, vocabulary and pronunciation as well as cultural and motivational aspects, cross-curricular tasks, classroom management, student-centeredness and materials development. The pre-service English language teachers pointed out that it was the pedagogical knowledge that helped them understand and determine the needs of students, and the content knowledge along with it, to prepare learning environments and materials accordingly to meet the desired learning outcomes.
Conclusion
Being able to teach with technology requires an understanding of how technology, pedagogy and content interact to support student learning; to be precise, it involves the skill and knowledge to make use of a digital tool or application, with all its features, limitations and possibilities, to support students' learning of a given topic or content. To this end, this study aimed to investigate the effect of the TPACK workshops and training on pre-service teachers.
Both qualitative and quantitative data gathering methods were used in the study. The data gathered from the TPACK scale used before and after the training suggested a statistically significant increase in the pre-service English language teachers' level of TPACK. This result is in parallel with the findings of other studies (Tai, 2013; Kurt, Mishra, & Koçoğlu, 2013). When we look at the sub-categories of the TPACK scale, we see that not only did the overall TPACK levels of the pre-service English language teachers reveal an improvement, but the training and the workshops also had a positive effect on the subcategories of TPACK such as content knowledge, pedagogical knowledge and pedagogical content knowledge. This finding has vital importance since being able to teach effectively with technology requires an understanding of how technology, pedagogy and content interact with each other meaningfully.
As for the second research question, both groups showed an increase in their mean scores on the post-test, but the statistical results suggested a higher increase for the female pre-service English language teachers when compared with the males. Previous studies (Öz, 2015; Solak & Çakır, 2014) revealed a significant difference in favor of males in terms of technological knowledge, while females in those studies scored higher than males in pedagogical knowledge. Similarly, in this study the males scored higher than the females in technological knowledge on the pre-test. However, although both groups showed a statistically significant increase in their TPACK levels, the female pre-service English language teachers revealed a much greater improvement in their technological knowledge after the training.
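For readers who want to reproduce this kind of comparison, the sketch below illustrates the two tests implied above, a paired pre/post comparison and an independent-groups comparison of gains by gender, using scipy; the scores and the gender split are simulated placeholders, not the study's data.

# Illustrative only: simulated TPACK scores, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.normal(3.5, 0.5, size=60)           # pre-training TPACK scores (e.g., 1-5 scale)
post = pre + rng.normal(0.4, 0.3, size=60)    # post-training scores, shifted upward

# Paired t-test: did TPACK scores increase after the training and workshops?
t_paired, p_paired = stats.ttest_rel(post, pre)

# Independent t-test: do the pre-to-post gains differ between female and male participants?
gains = post - pre
is_female = rng.random(60) < 0.6              # placeholder gender indicator
t_group, p_group = stats.ttest_ind(gains[is_female], gains[~is_female])

print(f"pre/post: t={t_paired:.2f}, p={p_paired:.4f}")
print(f"female vs male gains: t={t_group:.2f}, p={p_group:.4f}")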
The study suggested that the training and workshop studies were effective in improving pre-service English language teachers' awareness of possible and effective uses of digital technologies in the classroom for educational purposes. The pre-service English language teachers were able to gain knowledge and skills in integrating technology with their content knowledge and pedagogical knowledge by producing actual learning materials. Therefore, it is suggested that TPACK training and materials development workshops should be integrated into teacher training programs.
Keeping a diary and sharing it with the teacher; online writing centers for academic writing. Grammar: extending mechanical drills and restricted activities outside the class through ESL worksheets, printables, quizzes and cloze tests using Hot Potatoes; grammar can be integrated with other skills. Integrated skills: webquests and internet-based research tasks; digital story websites to create students' own stories. Cross-curricular tasks: improving collaboration and critical thinking with different intelligences and learning styles; various materials for different levels, different interests and different developmental stages. Materials development: teachers can create their own animations and stories through Telegami, Goanimate and Storybird; teachers can tailor the coursebook for their students' needs.
Table 1. Demographics of the participants
Table 2. T-test results of total TPACK scores before and after the training and workshops
Table 4. Comparison of TPACK scores according to gender
|
v3-fos-license
|
2023-01-17T17:08:15.835Z
|
2023-01-13T00:00:00.000
|
255903582
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.scielo.br/j/cr/a/Sxp7N89NZ4JgLdkfLVYqsds/?format=pdf&lang=en",
"pdf_hash": "e2b31c2dce7b4b09de7636a34538df9e6631c439",
"pdf_src": "Dynamic",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42456",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"sha1": "1d75fa560bedbc23baf024244730f6dd443d241d",
"year": 2023
}
|
pes2o/s2orc
|
Growth, yield and nutrients of sweet cassava fertilized with zinc
ABSTRACT: The application of zinc fertilizers in the soil has been an agronomic practice to correct Zn deficiency in plants, aiming to increase productivity and/or nutritional quality. This study evaluated how zinc sulfate fertilization affects plant growth, yield performance and nutrient accumulation in the cassava 'IAC 576-70'. The experimental design was in randomized blocks with eight replications. The treatments consisted of 0, 1.5, 3.0, 4.5 and 6.0 g pl-1 ZnSO4. Results showed improvement in yield with soil fertilization with ZnSO4, with the optimal dose of 2.5 g pl-1. The uptake of nutrients in plant parts is favored with lower doses of zinc fertilizer, with maximum points ranging from 0.8 to 3.2 g pl-1 for macronutrients and 1.6 to 3.6 g pl-1 for micronutrients. The Zn content in tuberous roots increases by more than 40% with fertilization up to 2.8 g pl-1 of fertilizer, which contributes to the nutritional value of roots.
INTRODUCTION
Cassava (Manihot esculenta Crantz) is a vital source of energy for both global food security and the Brazilian people. Cassava is an energy-dense food and is, therefore, rated high for its caloric value, based on its carbohydrate content, providing 250 Kcal/ha/day compared to 200 Kcal/ha/day for corn, 176 Kcal/ha/day for rice, 114 Kcal/ha/day for sorghum and 110 Kcal/ha/day for wheat (EL-SHARKAWY, 2012; LEONEL et al., 2015; FAOSTAT, 2018; BAYATA, 2019; BYJU & SUJA, 2020).
The cassava crop requires adequate nutrition to maintain high production, as it absorbs large amounts of nutrients and exports around 1.27, 0.52, 3.02, 0.76, 0.60 and 0.36 kg of N, P, K, Ca, Mg and S, and 16.0, 1.51, 0.68, 2.23 and 2.43 g of Fe, Mn, Cu, Zn and B, respectively, per tonne of roots produced. Thus, although it is considered a crop adapted to low-fertility soils, the plant's demands must be met by fertilizers at economically adequate levels (NGUYEN et al., 2002; LEONEL et al., 2015; EZUI et al., 2016).
Micronutrients play a central role in plant metabolism maintenance, growth and production, stress tolerance and disease resistance (SHAHZAD & AMTMANN, 2017). Zn is important for enzyme activation, regulation, and gene expression in plants, as well as protein synthesis, glucose metabolism, photosynthesis, phytohormones, fertility, growth regulation, seed development, and disease tolerance (TAIZ et al., 2017; REHMAN et al., 2018; RAI et al., 2021). LEKSUNGNOEN et al. (2022) highlighted
that the interaction between the soil Zn concentration and the cassava Zn concentration is poorly understood and that the Zn input from weathering is insufficient for the production of cassava.
Increases in Zn content in plants subjected to various treatments involving supplementation of this nutrient depend on genotypes, application methods, element concentration, and interactions with other elements (WHITE & BROADLEY, 2011; KACHINSKI, 2019). Fertilizers such as ZnSO4, which are soluble in water, are often more efficient, as their rapid dissolution quickly increases the concentration of Zn in the soil solution, which can result in greater plant uptake (MATTIELLO et al., 2021).
Appropriate Zn fertilization can promote growth by improving photosynthetic performance and chlorophyll synthesis, in addition to decreasing oxidative damage to the cell membrane induced by adverse environmental conditions.However, excess doses of Zn interfere with the absorption of essential elements and result in heavy metal toxicity (NATASHA et al., 2022).
Zinc deficiency in soil has increased the number of studies on agronomic biofortification of the world's staple food crops (VALENÇA et al., 2017). JOY et al. (2015) modelled the potential of Zn-enriched fertilizers to alleviate dietary Zn deficiency, focusing on ten African countries with zinc deficiency. Their results showed that agronomic biofortification can increase the amount of absorbable Zn in the diet by 5%.
Micronutrient deficiencies (hidden hunger) have become a silent epidemic and inadequate Zn intake is quite substantial, affecting approximately two billion people worldwide, most of them pregnant women and children.Zn deficiency can cause anemia, dermatitis, growth retardation, affect reproductive capacity and mental function, with results showing that zinc supplementation reduced the incidence of diarrhea and respiratory infections in children (WESSELLS & BROWN, 2012;LIVINGSTONE, 2015;OKWUONU et al., 2021).
Given the importance of cassava as a staple food crop and the need for a balanced approach to Zn fertilization to achieve increased food production in a sustainable and responsible manner, this study verified how ZnSO 4 doses affect plant growth, yield performance and nutrient absorption in the sweet cassava 'IAC 576-70'.
MATERIALS AND METHODS
The experimental study was done in Botucatu, in the state of São Paulo, Brazil.The geographic coordinates are 22º59' S; 48º30' W, with an altitude of 778 meters above sea level.
The experiment was conducted in a randomized block design with eight replications. Zinc sulfate (ZnSO4·7H2O, with 20% zinc) was employed as the zinc source, and five doses of ZnSO4 were applied: 0, 1.5, 3.0, 4.5, and 6.0 g pl-1. A 310 L plastic box containing a cassava plant was used to represent each plot. The plants were spaced at a distance of 1.00 × 1.5 m (Figure 1).
For cassava planting, the soil was first poured into the 310 L boxes, which had a height of 0.54 m and a diameter of 1.04 m. The planting fertilization consisted of 100 g pl-1 of P2O5, 25 g pl-1 of K2O, and 0.88 g pl-1 of boron. Simple superphosphate (18% P2O5), potassium chloride (60% K2O) and boric acid (17% B) were used as the sources of P, K and B.
Pits were opened for fertilizing, and the fertilizers were mixed into the soil of the pits. After that, one cassava stem cutting was planted horizontally in each pit, and the pit was manually filled with soil. Cassava stem cuttings 15 cm in length were obtained from the middle third of 12-month-old plants. Planting was completed on April 25, 2019. Nitrogen was applied as urea (45% N) at 35 days after planting (DAP) at a rate of 13.64 g pl-1 (equivalent to 40 kg ha-1).
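As a rough cross-check on these rates, the per-plant doses can be related to per-hectare equivalents through the 1.00 × 1.5 m spacing reported above; the short sketch below is for illustration only and approximately reproduces the reported urea dose (about 13.3 g versus the 13.64 g pl-1 stated in the text).

# Illustrative conversion between per-plant and per-hectare fertilizer rates,
# based on the 1.00 m x 1.5 m spacing and the 40 kg ha-1 N rate reported above.
plant_area_m2 = 1.00 * 1.5                              # ground area occupied per plant
plants_per_ha = 10_000 / plant_area_m2                  # about 6,667 plants per hectare

n_rate_kg_ha = 40                                       # nitrogen rate (kg N ha-1)
n_per_plant_g = n_rate_kg_ha * 1000 / plants_per_ha     # about 6.0 g N per plant
urea_per_plant_g = n_per_plant_g / 0.45                 # urea is 45% N, so about 13.3 g urea per plant

print(f"{plants_per_ha:.0f} plants/ha, {n_per_plant_g:.1f} g N/plant, {urea_per_plant_g:.1f} g urea/plant")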
The crop was irrigated using a drip irrigation system, which met the crop's water requirement. Pest and disease control was carried out in accordance with the crop's requirements and technical guidelines. The plants were harvested at 368 DAP.
The number of stems and leaves per plant was determined by counting. The diameter of the stems was measured at a height of 10 cm from the soil surface. Plant height was determined from the soil surface to the highest point of the plant. The length of the roots was measured from one end to the other, and the diameter was determined in the region of the middle third with a caliper.
Plant parts were weighed to obtain fresh matter values. Then, samples of fresh material were dehydrated in an oven with forced air circulation at 65 °C until reaching constant weight. After drying, the material was weighed and the dry matter accumulated in each part of the plant was calculated.
The N concentration in the plant tissues was determined by sulfuric acid (H 2 SO 4 ) digestion and quantified using the semi-micro-Kjeldahl method.P, K, Ca, Mg, S, Cu, Fe, Mn, and Zn concentrations were determined by atomic absorption spectrophotometry after nitric acid (HNO 3 ) -perchloric acid (HClO 4 ) digestion (MALAVOLTA et al., 1997).
The amounts of accumulated nutrients in each plant organ were calculated by multiplying the concentrations of nutrients by the accumulated amount of dry matter in each plant organ.
The data were subjected to analysis of variance. Regression analysis was used to assess the effect of ZnSO4 doses (P ≤ 0.05), and the linear or quadratic model with the highest coefficient of determination (R²) was selected (P ≤ 0.05). Sisvar software was used for the statistical analysis, while Excel was used to create the graphics.
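To make the model-selection step concrete, the sketch below fits linear and quadratic regressions to an invented dose-response series, keeps the model with the higher R², and reports the dose at the quadratic vertex (the "maximum point" referred to throughout the results); the original analysis was run in Sisvar, so this Python version is only an illustration under assumed data.

# Illustrative model selection for a dose-response trial (invented data).
import numpy as np

doses = np.array([0.0, 1.5, 3.0, 4.5, 6.0])               # g pl-1 ZnSO4
response = np.array([28.0, 33.5, 34.0, 31.0, 26.5])       # hypothetical yield values

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

lin = np.polyfit(doses, response, 1)
quad = np.polyfit(doses, response, 2)
r2_lin = r_squared(response, np.polyval(lin, doses))
r2_quad = r_squared(response, np.polyval(quad, doses))

# Keep the model with the higher coefficient of determination.
if r2_quad > r2_lin:
    a, b, c = quad
    maximum_point = -b / (2 * a)    # vertex of the quadratic, i.e. the estimated optimal dose
    print(f"quadratic model (R2 = {r2_quad:.3f}), maximum point at {maximum_point:.1f} g pl-1")
else:
    print(f"linear model (R2 = {r2_lin:.3f}), slope {lin[0]:.2f} per g pl-1")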
RESULTS AND DISCUSSION
The growth parameters were influenced by the doses of ZnSO4 tested in the cultivation of cassava 'IAC 576-70' (Figure 2). Cassava plants had an increase in the height of the main stem with zinc fertilization. Stem diameter and number of leaves increased, with maximum points at doses of 2.7 and 2.8 g pl-1, respectively. The number of roots per plant was positively affected by ZnSO4 fertilization, but shorter roots were produced.
The effects of zinc fertilization on growth parameters are due to the fundamental roles of this nutrient in numerous biochemical pathways of plants, such as the synthesis of auxin, a growth regulator (AIRES, 2009).
Fertilization with ZnSO4 interfered with the accumulation of dry matter (DM) in parts of the cassava plant, with the exception of the seed stem. The total amount of dry matter accumulated in the plant increased up to the dose of 2.8 g pl-1, with a decrease at higher doses (Figure 3). CAMPOS (2000) found that a dose of 2.04 g pl-1 ZnSO4 enhanced the DM of tuberous roots, and MALAVOLTA et al. (1997) explained that the reduction in DM production in plants subjected to high levels of zinc is due to the accumulation of Zn-containing plugs in the xylem, which hinder the ascent of xylem sap.
Yield was positively affected by fertilization with ZnSO 4 , with a maximum point at 2.8 g pl -1 .
Zinc is a cation that interacts with almost all plant nutrients present in the soil, especially anions.REHMAN et al. (2018) reported that Zn interacts positively with N, K, Mg, and negatively with P, Mn and B.
Fertilization of sweet cassava with ZnSO4 most markedly affected the accumulation of macronutrients in the plant shoot (leaves and stems), but also affected the accumulation of N, P, K, Ca and Mg in the tuberous roots. In general, the use of high doses of ZnSO4 in the fertilization of cassava 'IAC 576-70' decreased the accumulation of macronutrients in the plant parts, with variations in the maximum accumulation points among the nutrients (Figures 4 to 6). The accumulation of N in the leaf was higher up to the estimated dose of 2.6 g pl-1 ZnSO4 and in the stem up to 0.8 g pl-1 ZnSO4, with a decline beyond these doses, and it declined linearly in the seed stem (Figure 4). Plant productivity is largely determined by the interaction between carbon and N metabolism, with N assimilation resulting directly or indirectly from photosynthesis. The role of zinc in these processes can be seen in the effect of doses on N accumulation in leaves and stems, where higher doses had a negative effect on this nutrient. In addition, the toxic effect of zinc on chlorophyll can be observed indirectly through N, since 50% of the total N in leaves is part of the chloroplast and leaf chlorophyll compounds (CHAPMAN & BARRETO, 1997). KUTMAN et al. (2011) reported a positive relationship between N and Zn in plants, with N increasing the uptake of Zn by the roots, as well as its translocation to the shoot.
The accumulation of P in the leaf, stem, and tuberous root was usually larger than the control, with the maximum accumulation at 2.5, 1.4, and 3.2 g pl -1 ZnSO 4 dose, respectively (Figure 4).
The accumulation of K in the leaf, stem and tuberous root followed a similar pattern, being higher than the control and decreasing after ZnSO 4 doses of 2.9, 2.1, and 2.5 g pl -1 , respectively (Figure 5).
Regardless of the levels of zinc sulfate fertilization, the aerial part of cassava (stem and leaves) showed the highest calcium accumulations, with an effect of fertilization levels on the accumulation of this nutrient in leaves, seed stems and tuberous roots (Figure 5). Increasing levels of zinc fertilization increased Ca accumulation in leaves, with a decrease at the highest dose. The accumulation of Ca decreased in the seed stem with increasing fertilization. Increased accumulation in roots was observed only at the lowest dose. These results showed the reduction of Ca availability under high doses of ZnSO4, as reported by PRASAD et al. (2014).
Mg in the leaf, stem and root, and S in the stem and root, all behaved the same way, being greater than the control and decreasing after doses of 2.7, 2.6, 2.1, and 2.6 g pl-1 ZnSO4, respectively (Figure 6). The lower concentration of Mg might be due to the physiological response of the plant to the highest Zn concentration in solution, which may have affected the uptake system and thus lowered the apparent concentration.
The effects of zinc fertilization on micronutrients were variable among nutrients and, for each nutrient, among plant parts. Data analysis revealed that ZnSO4 doses had no effect on Cu in cassava leaves. Doses had no influence on the accumulation of Zn and Mn in the seed stem (Figure 7).
In the aerial part of cassava plants, the increase in the levels of zinc fertilization increased the accumulation of iron (Fe); however, with decreases in the highest doses.For the seed stem and tuberous roots, decreases in iron accumulation were observed with fertilization (Figure 7).The decrease of Fe may be due to competitive interactions with Zn, which probably occur at the absorption sites of plant roots.
Mn had the greatest accumulation in leaves and stems (Figure 6), as Mn is preferentially translocated to the plant shoot, where it acts in the photosynthetic processes of the plant (TAIZ et al., 2017). Fertilization with ZnSO4 negatively affected the accumulation of manganese in tuberous roots (Figure 7). The adverse relationship between Zn and Mn was also described by BARBEN et al. (2010), who observed that Mn concentrations in potato plant tissues decreased with increasing Zn concentration in the nutrient solution. In the absence of ZnSO4, copper (Cu) accumulation was higher in the seed stems, although the maximal accumulation in the stem was higher up to the estimated dose of 2.1 g pl-1 of ZnSO4 (Figure 8). Cu is a key micronutrient for crops because it regulates enzymatic activity in the photosynthetic and respiratory functions of shoot tissues (KIRKBY & RÖMHELD). Zinc uptake varies among plant species and depends mainly on the concentration and composition of the growth medium. Zinc translocation from roots to plant tissues occurs through the symplast and apoplast (TAIZ et al., 2017).
Zn is absorbed predominantly as Zn 2+ , and soil texture, pH, organic matter, microbial activity and concentrations of P and cationic elements affect the availability of Zn for plant absorption (ALLOWAY,
Figure 1 - Image of the installation of the experiment and cassava plants.
|
v3-fos-license
|
2018-04-03T00:16:00.328Z
|
2015-03-25T00:00:00.000
|
236357
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://downloads.hindawi.com/journals/bmri/2015/203947.pdf",
"pdf_hash": "cb64f37405a2d81174411a24e05705eb3c69b946",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42458",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "0cbff59acfd43acabd57ecde8a6b823f7651883a",
"year": 2015
}
|
pes2o/s2orc
|
Daycare Attendance, Breastfeeding, and the Development of Type 1 Diabetes: The Diabetes Autoimmunity Study in the Young
Background. The hygiene hypothesis attributes the increased incidence of type 1 diabetes (T1D) to a decrease of immune system stimuli from infections. We evaluated this prospectively in the Diabetes Autoimmunity Study in the Young (DAISY) by examining daycare attendance during the first two years of life (as a proxy for infections) and the risk of T1D. Methods. DAISY is a prospective cohort of children at increased T1D risk. Analyses were limited to 1783 children with complete daycare and breastfeeding data from birth to 2 years of age; 58 children developed T1D. Daycare was defined as supervised time with at least one other child at least 3 times a week. Breastfeeding duration was evaluated as a modifier of the effect of daycare. Cox proportional hazards regression was used for analyses. Results. Attending daycare before the age of 2 years was not associated with T1D risk (HR: 0.89; CI: 0.54–1.47) after adjusting for HLA, first degree relative with T1D, ethnicity, and breastfeeding duration. Breastfeeding duration modified this association, where daycare attendance was associated with increased T1D risk in nonbreastfed children and a decreasing T1D risk with increasing breastfeeding duration (interaction P value = 0.02). Conclusions. These preliminary data suggest breastfeeding may modify the effect of daycare on T1D risk.
Background
Type 1 diabetes (T1D) is an autoimmune disease where the body's immune system destroys the pancreatic beta cells that produce insulin. The incidence of T1D is increasing at roughly 3% globally, with the greatest increase of incidence in children younger than 4 years of age [1]. It is likely that an individual with the genetic makeup for diabetes will not develop T1D without an immunologic trigger that initiates the autoimmune response [2]. While the autoimmune pathophysiology of T1D has been established, a deeper understanding of this trigger has remained elusive.
The hygiene hypothesis proposes that the recent increase in incidence of T1D is due to increased hygiene and low pathogen burden environments [3]. Exposures to infectious agents early in life are hypothesized to activate regulatory pathways in our immune system that suppress development of autoimmunity and thus T1D [4]. Social mixing is a variable used to encompass the numerous exposures to infectious agents that individuals experience when sharing space together. Social mixing captures asymptomatic or minor infections that would otherwise not be reported or recalled. Previous studies used social mixing as a proxy for infections to test the hygiene hypothesis and have observed lower risk of T1D in high social mixing environments [5,6]. Parslow et al. observed a significant association with higher incidence of T1D for children 0-14 years of age in areas with low levels of social mixing [7]. In Scotland, Patterson and Waugh examined social mixing socioeconomically and geographically and found that incidence of T1D was lower in deprived urban areas compared with affluent rural areas [8]. In Austria, Schober et al. examined social mixing through population density and observed protection from T1D in areas with high percentages of children less than 15 years of age [5].
Daycare offers social mixing during critical immune development stages early in life. Like social mixing, attending daycare can be used as a proxy for measuring asymptomatic or minor infections to test the hygiene hypothesis. McKinney et al. found evidence that social mixing through daycare attendance early in life protected against the development of T1D [6]. A meta-analysis of several case-control studies showed a statistically significant protective effect of daycare on the risk of T1D [9]. The previous studies examining daycare attendance and the risk of developing T1D have been retrospective; and the authors have recommended that future studies analyze this association prospectively. This study will attempt to close the gap on the lack of prospective analysis by examining daycare attendance and the risk of developing T1D prospectively using the Diabetes Autoimmunity Study in the Young (DAISY) cohort.
Breastfeeding has also been shown to be protective in the risk of developing T1D, albeit inconsistently [10,11]. It is believed that breastfeeding provides immune support through immunoglobulin A antibodies and increased -cell proliferation [12] to protect against infections and thus reduce the risk of T1D.
We hypothesized that daycare attendance is associated with a decreased risk of developing T1D in children in DAISY. We further hypothesized that the effect of daycare attendance is modified by breastfeeding.
Study Population.
DAISY is a prospective study of children in Colorado who are at increased risk of developing T1D. It includes children born at St. Joseph's Hospital in Denver who were screened by umbilical cord blood for diabetes-susceptibility alleles in the human leukocyte antigen (HLA) region. It also includes unaffected children recruited between birth and 8 years of age with a first degree relative that has T1D. For these analyses, we included only the DAISY children who had a clinic visit before 1.35 years of age and who had prospective daycare exposure data from birth until two years of age and complete breastfeeding duration data. Interviews collecting diet and daycare data were completed at 3, 6, 9, 12, 15, and 24 months and then annually thereafter. Clinic visits occurred at 9, 15, and 24 months and annually thereafter for the tracking of autoimmunity and T1D.
The following descriptive factors were examined: HLA genotype (HLA-DR3/4, DQB1 * 0302 versus others), first degree relative with T1D (mother versus father or sibling versus none), birth order (first/only child versus second child or more), sex (female versus male), race/ethnicity (non-Hispanic white versus other race/ethnicity), maternal age at child's birth, maternal education (>12 years versus ≤12 years), crowding (≥1 person/room versus <1 person/room at 6 months of age), and breastfeeding duration (in months). Crowding was calculated by taking the reported number of persons living in a household and dividing this by the number of rooms in the household, not including bathrooms, when the child was six months of age.
Daycare Measure
Daycare information was collected by parent interview with the following query: "Does [the child] attend daycare (family daycare home or daycare center) or preschool on a regular basis?" Follow-up questions regarding the size of the daycare/preschool class and the frequency of attendance were asked. The daycare variable used in this study was defined as supervised time with at least one other child, not including a sibling, at least three times a week.
Breastfeeding Duration Measure.
Breastfeeding duration was defined as the length of time, in months, that the child was breastfed, either partially or exclusively.
2.4. Diagnosis of Type 1 Diabetes. T1D was diagnosed by a physician based on symptoms of excessive urination and/or excessive thirst with at least a glucose level greater than 200 mg/dL, a fasting plasma glucose level at or above 126 mg/dL, or an oral glucose tolerance test with a 2-hour glucose level at or above 200 mg/dL.
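A minimal sketch of the laboratory thresholds in this case definition, written as a small helper function; the function and argument names are illustrative only and are not part of the DAISY protocol, and in the study the diagnosis also required physician assessment and symptoms.

# Glucose thresholds (mg/dL) taken from the case definition above; illustrative helper only.
def meets_t1d_glucose_criterion(random_glucose=None, fasting_glucose=None, ogtt_2h_glucose=None):
    """Return True if any single laboratory criterion from the text is met."""
    if random_glucose is not None and random_glucose > 200:
        return True
    if fasting_glucose is not None and fasting_glucose >= 126:
        return True
    if ogtt_2h_glucose is not None and ogtt_2h_glucose >= 200:
        return True
    return False

print(meets_t1d_glucose_criterion(fasting_glucose=131))  # True
print(meets_t1d_glucose_criterion(random_glucose=185))   # False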
Analysis Population.
Of the 2,632 children followed by DAISY, 1,856 children were followed from birth; that is, they had a clinic visit before 1.35 years of age. Of these, 1,799 children had prospective daycare exposure data. From these, 16 were excluded due to missing breastfeeding duration or ethnicity information, leaving 1,783 children in the analysis cohort. The analysis cohort included 58 children who developed T1D during follow-up of an average of 8.5 years (range 0.9-17.4 years). Three children developed type 1 diabetes before 2 years of age (at ages 0.9, 1.8, and 1.9 years). In these instances, only the information regarding daycare attendance prior to the development of diabetes was used to determine their daycare exposure variable.
2.6. Statistical Analysis. The SAS version 9.3 (SAS Institute Inc.) statistical software package was used for all statistical analyses. Hazard ratios (HR) and 95% confidence intervals (CI) were estimated using Cox regression, to account for right-censored data. Follow-up time began at birth. A clustered time to event analysis was performed treating siblings from the same family as clusters, and robust sandwich variance estimates were used for statistical inference [13]. Based on our a priori hypothesis, we tested the significance of an interaction between the dichotomous daycare attendance variable and continuous breastfeeding duration variable; interaction models contained the base terms and the interaction term. The significance of the interaction term was determined by improvement in model fit as indicated by the chi-squared statistic from the likelihood ratio test.
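A minimal sketch of the same modelling ingredients, Cox regression with family clustering, robust (sandwich) variance, and a daycare × breastfeeding interaction, using the Python lifelines package; the file name and column names are assumptions for illustration, and the study itself was analyzed in SAS 9.3, so this is an outline rather than a replication.

# Outline of the Cox model with a daycare x breastfeeding-duration interaction (lifelines).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("daisy_analysis_cohort.csv")   # hypothetical file and column names

# Interaction between daycare attendance (0/1) and breastfeeding duration (months).
df["daycare_x_bf"] = df["daycare"] * df["bf_months"]

covariates = ["daycare", "bf_months", "daycare_x_bf",
              "hla_dr34", "fdr_t1d", "non_hispanic_white"]

cph = CoxPHFitter()
cph.fit(df[["followup_years", "t1d", "family_id"] + covariates],
        duration_col="followup_years",
        event_col="t1d",
        cluster_col="family_id",   # siblings from the same family treated as clusters
        robust=True)               # robust sandwich variance estimates
cph.print_summary()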
Results
Children who developed T1D in the analysis cohort were more likely to have the HLA-DR3/4, DQB1*0302 genotype and a father or sibling with T1D (Table 1). Being non-Hispanic white was marginally associated with an increased T1D risk. Univariately, daycare attendance and breastfeeding duration were not associated with T1D risk (Table 1). After adjusting for HLA, first degree relative with T1D, ethnicity, and breastfeeding duration, attending daycare during the first two years of life was not associated with the risk of developing T1D (HR: 0.89; CI: 0.54-1.47, P value = 0.64), while each additional month of breastfeeding duration was associated with a 5% decreased risk of developing T1D (HR: 0.95; CI: 0.90-1.00, P value = 0.05). (Table 2 note: HRs and CIs for breastfeeding duration and daycare attendance in the first 2 years were not calculated, as these variables were components of the significant interaction term; the interaction between these variables is depicted in Figure 1.)
We a priori hypothesized that breastfeeding would modify the effect of attending daycare on the risk of developing T1D. Our analyses showed that breastfeeding duration interacted with daycare attendance, where daycare attendance was associated with increased risk of T1D in nonbreastfed children and a decreasing risk of T1D with increasing breastfeeding duration (interaction P value = 0.02) (Table 2). To demonstrate this relationship, we calculated HR estimates and 95% CI for daycare attendance for 0, 3, 6, 9, and 12 months of breastfeeding duration (Figure 1). The highest risk of developing T1D was observed in children who attended daycare and were not breastfed (HR: 1.56; CI: 0.77-3.16), and the lowest risk of T1D was observed in children who attended daycare and were breastfed for 12 months (HR: 0.37; CI: 0.13-1.06).
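The gradient shown in Figure 1 follows directly from the interaction model: on the log scale, the hazard ratio for daycare is linear in breastfeeding duration, HR(daycare | m months) = exp(b_daycare + b_interaction × m). The sketch below back-calculates the implied coefficients from the two reported endpoint HRs (1.56 at 0 months and 0.37 at 12 months) and interpolates the intermediate durations; it is a reconstruction for illustration, not the authors' computation, and it ignores the uncertainty reflected in the confidence intervals.

# Reconstruct the daycare hazard-ratio gradient implied by the reported interaction.
import math

hr_at_0, hr_at_12 = 1.56, 0.37          # reported HRs at 0 and 12 months of breastfeeding
beta_daycare = math.log(hr_at_0)                               # about 0.44
beta_interaction = (math.log(hr_at_12) - beta_daycare) / 12    # about -0.12 per month

for months in (0, 3, 6, 9, 12):
    hr = math.exp(beta_daycare + beta_interaction * months)
    print(f"breastfed {months:2d} months: HR for daycare attendance = {hr:.2f}")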
Discussion
We found that breastfeeding modified the effect of daycare, where daycare attendance was associated with increased risk of T1D in nonbreastfed children and a decreasing risk of T1D with increasing breastfeeding duration. These findings lend support to both the trigger-booster hypothesis and the hygiene hypothesis. The trigger-booster hypothesis argues that the immunologic trigger in the natural history of T1D is an infection, such as an enterovirus infection. This infection then triggers the autoimmune response that progresses towards overt diabetes [14]. The Eurodiab Substudy 2 showed that reported infections early in a child's life, noted in the hospital record, were found to be associated with an increased risk of T1D (i.e., evidence for the trigger-booster hypothesis); however, preschool/daycare attendance used as a proxy to measure total infections in early childhood was found to be inversely associated with diabetes [15], suggestive of the hygiene hypothesis. Our findings of an increased risk of T1D for attending daycare in the absence of breastfeeding support the trigger-booster hypothesis that daycare may be increasing exposure to diabetogenic infections that are triggering the development of autoimmunity. The decreased risk associated with daycare attendance in breastfed children supports the hygiene hypothesis, suggesting that breastfeeding is providing immunological support to fight off diabetogenic infections while daycare provides an environment that stimulates the immune system with nonspecific infections preventing immune responses against self-antigens. These findings suggest that breastfeeding may be required to glean the benefits of the daycare environment. In sum, breastfeeding may provide the immune support to fight off diabetogenic infections, while allowing the low immune stimulation found in daycare environments to prevent the development of autoimmunity and T1D. One limitation to using daycare as a proxy for infections is that it does not account for the effects of specific infections, as some infections have been associated with increased risk of T1D development and this detail is lost in using daycare as a proxy for all infections [16]. Furthermore, our questionnaire data lacked the level of detail to calculate duration or intensity of daycare exposure; therefore, this study could not evaluate a dose-response relationship between amount of time in daycare and risk of developing T1D. A strength of the study is that the data were collected prospectively, increasing the accuracy. However, the small number of children with T1D may limit the inference.
The presence of the interaction between daycare attendance and breastfeeding duration suggests a complex interplay between exposures in the etiology of T1D and may explain, in part, the difficulty in identifying environmental risk factors for the disease. Due to the small number of children with T1D in our analysis cohort, our findings should be confirmed in other populations. Future analyses examining environmental exposures in the risk of T1D should hypothesize and test biologically plausible effect modifications such as the one identified here, in order to more clearly elucidate the etiology of the T1D.
|
v3-fos-license
|
2022-09-28T15:22:59.884Z
|
2022-09-26T00:00:00.000
|
252561176
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s41682-022-00130-3.pdf",
"pdf_hash": "b07e6e76a91369695cfecd2c86e2b053de559540",
"pdf_src": "Springer",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42459",
"s2fieldsofstudy": [
"Political Science",
"Sociology"
],
"sha1": "1fbad4858e1b4fcedafd08c2a15751c6ed15786a",
"year": 2022
}
|
pes2o/s2orc
|
Revisiting the Islam-patriarchy nexus: is religious fundamentalism the central cultural barrier to gender equality?
Is Islam a religion that promotes patriarchy? In the academic debate, there are different assessments. On the one hand, there is the thesis of an elective affinity between Islam and patriarchal values. In Muslim-majority countries and among Muslims, support for patriarchal values is most pronounced. On the other hand, there is the antithesis of Islamic feminism, which shows that a significant proportion of devout Muslims support gender equality. It is therefore wrong to describe Islam as a misogynistic religion. What matters is whether the religion is interpreted in an emancipatory manner. This contribution offers a synthesis and argues that religious fundamentalism provides a more valid explanation for patriarchal values than simplistic references to Islam. The 6th and 7th waves of the World Values Survey were analyzed to test this research-guiding hypothesis. Multilevel analyses show that value differences between Muslims and non-Muslims and between Muslim-majority societies and societies with another majority religion turn out to be small or even insignificant when controlling for religious fundamentalism. Fundamentalism is the central driver of patriarchal values and generates uniform effects. At the individual-level, fundamentalism makes both Muslims and non-Muslims more susceptible to patriarchal values. Moreover, Muslims and non-Muslims adapt to the conformity pressures of their societies, resulting in egalitarian as well as patriarchal values, depending on the prevalence of fundamentalism. The high support for patriarchal values in Muslim-majority countries has a simple reason: Religious fundamentalism is by no means a marginal phenomenon in these societies, but rather the norm.
and the Pentagon. And this, in turn, is an important reason why criticism of the deficient situation of women's human rights in the Islamic world defies delegitimization as Islamophobic bigotry (Imhoff and Reicker 2012;Tezcan 2015). Where Islamists succeeded in consolidating their power (e.g., Afghanistan, Iran, Sudan), the consequences for women are indeed disastrous, and the list of injustices is a long one. Islamic fundamentalists demand that women obey their husbands, they tolerate domestic violence, and they discriminate against women in legal proceedings. Divorce, for example, is a privilege of men. In addition, there is the compulsion to wear the hijab and the institutionalization of strict gender segregation in public spaces. Self-appointed vicegerents of Allah and the repression apparatus ensure submission to the rules of the game, and even minor violations of these rules are likely to result in draconian punishments (Schröter 2019, p. 72-78). Obviously, these are the most extreme examples, but they are part of a more generic empirical pattern (Koopmans 2020, p. 103-104). The findings of the latest Global Gender Gap Report (World Economic Forum 2022), which tracks gender inequalities along four key dimensions (economic participation and opportunity, educational attainment, health and survival, and political empowerment), underscore the problematic situation of women in much of the Islamic world. The Middle East and North Africa are the world regions with the most severe gender inequalities, and Muslim-majority societies (including Saudi Arabia, Iran, Pakistan, and Afghanistan) are significantly overrepresented among the ten worst-performing countries in this ranking (World Economic Forum 2022, pp. 7, 10). To cut a long story short: The stereotype of the 'oppressed Muslim woman' is a flawed collectivizing extrapolation-but at the same time it does capture a chunk of reality.
In studies that explore the prevalence and drivers of patriarchal values, there are findings that could be cited as evidence for both of these opposing assessments. Based on the World Values Survey, which is one of the largest and most comprehensive surveys in the social sciences, Inglehart and Norris (2003a, b) substantiate an 'elective affinity between Islam and patriarchal values'. Rejection of equal status for women in life domains such as education, economics, and politics is found to be most prevalent in Muslim-majority societies and among Muslims (see also Alexander and Welzel 2011;Lussier and Fish 2016). Studies on 'Islamic feminism' (e.g., Glas et al. 2018;Glas and Alexander 2020;Glas and Spierings 2019;Masoud et al. 2016) criticize the narratives and framings of the aforementioned inquiries. The rebuke is that the aforementioned studies tend to essentialize Islam as a homogeneous and patriarchal entity. Empirically, this criticism is based on findings that challenge the stereotype that all 'Muslims are misogynists' to highlight another side of the argument. An analysis of survey data reveals that one in four Muslim Arabs are devout supporters of gender equality. Thus, and even if this finding does not show majoritarian support for gender equality, a strong religiosity and egalitarian values are not fully at odds with each other. Islam may therefore turn out to be an 'unlikely ally' in the struggles for women's empowerment once emancipatory interpretations of the Islamic faith gain traction (Glas and Alexander 2020, p. 450;Glas and Spierings 2019, p. 293). 3 The findings of these two lines of research allow for some leeway for elaborating a synthesis that could account for the parallel existence of higher susceptibility to patriarchal values in the Islamic world and among Muslims, as well as the non-negligible number of Muslims that are highly religious and in favor of gender equality. Since there are both supporters and opponents of gender equality among devout Muslims, at least one conclusion appears to be apt: The driving force for susceptibility to patriarchal values does not seem to arise primarily from an individual's religious affiliation or the strength of its religiosity. Hence, it is more useful to engage with religious manifestations that hinder emancipatory interpretations of religion and this contribution argues that it is religious fundamentalism that impedes such progressive readings of religion. An important inspiration for this research-guiding hypothesis is the seminal work of Martin Riesebrodt (1998), in which he portrayed religious fundamentalism as radical patriarchal protest movements. According to him, religious fundamentalism is a reaction to intensified modernization and an attempt to preserve or revitalize patriarchal structures to the greatest possible extent (Riesebrodt 1998, pp. 204-206). This contribution draws on this line of reasoning for a cross-cultural analysis of patriarchal values. To this end, I analyzed the sixth and seventh waves of the World Values Survey to revisit the Islam-Patriarchy nexus. The central research question is: Does religious fundamentalism yield a more plausible explanation for susceptibility to patriarchal values than do references to Islam and Muslims? 2 Theoretical framework 2.1 The islam-patriarchy-nexus: more support for patriarchal values among muslims and in muslim-majority societies?
As already mentioned, Inglehart and Norris (2003a, b) delivered some of the earliest empirical clues to an 'elective affinity between Islam and patriarchal values'. Their seminal study is considered a milestone in the sociology of religion since it turned hitherto unexamined speculations about the effects of religiosity and various religious belief systems on patriarchal values into the subject of sound empirical research (Pickel 2019). The findings show that religion operates as a conservative social force, given that religious individuals are, on average, more likely to deny women equal rights in domains such as education, economics and politics (Inglehart and Norris 2003a, p. 67). However, their empirical results also point out that the specific religious belief system is more important for attitudes toward gender equality than the strength of an individual's religiosity. The sharpest value gap occurs between Christians and nondenominational individuals in affluent postindustrial societies and Muslims in agrarian societies (Inglehart and Norris 2003a, pp. 67-68). These divergences in value orientations are exacerbated by the different trajectories of value change in Western societies and in the Islamic world. While notions of gender roles in Western societies shifted substantially between generations and point into a more egalitarian direction, there are hardly any differences between the youngest cohort, their parents and grandparents in Muslim-majority societies (Inglehart and Norris 2003b, p. 69). Inglehart and Norris (2003b) interpreted these empirical patterns through the lens of the 'clash of civilizations' hypothesis (Huntington 1992), and argued that this conflict is less about (paying lip service to) democracy than about women's equal rights and (her) sexual freedoms. In any case, the authors contend that Islam's religious heritage is a social barrier to women's equality (Inglehart and Norris 2003a, p. 71). Several follow-up studies replicated these findings using more sophisticated multilevel analyses. The results are straightforward and confirm that patriarchal values are more prevalent among Muslims at the individual level and in societies in which Islam is the predominant religion (Alexander and Welzel 2011; Lussier and Fish 2016; Norris 2014). In all cited studies, those empirical patterns are attributed to religious socialization effects and some unique aspects of Islam.
Footnote 3: Such empirical findings are in line with the concept of multiple modernities, according to which features of modernity (e.g. gender equality) are achieved not at the expense but through a pragmatic adaptation of tradition (Eisenstadt 2002). In this perspective, the headscarf is thus not a symbol of oppression, but a way of combining conservative gender role requirements of families and women's own desire for a religious lifestyle with autonomous participation in public life (Göle 1995). The sociologist Nilüfer Göle (2000, p. 101) even claims that there might be an 'empowerment through Islamism' within this context. The empirical findings of this contribution contradict such interpretations. Gender equality and the Islamic faith can only be reconciled if religious fundamentalism or Islamism is losing its popularity. But see Tezcan (2019) for a critical acknowledgement of Nilüfer Göle's research.
Lussier and Fish (2016) argue that exposure to Islamic norms promotes the internalization of patriarchal values because the sacred scriptures of Islam (Quran and Hadith) were written at a time when unequal treatment of women was common practice. As a result, there are some passages in the holy scriptures of Islam that imply unequal treatment of women and that are still instrumentalized today as a source of legitimacy for misogynistic practices. Of course, this is not a unique feature of Islam, and the authors do not ignore the fact that there are many Muslims that interpret their religion in an egalitarian fashion. However, they argue that Islam's distinctiveness resides in its strong tradition of jurisprudence, in which the prestige of an author's exegesis is determined by his temporal proximity to the Prophet Mohammed. As a consequence, patriarchal ideas that emerged centuries ago continue to shape the thinking of Islamic ulama today, a situation that in turn impacts the values of ordinary Muslims as they are usually exposed to conservative spiritual leaders upon whom they rely for the interpretation of their religion (Lussier and Fish 2016, p. 32-33). 4 Alexander and Welzel (2011) point out that mosques play the role of an important socializing institution in this context. The transmission of patriarchal values is favored by participation in religious ceremonies, since worshippers are repeatedly exposed to the religious norms that are propagated in this setting.
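A minimal sketch of the kind of two-level random-intercept model these follow-up studies estimate (individual respondents nested in countries), here written with statsmodels' MixedLM; the data file, index construction and column names are purely hypothetical stand-ins for a pooled World Values Survey extract, so this outlines the design rather than reproducing any published analysis.

# Two-level random-intercept model: a patriarchal-values index regressed on
# individual-level religion and a country-level majority-religion indicator.
import pandas as pd
import statsmodels.formula.api as smf

wvs = pd.read_csv("wvs_pooled.csv")   # hypothetical pooled individual-level extract

model = smf.mixedlm(
    "patriarchal_values ~ muslim + female + age + education + muslim_majority_country",
    data=wvs,
    groups=wvs["country"],            # random intercept for each country
)
result = model.fit()
print(result.summary())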
H1 Compared to non-Muslims (nondenominational and members of other religions), Muslims display stronger support for patriarchal values
One of the most important findings, however, is that all people, Muslims and non-Muslims alike (for all the observable differences in their values), tend to align to the predominant norms of their social environment. When it comes to the internalization of patriarchal values, it is therefore less important whether or not an individual is Muslim, but whether Islam is the predominant religion in society (Alexander and Welzel 2011, p. 257). Hence, it follows that methodological individualism is in itself incapable of providing a comprehensive account of the driving forces of patriarchal values. After all, the likelihood of internalizing patriarchal norms and values depends to a large extent on an individual's embeddedness in societies in which unequal gender relations dominate daily life (Lussier and Fish 2016, p. 33). Islam tends to play a problematic role in this context, because it has shaped societal beliefs, attitudes, and norms that seem to legitimize rather than undermine an unequal division of labor between men and women in the household and in public spheres (Norris 2014, p. 258).
H2 Compared to societies with another predominant religion, there is stronger support for patriarchal values in Muslim-majority societies
Structural factors beyond Islam: lower levels of human empowerment, rentier economies, and kinship ties
Of course, any analysis must consider a variety of structural factors before proclaiming Islam to be a central pillar of patriarchal values. The gradual liberalization of attitudes about appropriate gender roles in Western societies, for example, is not explainable without changing material conditions and societal modernization processes. In the transition from an agrarian to an industrial society, women became part of the labor market as paid workers. This process was accompanied by increased literacy, more education, and a drastic decline in fertility. The associated changes enabled many women to advance into occupations with higher economic status as societies transitioned to postindustrial settings, which in turn opened the way for more women to enter politics. In most societies with viable and durable democratic institutions, women's representation in parliaments grew substantially in recent decades, and many women head ministries or run government affairs. Although inequalities persist, those largescale changes left their mark on the attitudes of ordinary citizens (Inglehart and Norris 2003b, p. 70). These are the reasons why most studies account for economic development (e.g., women's share of the labor force, wealth) and the presence of democratic institutions to explain support for egalitarian gender roles (Alexander and Welzel 2011, p. 266;Norris 2014, p. 254;Lussier and Fish, pp. 48-49). From a more holistic perspective, a society's rising level of prosperity, its turn toward egalitarian values, and the emergence or consolidation of democratic institutions are components of human empowerment that mutually reinforce each other (Welzel 2013, p. 44). Conversely, low levels of human development (e.g., poverty, low education, and low life expectancy) and socialization under authoritarian regimes are likely to boost support for patriarchal values.
H3 The lower a society's level of human empowerment (lower level of human development and lack of democratic institutions), the higher the support for patriarchal values
Moreover, it is important to scrutinize whether it is perhaps other factors that prevail in Islamic societies and that drive support for patriarchal values, but which, by their very nature, are unrelated to Islam itself (Alexander and Welzel 2011, p. 250). One such factor that is given importance within this context relates to the question of whether the societies' prosperity is based on rent-seeking. The key reference point for this debate is Ross's (2008) empirical findings that women's inequality in the Middle East is primarily based on economies that derive their wealth from oil and gas exports. Rent-seeking economies, according to the central argument, tend to invest less into the service and agricultural sectors, which reduces women's participation on the labor market and translates into a lower representation of women in parliaments. This situation then in turn compounds the existing social inequalities between men and women (Ross 2008, p. 111). In other words, patriarchal values prevail in Islamic societies because their economic structures tend to exclude women, and not because they are inhabited by Muslims (Alexander and Welzel 2011, p. 250).
H4 Citizens in rentier economies display stronger susceptibilities to patriarchal values
Since studies either failed to substantiate this assumption (Alexander and Welzel 2011; Norris 2014) or at best yielded mixed results (Lussier and Fish 2016), it could be argued that rentier economies are not the crux when it comes to the drivers of patriarchal values. Charrad (2009, p. 548) objects that patriarchal structures have been in place for centuries and preceded the discovery of oil or gas deposits. Moreover, patriarchal values likewise prevail in Islamic societies that are not classified as rentier states. But that does not make Islam the culprit. In everyday life, patriarchal norms are imposed on women through close-knit tribal and kinship networks. These entail traditional role expectations and notions of chastity, which in extreme cases imply the surveillance of women. As a result, women's activities outside the domestic sphere face daunting limitations (Charrad 2009, p. 549). Quite obviously, organizational and structural principles of family systems offer important insights into society's support of egalitarian principles (Todd 1985, p. 7). A study by Dilli (2015) finds that family systems that emerged in agrarian societies continue to impact values of contemporary societies. It is primarily family systems characterized by patrilineal structures and that tolerate endogamy as well as polygamy that provide a social bedrock for patriarchal values. And since the emergence of these family structures preceded the advent of the Islamic faith, they offer a more valid explanation for the unequal treatment of women than simplistic references to Islam (Dilli 2015, pp. 18, 24). Thus, it cannot be ruled out that the thesis of an 'elective affinity between Islam and patriarchal values' suffers from a bias of omitted variables.
H5 The more intense the kinship ties within a society, the stronger the support for patriarchal values
Questioning the Islam-patriarchy nexus: Islamic feminism and the compatibility of Islamic religiosity and support for gender equality
Studies of 'Islamic feminism' (e.g., Glas et al. 2018; Glas and Alexander 2020; Glas and Spierings 2019; Masoud et al. 2016) convey a counter-narrative to the thesis of an 'elective affinity between Islam and patriarchal values'. To avoid misunderstanding at this point: It is not radically questioned that Muslims are on average more susceptible to patriarchal values than non-Muslims (Glas and Alexander 2020, p. 438). However, this result is interpreted differently. Islam is obviously instrumentalized for the unequal treatment of women (Masoud et al. 2016, p. 1562), and the all-male club of the ulema has its share in this setting, as its mainstream propagates a patriarchal version of Islam (Glas and Spierings 2019, p. 289). All of these studies, however, share a common objection: It is by no means all Muslims that submit to a patriarchal interpretation of Islam in a passive manner. Hence, it is wrong to essentialize Islam as a patriarchal entity (Glas et al. 2018, p. 687; Glas and Spierings 2019, p. 284). An analysis of the Arab Barometer and the World Values Survey shows that deeply religious supporters of gender equality are by no means a marginal group among Arab Muslims. Based on latent class analysis, it is shown that about one in four respondents can be classified as 'Islamic feminists' (Glas and Alexander 2020, p. 450; Glas and Spierings 2019, p. 293). Masoud et al. (2016) likewise provide evidence that progressive interpretations of the Quran are able to mitigate patriarchal attitudes toward the public role of women. Within the framework of a large survey experiment in Egypt, a subset of respondents was exposed to an argument from the Quran that advocates the inclusion of women in the political arena. This group was significantly more likely to support a political leadership role for women than the group of respondents that was exposed to non-religious arguments for women's suitability for political positions (Masoud et al. 2016, pp. 1575, 1567). Two important conclusions can be drawn from these findings. First, the role of Islam is ambivalent. Islamic scriptures are all too often instrumentalized in order to discriminate against women, and yet they can also be invoked to demand women's equality (Masoud et al. 2016, p. 1590). Second, religious socialization is not necessarily accompanied by a passive adoption of patriarchal norms. Many Muslims interpret their own religion in ways that deviate from the prevailing patriarchal mainstream (Glas et al. 2018, p. 687).
H6 In comparison to non-Muslims, being a Muslim and strongly religious does not amplify support for patriarchal values
'Islamic feminism' is certainly more than a scattered anomaly (Glas and Spierings 2019, p. 299), yet it is important to keep in mind that patriarchal values are still commonplace, since unequal treatment of women is endorsed by over 70% of Arab Muslims (Glas and Alexander 2020, p. 450). The persistence of these patriarchal norms is also a product of the androcentrism that prevails in all Islamic schools of law. Men, who hold the reins within the ulama, are the primary beneficiaries of patriarchal structures and tend to resist progressive reinterpretations of Islam (Engineer 2004, pp. 211-212; Lussier and Fish 2016, p. 35). Given that it is consequently women who spearhead a non-patriarchal exegesis of the Qur'an, the premise of a gendered perspective on the Islam-patriarchy nexus seems apt. And there is indeed evidence that women are overrepresented within the ranks of 'Islamic feminists' (Glas and Alexander 2020, p. 450), while, conversely, support for patriarchal norms is particularly pronounced among Muslim men (Alexander and Welzel 2011, p. 263).
H7 In comparison to non-Muslims, being Muslim and a man amplifies support for patriarchal values
Bringing religious fundamentalism into the equation: on its defining features and significance for patriarchal values
Since there are both defenders and opponents of unequal gender relations among devout Muslims, any reference to Islam as the driving force of patriarchy seems too simplistic. This brings other questions to the foreground. To what extent do these two groups differ in their interpretation of their religion? And why is the combination of strong religiosity and support for gender equality (still) a minority position in the Arab world? The punchline of this contribution is that religious fundamentalism plays the key role in answering these two questions. There are also several (implicit) hints in the cited literature that lend plausibility to this research-guiding hypothesis. Masoud et al. (2016, p. 1578), for example, pinpoint the existence of political forces in Egyptian society that oppose an equal role for women in politics. Voters of the (outlawed) Freedom and Justice Party, a party that made its name as the parliamentary arm of the fundamentalist Muslim Brotherhood, were more likely to oppose women in political leadership positions than non-voters and the sympathizers of other parties. A finding by Glas et al. (2018) points in a similar direction. At least it matches the problematized role of religious fundamentalism that textualist religiosity severely limits the space for emancipatory reinterpretations of religious scriptures (Glas et al. 2018, p. 701). Lussier and Fish (2016, p. 36) even elaborate explicitly on the hypothesis that a higher presence of fundamentalist groups in a society is associated with a higher level of support for patriarchal values. However, the study's empirical analysis makes no attempt to test whether this is indeed the case. All in all, religious fundamentalism is by no means a blind spot in the studies cited so far, but it has certainly received too little attention compared to its central role vis-à-vis patriarchy.
Arguing for a patriarchy-promoting effect of religious fundamentalism requires, as a first step, a description of its main features, and there are two reasons why this is not a simple undertaking: First, the term fundamentalism arose in the early 20th century as a self-description of ultraconservative and militant movements within American Protestantism. Some critical voices therefore claim that the Protestant origin of the terminology disallows its application to movements in other religious traditions (Emerson and Hartman 2006, pp. 130-131). Moreover, the term religious fundamentalism is suspected of being a battle cry. In everyday practice, it is often misused to insult people for taking their religion seriously (Riesebrodt 2000, p. 51). The result, according to critics, is a demonization of religious groups and an obscure term that makes nuanced analysis virtually impossible (Schiffauer 1999). Neither position sounds convincing to me. The first position clings to a provincial outlook and remains indifferent to religious revival movements beyond Protestantism. Given that religious fundamentalism is a reaction to secularization processes, there is indeed evidence that religious revival movements occur in all world religions (e.g., the Abrahamic religions, Buddhism, and Hinduism) and that they also share family resemblances (Brekke 2012). The second position is also ill-conceived. The pejorative and instrumental application of terms is by no means peculiar to religious fundamentalism, and it is an argument for, not against, a scientific specification of the term (Riesebrodt 2000, pp. 51-52).
This contribution focuses on the ideal type of politicized, legalistic-literalist fundamentalism, since Riesebrodt (2000, p. 96) attributes to this variant of fundamentalism a strongly regressive stance toward women's equality. According to his reading, religious fundamentalism is primarily a radical rejection of the value relativism that is one of the most important signatures of modernity (Riesebrodt 2000, p. 93).
Fundamentalists are nevertheless not medieval forces, but both a reaction to modernity and a product of modernity, since their identity develops in opposition to the accompanying trends of modernity (e.g., egalitarianism, individualism, and secularism). This rejection of modernity is not to say, however, that religious fundamentalists forego exploiting the achievements of modernity for their own ends. Thus, they rely on the most cutting-edge means of communication to propagate their messages (Riesebrodt 2000, p. 50). But this hardly changes the radical nature of their positions. Fundamentalists claim exclusive entitlement to the truth and ascribe universal validity to their beliefs in a supremacist manner. There is no inclination to compromise on these issues, and reinterpretation or adaptation of these principles to the circumstances of the time is rejected. Fundamentalists instead demand that these principles be applied literally and without revision (Riesebrodt 2000, pp. 89-90). In addition, there is the political ambition to make these rules the standard for everyone. Fundamentalists strive for a (revolutionary) transformation of political realities. Politics is to be subordinated to religion in order to achieve a restoration or maintenance of religious rules (Riesebrodt 2000, pp. 89-90). Riesebrodt's (2000) descriptions of the common features of religious fundamentalist movements are in line with the definition put forward for discussion in this special issue by Pollack, Demmrich and Müller (2022). Religious fundamentalism, in this perspective, entails four central components: 1. the claim to exclusive truth, 2. the claim to superiority over all other positions, 3. the claim to universal validity of exclusive truth, and 4. the demand for restoration of the unadulterated, submerged past through radical change of the present.
But why does religious fundamentalism encounter such great demand, and why do patriarchal values play such an important role for fundamentalism? Riesebrodt (2000, p. 92) traced the rise of religious revitalization movements to experiences of crisis that occur in early and intensified phases of modernization. Political centralization, bureaucratization, commercialization, and secularization usually go hand in hand with the marginalization of broad segments of the population and trigger alienation in traditionalist cultural milieus. Whenever the present is experienced as a source of disillusionment, it becomes fashionable to romanticize the past, and it is precisely at this point that fundamentalism generates its demand. Religious fundamentalism enjoys broad popularity because it formulates a critique of society, a diagnosis of its (alleged) causes and remedies for overcoming the crisis (Riesebrodt 2000, p. 53). In this context, fundamentalism's critique of society is directed against modernity, which is equated with moral decay and an attack on the religious identity of its members. The central discourses of fundamentalists therefore usually revolve around the breakdown of families, divorce, adultery, prostitution, homosexuality, pornography, venereal diseases, alcoholism and gambling (Riesebrodt 2000, pp. 86-87). Fundamentalists also name the alleged 'culprits' of these trends. Depending on the context, these might be foreign powers, the political elites, the economic and cultural beneficiaries of the transformation processes, intellectuals, apostates or members of other religions (Riesebrodt 2000, pp. 87-88). Fundamentalists see themselves in an apocalyptic struggle with such groups and refuse to compromise because, in their view, this is tantamount to the destruction of their most cherished values. The formula for overcoming all problems, on the other hand, is quite simple: The establishment of a political order in which the sacred rules will be binding for all (Riesebrodt 2000, p. 89). Patriarchy occupies a central role in this idealized political order, as it is seen as a remedy for crisis-ridden modernity. To make a long story short: Fundamentalism proclaims a (God-given) dualism of men and women. Within the idealized division of labor, women are assigned the role of subordinates. Their task is to bear and raise children, and their natural domain is the domestic sphere. The role of men is conceived in a complementary fashion: Men are not only fathers, but also the breadwinners and patriarchal guardians of the family (Riesebrodt 2000, p. 88). At the end of the day, Riesebrodt (1998, 2000) leaves no doubt that the affirmation of patriarchy constitutes the cross-cultural umbrella of fundamentalist movements. Studies that examine the sources of out-group hostility, as well as sexist and misogynistic attitudes, confirm this hypothesis. Koopmans (2015) shows that religious fundamentalism is the strongest predictor of hostility toward gays, Jews, Muslims (among Christian respondents), and the Western world (among Muslim respondents), while religiosity does not correlate at all with out-group hostility among Christians and only weakly among Muslim respondents. A study by Kanol (2021) corroborated this finding using a very heterogeneous country sample. Fundamentalist interpretations of religion generate significant effects on hostile attitudes toward religious groups and atheists.
This empirical pattern is observed both within and outside the Western world and among members of the Abrahamic religions. There are entirely comparable findings about discriminatory attitudes toward women: Moaddel (2020) demonstrates for several Muslim-majority societies that religious fundamentalism is associated with a rejection of gender equality. Fundamentalists tend to deny women an equal role in important areas of public life, they insist on female obedience, and they are in favor of polygamy (Moaddel 2020, pp. 65-66, 135). This regressive effect of fundamentalism can also be observed in Western societies.
Hannover et al. (2018) found that Muslims in Germany are more likely to describe themselves as religious when compared to Christians and non-denominational individuals, and more likely to embrace fundamentalist interpretations of religion. In addition, it is observed that Muslim men are more likely to display hostile sexism, meaning that they are more likely to vilify women who deviate from traditional gender roles. Ultimately, however, the results are similar to the findings of the studies described earlier. Once religious fundamentalism is controlled for, religiosity does not turn out to be an influential predictor of discriminatory attitudes toward women, nor are the observed differences between the religious groups very salient.
H8

The more fundamentalist an individual's interpretation of religion, the stronger the support for patriarchal values

These findings match Riesebrodt's (2000) critique of Samuel P. Huntington's (1992) 'clash of civilizations'. In his view, the analytical substance of this thesis (which also inspired some studies on an 'elective affinity between Islam and patriarchal values') suffers from an exaggerated assumption of homogeneity within religious traditions and groups. One argument against this assumption is that there are devout people in all religious communities who are by no means susceptible to fundamentalist worldviews. Moreover, the hostility of fundamentalists is directed less against foreign powers than against elites, religious minorities and apostates in their own country (Riesebrodt 2000, pp. 29, 87). The result is rampant domestic polarization and culture wars, which arise even in societies where fundamentalist forces managed to seize political power. Despite a well-armed repressive apparatus and state propaganda, the theocratic regime in Iran, for example, has never succeeded in convincing the entire population to adopt its ideal of family and its notions of 'appropriate' gender roles (Riesebrodt 2000, p. 137). Conversely, fundamentalist milieus across civilizations and religious groups are much more likely to display affinities and similarities with one another than to share common ground with their non-fundamentalist fellow believers (Riesebrodt 2000, p. 31).
H9

Being Muslim and leaning toward a fundamentalist interpretation of religion amplifies support for patriarchal values to the same extent as among non-Muslims

Martin Riesebrodt (2000, pp. 136-137), however, does not downplay the importance of cross-societal divergences. This is because modernization processes and the enormous expansion of women's participation in higher education and their integration into the labor market have also left their mark on fundamentalist milieus. Although patriarchal family ideals continue to be ideologically cherished in evangelical circles in the United States, dual-earner households and working women are the prevailing norm today. This in turn shapes their socio-moral attitudes. Especially among younger cohorts of evangelicals, attitudes toward women are becoming more egalitarian and more aligned with the mainstream of American society (Riesebrodt 2000, p. 137). It is therefore imperative to consider the social climate that surrounds individuals and that exerts intense conformity pressure (Alexander and Welzel 2011, p. 272). Strong support for patriarchal values is most likely to be found in societies where fundamentalist interpretations of religion display a pronounced societal prevalence (Lussier and Fish 2016, p. 36).
H10
The more religious fundamentalism dominates the societal climate, the stronger the support for patriarchal values

Given that (younger generations of) Muslims in Western societies tend to align with the mainstream of their social environment when it comes to support for gender equality (Alexander and Welzel 2011; Norris and Inglehart 2012), there is no 'Muslim distinctiveness' to be anticipated on this front.
H11
Being Muslim and being exposed to a societal climate in which fundamentalist interpretations of religion prevail amplifies support for patriarchal values to the same extent as for non-Muslims
Sample description
The central data set of this contribution is the World Values Survey (Haerpfer et al. 2021). The analysis is based upon the sixth and seventh waves of the World Values Survey and thus on population surveys conducted in the last decade (2010-2020). When the populations of the participating countries were surveyed in both waves, I used the most recent data. The combination of the two waves allows for an analysis of 76 highly diverse societies.
The sample includes the most populous nations on all continents (e.g., the United States, Brazil, Germany, Nigeria, India, and China) and the full range of varying levels of human development and political regimes (from closed autocracies to liberal democracies). The sample also encompasses nations whose majority populations cover all the major world religions or religion-like cosmologies (including Christianity in all its versions, Buddhism, Hinduism, and Confucianism). One exception is Israel, but the sample covers Jewish respondents living as minorities in other countries. A very broad spectrum of Muslim-majority societies is likewise represented within the sample. The sample includes countries from North Africa (Morocco, Tunisia, Libya and Egypt), the Middle East (Turkey, Iraq, Iran), the Arabian Peninsula (Yemen), the Gulf region (Qatar, Kuwait), the Caucasus (Azerbaijan), Central Asia (e.g., Kazakhstan, Tajikistan, Kyrgyzstan) and Southeast Asia (Bangladesh, Malaysia and Indonesia). In addition, there are countries where Muslims account for a substantial share of the population (e.g., India and Nigeria) and countries in which they live as a religious minority (e.g., Germany and Sweden).
Dependent variable: patriarchal values
The dependent variable in this study is patriarchal values. In line with Inglehart and Norris (2003a, b) and Alexander and Welzel (2011), I measure patriarchal values (Cronbach's alpha = 0.665) in terms of affirmative responses to the following three statements: 'University is more important for a boy than for a girl' (D060), 'Men should have more right to a job than women' (C001), and 'Men are better political leaders than women' (D059).
The scores on these items, and on all other variables of interest mentioned in the following sections, were recoded to a scale ranging from 0 to 1.0, whereby in this case a score of 0 indicates the absence of and a score of 1.0 the strongest support for patriarchal orientations. Intermediate positions between the extremes of the scale are represented by decimal values between 0 and 1.0. For all individual-level items that provide rank-ordered response options (e.g., scales of 1-4, 1-7, and 1-10), values above the midpoint of the scale (0.50) indicate a tendency to agree with the statements. When the country means of these scales are aggregated, they allow for the same interpretation as percentage averages (Welzel 2013, pp. 63-64). To calculate the patriarchal values index, I added the scores of the three items and divided the sum by three.
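To make this construction concrete, the following minimal sketch shows how the 0-1.0 rescaling and the three-item index could be computed. It is written in Python with hypothetical column names and assumed item codings; it is not the original analysis code.

```python
import pandas as pd

def rescale_01(series: pd.Series, low: float, high: float) -> pd.Series:
    """Recode a rank-ordered item to the 0-1.0 range used throughout the analysis."""
    return (series - low) / (high - low)

# Hypothetical respondent data; column names and codings are illustrative only.
# Depending on the original coding, items may first need to be reversed so that
# higher values always indicate stronger agreement with the patriarchal statement.
df = pd.DataFrame({
    "boys_university": [1, 2, 3, 4],  # 'University is more important for a boy ...'
    "men_jobs":        [1, 3, 2, 3],  # 'Men should have more right to a job ...'
    "men_leaders":     [4, 3, 2, 1],  # 'Men are better political leaders ...'
})

# Rescale each item using its theoretical scale endpoints (assumed here to be 1-4, 1-3, 1-4).
rescaled = pd.DataFrame({
    "boys_university": rescale_01(df["boys_university"], 1, 4),
    "men_jobs":        rescale_01(df["men_jobs"], 1, 3),
    "men_leaders":     rescale_01(df["men_leaders"], 1, 4),
})

# Patriarchal values index: sum of the three rescaled items divided by three.
df["patriarchal_values"] = rescaled.sum(axis=1) / 3
print(df)
```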
Individual-level independent variables: religious affiliation, religiosity, religious fundamentalism, and gender
The thesis of an 'elective affinity between Islam and patriarchal values' postulates that Muslims are more inclined to ascribe a subordinate status to women. Respondents participating in the World Values Survey were asked about their religious affiliation, and the variable F025 allows a distinction between the adherents of the world's major religions. The studies that address the Islam-Patriarchy Nexus do so by operating with a simple distinction between Muslims and non-Muslims (e.g., Alexander and Welzel 2011;Lussier and Fish 2016;Norris 2014), and I follow this practice. By implication, non-denominational individuals and members of non-Islamic religions serve as the reference category in the empirical analysis. This contribution, however, contends that it is not so much the self-identification as Muslim, but a fundamentalist reading of religion that gives rise to the support of patriarchal values. For this hypothesis to be valid, it must first be ensured that religiosity and religious fundamentalism constitute two separable components. For a detailed account of an individual's religiosity, I use the respondents' self-description as a religious person, the importance they attribute to religion in their own lives, and statements about their religious behavior. More specifically, I use the following items (Cronbach's alpha = 0.803): 'Religious person' (F034), 'Important in life: Religion' (A006), 'How often do you attend religious services' (F028), 'How often do you pray' (F028B) and 'Active membership in a church or religious organization' (A098).
Compared to religiosity, there is a much smaller number of items to tap into fundamentalist interpretations of religion (Cronbach's alpha = 0.706): 'The only acceptable religion is my religion' (F203), 'Whenever science and religion conflict, religion is always right' (F202), and 'Democracy: Religious authorities ultimately interpret the laws' (E225) (see Koopmans 2020, p. 37; 2021). While more items are obviously needed for a more detailed measurement of fundamentalism, it is still possible to capture the main components of religious fundamentalism (Pollack, Demmrich, and Müller 2022) with the available instruments. The first item clearly involves an exclusive claim to truth for one's own religion. Moreover, the second item allows respondents to assign universal validity to their religious convictions. They thus place religion above science, even though science has held a de facto monopoly on knowledge since the advent of modernity (Habermas 2003, p. 252). The third item captures fundamentalists' aspiration to subordinate politics to religion. Respondents at least express sympathy for religious leaders who, in their view, are supposed to possess 'the ultimate authority' over the interpretation of laws. An analysis of the dimensionality of these items yielded two principal components with eigenvalues exceeding 1.0 (see Table 1). Keeping in mind that not every religious person adheres to a fundamentalist interpretation of religion, but that fundamentalists conversely tend to be religious, there is sufficient reason to assume that the components or factors display a strong correlation.
Based on this line of reasoning, and to simplify the interpretation of the factor structure, I employed the promax rotation procedure, which belongs to the family of oblique techniques. When a loading criterion above 0.50 is used for interpreting the components, it seems empirically reasonable to distinguish between religiosity and religious fundamentalism. This is not to gloss over the cross-loading of individual items. Fundamentalists and religious people share one (not very surprising) common trait: Both attribute an important role to religion in their lives. And yet, the items on religiosity load particularly strongly on the first component, whereas the three fundamentalism items load on the second component. For the empirical analysis, I added the scores of the items capturing religiosity and divided them by five. The same procedure is applied to the items related to religious fundamentalism. The respective scores were added accordingly and then divided by three. 5

5 The following procedure was applied to all indices at the individual level: If respondents opted not to provide an answer to only a single item, their scores on the indices were constructed using the remaining items. Furthermore, I relied upon the items 'Religious person', 'Important in life: religion' and 'Active membership in a church or religious organization' for both Kuwait's and Qatar's religiosity scale. The goal of this procedure is to avoid missing values at the individual level.
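A minimal sketch of the index construction, including the one-missing-item rule from footnote 5, is given below. The column names are hypothetical, and the commented-out dimensionality check relies on the third-party factor_analyzer package as an assumed stand-in; the original analysis was not carried out in Python.

```python
import numpy as np
import pandas as pd

def build_index(items: pd.DataFrame, max_missing: int = 1) -> pd.Series:
    """Average a set of 0-1.0 items into an additive index.

    Following footnote 5, respondents who skipped no more than one item keep a
    score (computed from the remaining items); respondents with more missing
    answers receive a missing value on the index.
    """
    n_missing = items.isna().sum(axis=1)
    index = items.mean(axis=1, skipna=True)  # mean of the answered items
    return index.where(n_missing <= max_missing, np.nan)

# Hypothetical, already rescaled (0-1.0) item columns.
religiosity_items = ["religious_person", "importance_religion", "attendance",
                     "prayer", "active_member"]
fundamentalism_items = ["only_my_religion_true", "religion_over_science",
                        "religious_authorities_interpret_laws"]

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.random((6, 8)), columns=religiosity_items + fundamentalism_items)
df.iloc[0, 2] = np.nan      # one skipped item: the index is still computed
df.iloc[1, 0:3] = np.nan    # several skipped items: the index is set to missing

df["religiosity"] = build_index(df[religiosity_items])
df["fundamentalism"] = build_index(df[fundamentalism_items])
print(df[["religiosity", "fundamentalism"]])

# The dimensionality check with promax rotation could be run with the (assumed)
# third-party factor_analyzer package, e.g.:
#   from factor_analyzer import FactorAnalyzer
#   fa = FactorAnalyzer(n_factors=2, rotation="promax")
#   fa.fit(df[religiosity_items + fundamentalism_items].dropna())
#   print(fa.loadings_)
```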
Another hypothesis to be tested is that Muslim men exhibit a higher susceptibility to patriarchal values. To test such gendered effects, I draw upon the self-reported sex (X001) of respondents, with women being the reference category in the empirical analysis.
Individual-level control variables
Furthermore, I include several control variables in the analysis. All of them relate to the sociodemographic background of the respondents. One factor of interest is the respondents' marital (X007) and employment status (X028). The reference categories with respect to these two variables are unmarried persons and persons who are neither self-employed nor working in part-time or full-time jobs. Other variables of interest relate to educational resources (X025R) and the age of respondents (X003R2). The analysis differentiates between respondents with low, medium, and high educational resources and membership in three age groups (15-29, 30-49, 50 and older). The reference categories are respondents in the youngest age group and respondents with high levels of education.
Societal-level variables: Islam, low levels of human empowerment, rentier economies, kinship ties, and a fundamentalist societal climate
Susceptibility to patriarchal values, however, cannot be attributed to individual factors in isolation. Based on the intraclass correlation coefficient, it is even possible to quantify the variance of the dependent variable attributable to the grouping variable or contextual factors. The intraclass correlation coefficient for patriarchal values amounts to 29.2% in the analyzed sample, which is a strong argument for the addition of societal-level factors.
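For illustration, the intraclass correlation can be derived from an intercept-only (null) multilevel model. The sketch below uses Python's statsmodels on synthetic stand-in data; it only mimics the logic of the calculation, not the actual estimation reported later, which was done in STATA.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: a 0-1.0 'patriarchal_values' score nested in countries.
rng = np.random.default_rng(0)
countries = np.repeat([f"c{i}" for i in range(20)], 50)
country_effect = np.repeat(rng.normal(0, 0.15, 20), 50)
df = pd.DataFrame({
    "country": countries,
    "patriarchal_values": np.clip(0.5 + country_effect + rng.normal(0, 0.2, 1000), 0, 1),
})

# Intercept-only (null) multilevel model with random intercepts for countries.
null_model = smf.mixedlm("patriarchal_values ~ 1", data=df, groups=df["country"]).fit()

between_var = null_model.cov_re.iloc[0, 0]   # between-country variance
within_var = null_model.scale                # within-country (residual) variance
icc = between_var / (between_var + within_var)
print(f"Intraclass correlation: {icc:.1%}")   # the article reports 29.2% for the real data
```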
To scrutinize the Islam-Patriarchy nexus, this analysis includes a dummy variable indicating whether Muslims comprise more than 50% of the population within the societies under study. This information is taken from the World Values Survey and cross-checked with the data set of Barro and McCleary (2003). I treat societies with a majority religion other than Islam as the reference group.
Any statement about human empowerment implies information about the material well-being of societies and the existence of democratic institutions. Therefore, the analysis includes both the 2010 human development index (UNDP 2020) and the scores on V-Dem's liberal democracy index (Coppedge et al. 2021) from the year in which the surveys were conducted. The scores of these two indices were added and then divided by two. For the analysis, I use the inverse of the resulting human empowerment index, as low levels of modernization and authoritarian regimes tend to underpin patriarchal values (Pickel 2013).
In addition, a dummy variable is employed to capture the patriarchal effect of rentier economies. This variable indicates whether states derive more than 40% of their revenues from the export of oil and gas. The information is drawn from Kuru (2014). Non-rentier economies are treated as the reference category in the analysis.
The claim that close kinship ties play an important role in maintaining patriarchal structures has so far hardly entered empirical analyses (Dilli 2015). To investigate their effects, the kinship intensity index of Schulz et al. (2019) provides an extremely valuable instrument. It includes information on cousin marriages, polygamy, coresidence of extended families, lineage organization (patrilineality vs. matrilineality), and endogamy at the community level. To normalize this variable, the lowest score on this index was set to 0 and the highest score to 1.0.
Finally, I utilize the country-specific mean scores on the fundamentalism scale to shed light on how contextual variations in the prevalence of fundamentalist beliefs affect people's susceptibility to patriarchal mindsets. The country means on the fundamentalism scale allow statements about the societal climate (Pickel 2009; Welzel 2013).
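A compact sketch of how these society-level variables could be assembled is shown below. All column names and values are hypothetical, the 'inverse' of the human empowerment index is read here as 1 minus the 0-1.0 score, and the merging logic is an illustration rather than the original workflow.

```python
import pandas as pd

# Hypothetical country-level inputs (one row per society); values are made up.
societies = pd.DataFrame({
    "country": ["A", "B", "C"],
    "hdi_2010": [0.90, 0.55, 0.70],              # UNDP human development index
    "libdem": [0.80, 0.10, 0.35],                # V-Dem liberal democracy index (survey year)
    "oil_gas_revenue_share": [0.05, 0.55, 0.20],
    "kinship_intensity_raw": [-1.2, 2.4, 0.3],
})

# Human empowerment: mean of HDI and liberal democracy; the analysis uses its inverse.
societies["human_empowerment"] = (societies["hdi_2010"] + societies["libdem"]) / 2
societies["low_human_empowerment"] = 1 - societies["human_empowerment"]

# Rentier economy dummy: more than 40% of state revenues from oil and gas exports.
societies["rentier"] = (societies["oil_gas_revenue_share"] > 0.40).astype(int)

# Kinship intensity index normalized so that the lowest score becomes 0 and the highest 1.0.
k = societies["kinship_intensity_raw"]
societies["kinship_intensity"] = (k - k.min()) / (k.max() - k.min())

# Fundamentalist societal climate: country means of the individual-level fundamentalism scale
# (assuming an individual-level data frame 'df' with 'country' and 'fundamentalism' columns):
#   climate = df.groupby("country")["fundamentalism"].mean().rename("fundamentalist_climate")
#   societies = societies.merge(climate, on="country", how="left")
print(societies)
```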
Combining these different data sets, some cases must be excluded from the analysis due to missing data. V-Dem (Coppedge et al. 2021) does not provide data for Andorra, Macau, and Puerto Rico. Information on the Human Development Index is also missing for Taiwan (UNDP 2020), and the same applies to the kinship intensity index for Singapore (Schulz et al. 2019). In addition, Qatar and Kuwait must be excluded from the analysis because religious affiliation was not queried in these surveys. After removing all respondents with missing data, the dataset includes 69 societies and the response behavior of 96,516 individuals (see Appendix for descriptive statistics).
Results
Before testing the hypotheses in detail, it makes sense to take a descriptive glance at the data. The question is whether Muslim-majority societies are indeed strongholds of patriarchal values and whether Muslims are more supportive of these values than members of other religious denominations.
The heat map in Fig. 1 visualizes the intensity of support for patriarchal values across the analyzed sample. For the sake of simplicity, the heat map differentiates between three groups of societies: (1) countries in which only a minority of the population supports patriarchal values (light gray), (2) countries in which less than half of the population is receptive to these values (medium gray), and (3) countries in which patriarchal values are supported by most of the population (dark gray). The empirical patterns replicate the 'values clash' between the Western and Islamic world that was identified by Inglehart and Norris (2003a). Whereas only a minority of the population in Western societies (Western Europe, Scandinavia, the USA, Canada, Australia and New Zealand) still supports patriarchal values, the very opposite is observable in Muslim-majority societies. In North Africa, the Middle East, the Gulf States, the Arabian Peninsula, the Caucasus, Central Asia, and Southeast Asia (e.g., Malaysia and Indonesia), support for patriarchal values is the prevailing norm. This is not to insinuate, however, that patriarchal values are an exclusive characteristic of Muslim societies. In predominantly Christian (e.g., Ghana), Hindu (e.g., India) and Buddhist (e.g., Myanmar) societies, a majority of the population speaks out against gender equality by the same token. And yet, it is hard to deny that Muslim-majority societies are strongholds of patriarchal values.

The findings of Inglehart and Norris (2003b) also hold at the individual level. The violin plots in Fig. 2 visualize the support for patriarchal values among non-denominational individuals and members of different religions. They clearly show that patriarchal values meet the highest approval ratings among Muslims. In view of the violin plots, one could of course argue that allegiance to a religious denomination is not a matter of fate when it comes to patriarchal values. Across all groups, there are both individuals in favor of gender equality and individuals advocating patriarchal hierarchies between men and women.
But such an interpretation misses the crucial point: On average, Muslims express the strongest support for patriarchal values, and there are significant median differences compared to members of other Abrahamic creeds, members of other religions, and individuals who do not feel affiliated with any religious denomination. Thus, if empirical research focuses on the Islam-patriarchy nexus, it is not out of prejudice, but rather because a reality-based problem is being scrutinized, namely the intense support for patriarchy in Muslim-majority societies and among Muslims.
Such descriptive visualizations, however, do not address the crucial question: Are we dealing with a spurious correlation? To test the hypotheses, I rely on multi-level modeling. This procedure is appropriate for my research interests, as it allows me to isolate the effects of individual and societal-level parameters. In addition, it enables me to test the hypothesized (cross-level) interaction effects (Hox 2002). The results of the first model (see Table 2) are in line with the descriptive findings. The first and second hypotheses both turn out to be plausible. Muslims (β = 0.082, p = 0.0001) display a higher tendency to support patriarchal values when compared to non-Muslims. It should be emphasized, however, that these value differences between Muslims and non-Muslims do not amount to a sharp chasm. The stronger support for patriarchal values among Muslims turns out to be a strictly relative finding and one that is tremendously sensitive to specific contexts (see Alexander and Welzel 2011). Arguing in favor of this is the fact that a predominantly Muslim population within a society (β = 0.169, p = 0.0001) represents a more powerful parameter than whether (or not) an individual self-identifies as Muslim. The societal-level effect of a predominantly Muslim population is remarkable: 41.7% of the observed variance in patriarchal values between societies can be explained by this factor on its own. Models 2 and 3 corroborate that the nexus between Islam and patriarchy is quite robust. Model 2 reveals that the effect of self-identification as Muslim on support for patriarchal values (β = 0.064, p = 0.0001) persists after adding almost all individual-level control variables. It is worth mentioning, however, that other parameters trigger more pronounced effects. Rather unsurprisingly, patriarchal values find greater appeal among men (β = 0.097, p = 0.0001) than among women. Moreover, religiosity (β = 0.094, p = 0.0001) and lower levels of education (β = 0.099, p = 0.0001) are shown to underpin patriarchal values. Model 3 adds the structural variables discussed in the theory section. This does not alter the robustness of the effect of a majority Muslim population on patriarchal values, however. Although the effect (β = 0.075, p = 0.026) attenuates compared to models 1 and 2, it is still significant. Hypothesis 3 can be confirmed in this and all subsequent models. In line with the theory, lower levels of human empowerment (i.e., poverty and living in authoritarian regimes) sustain patriarchal values (β = 0.398, p = 0.0001). The fact that democracies are a rarity in the Islamic world (e.g., Huntington 1992; Fish 2002; Koopmans 2021) is one of the key reasons why the effect size of a Muslim-majority population dwindles in this model. Hypotheses 4 and 5, by contrast, need to be rejected. All other things being equal, there is no evidence of higher susceptibility to patriarchal values in rentier economies or in societies with intense kinship ties.

Table 2 Multi-level explanations for patriarchal values. The table shows the results of several multilevel models. Parameters were calculated using maximum likelihood estimation. N (number of observations) is 96,516 respondents at the individual level and 69 countries at the societal level. Entries are unstandardized regression coefficients with robust standard errors in parentheses. Except for the dummy variables, all individual-level variables were centered on the country mean; the society-level variables were centered on the global mean. The models allow for random slopes for Muslim respondents. Explained variances are calculated from the change in the random variance component relative to the baseline model. Estimates were computed using the xtmixed command of STATA (version 16.1). ** p < 0.05, *** p < 0.01.

In bivariate analyses at the society level, these two variables are indeed strongly correlated with patriarchal values. However, neither variable is a viable candidate to explain away the patriarchy-promoting effect of a Muslim-majority population (see Model 3). Islam's cultural imprint on societies seems to be more conducive to the preservation of patriarchal values than the existence of lootable mineral resources, and this finding is also logically sound for at least two reasons. First, the emergence of patriarchy preceded the discovery of oil and gas resources. And second, high support for patriarchal values can also be observed in Muslim-majority societies that do not qualify as rentier economies (Charrad 2009). On the surface, Norris (2014) is not wrong in arguing that patriarchal values are more likely to be inspired by 'Mecca than by petroleum'. Conversely, it is rather surprising that the kinship intensity index fails to render the Islam-patriarchy nexus insignificant. Polygamy and endogamy were common practices throughout the Arabian Peninsula well before the advent of Islam, and Islam even introduced certain limits to these practices. However, these customs were not completely abolished either, and hence these practices still occur in many contemporary societies. During conquests and the accompanying conversions to Islam, various ruling dynasties succeeded in establishing patriarchal structures and values within societies that lacked strong kinship ties in pre-Islamic times (Engineer 2004). The cases of Indonesia and Malaysia are illustrative examples of this trend. Within the investigated sample, these two countries have the lowest scores on the kinship intensity index (Schulz et al. 2019) of all societies with Muslim-majority populations. There is a simple reason for this: Prior to the conversion to Islam, matrilineal household structures were not an uncommon phenomenon in Indonesia and Malaysia (Schröter 2021, pp. 119-123). One indication of Islam's strong influence on societies is that support for patriarchal values in Malaysia and Indonesia reaches similar levels to those in the Arab world. On balance, Model 3 substantiates the thesis of an 'elective affinity between Islam and patriarchal values' (Alexander and Welzel 2011; Inglehart and Norris 2003a; Lussier and Fish 2016). Even when controlling for important structural factors, Islam's ability to shape societies accounts for an explanatory surplus when it comes to patriarchal values. 6 This story, however, gets an entirely new twist once religious fundamentalism enters the equation. As evidenced by Model 4, religious fundamentalism is by far the strongest predictor of patriarchal values.

6 All in all, most Islamic societies find themselves in a vicious circle. All factors that reinforce patriarchal values are particularly prevalent in these societies. Muslim-majority societies display low levels of human empowerment (r = 0.470); they are overrepresented among rent-seeking economies (r = 0.461); they score high on the kinship intensity index (r = 0.672); and fundamentalist interpretations of religion are widespread (r = 0.729). A regression of all these factors on patriarchal values shows that a problematic degree of multicollinearity can be ruled out despite these high correlations. The VIF score is 2.21.
I find evidence of this effect at both the individual and the societal level. Consequently, hypotheses 8 and 10 are not rejected. Respondents who interpret their religion in a fundamentalist fashion (β = 0.248, p = 0.0001) are the strongest supporters of patriarchal values. Accounting for religious fundamentalism, the effect sizes of individual religiosity (β = 0.024, p = 0.011) and self-identification as Muslim (β = 0.028, p = 0.012) drop considerably. Although the effects remain statistically significant, they are not substantial in content given their minuscule effect sizes. In any case, the results do not permit the impression of an irreconcilable antagonism between Muslims and non-Muslims or between secular and religious citizens. An even more important factor than an individual's personal interpretation of religion is the societal climate of the surrounding environment. The societal prevalence of fundamentalist interpretations of religion (β = 0.313, p = 0.0001) is the most powerful driver of patriarchal values. One result that deserves emphasis is that the patriarchal effect of a predominantly Muslim population (β = 0.041, p = 0.258) turns out to be insignificant under control for the prevalence of religious fundamentalism. It is by no means wrong that patriarchal values find their strongholds in Muslim-majority societies. But it appears that previous research somewhat oversold the link between Islam and patriarchy. The central reason for patriarchal values is not the inalterable nature of Islam, but rather societal susceptibility to a fundamentalist version of Islam.
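The estimation setup summarized in the notes to Table 2 (maximum likelihood, country-mean centering of individual-level predictors, grand-mean centering of society-level predictors, and random slopes for Muslim respondents, estimated with STATA's xtmixed) can be approximated as follows. This Python/statsmodels sketch with synthetic data and hypothetical column names is only a rough analogue of the reported models, not a reproduction of them.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data with hypothetical column names (one row per respondent).
rng = np.random.default_rng(1)
n_countries, n_per = 30, 200
df = pd.DataFrame({
    "country": np.repeat(np.arange(n_countries), n_per),
    "muslim": rng.integers(0, 2, n_countries * n_per),
    "male": rng.integers(0, 2, n_countries * n_per),
    "religiosity": rng.random(n_countries * n_per),
    "fundamentalism": rng.random(n_countries * n_per),
})
society = pd.DataFrame({
    "country": np.arange(n_countries),
    "muslim_majority": rng.integers(0, 2, n_countries),
    "low_human_empowerment": rng.random(n_countries),
    "rentier": rng.integers(0, 2, n_countries),
    "kinship_intensity": rng.random(n_countries),
    "fundamentalist_climate": rng.random(n_countries),
})
df = df.merge(society, on="country")
df["patriarchal_values"] = np.clip(
    0.2 + 0.25 * df["fundamentalism"] + 0.3 * df["fundamentalist_climate"]
    + rng.normal(0, 0.15, len(df)), 0, 1)

# Country-mean centering of individual-level predictors (dummies excluded) ...
for col in ["religiosity", "fundamentalism"]:
    df[col + "_c"] = df[col] - df.groupby("country")[col].transform("mean")
# ... and grand-mean centering of society-level predictors.
for col in ["low_human_empowerment", "kinship_intensity", "fundamentalist_climate"]:
    df[col + "_c"] = df[col] - df[col].mean()

formula = ("patriarchal_values ~ muslim + male + religiosity_c + fundamentalism_c "
           "+ muslim_majority + low_human_empowerment_c + rentier "
           "+ kinship_intensity_c + fundamentalist_climate_c")

# Random intercept per country plus a random slope for the Muslim dummy, ML estimation.
model = smf.mixedlm(formula, data=df, groups=df["country"], re_formula="~muslim").fit(reml=False)
print(model.summary())
```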
Models 5-8 complement the previous findings by testing the hypotheses that involve assumptions about (cross-level) interactions. To simplify the interpretation of these results, Fig. 3 provides a visualization of the corresponding marginal effect plots (Helmdag 2017). As shown in Model 5 and Panel A of Fig. 3, the effect of self-identification as Muslim on patriarchal values is amplified by stronger religiosity (β = 0.061, p = 0.022). Religiosity tends to unleash conservative effects among Muslims and hence increases their susceptibility to patriarchal values. Among non-Muslims, this patriarchy-promoting effect of religiosity is less accentuated. There are no substantial differences between secular and more devout individuals in the non-Muslim reference group. This finding is more easily understood if one considers the balance of power between religious supporters and opponents of patriarchal values across the religious denominations. Among Muslims in the investigated sample, there is indeed a fraction of 'Islamic feminists' or religious supporters of egalitarian gender relations. In line with the findings of Glas and Alexander (2020), almost one out of four Muslims (23.1%) can be classified in this camp. However, most devout Muslims are still in favor of patriarchy (54.1%).
The size ratios of these two groups are significantly different among members of other religions. Among members of non-Abrahamic religions (e.g., Hindus and Buddhists), there is a stalemate between religious supporters (35.3%) and opponents of patriarchy (35.1%). And in the group of non-Islamic Abrahamic religions (i.e., Christians and Jews), religious proponents of gender equality (50.5%) are even in a majority position vis-à-vis devout defenders of patriarchy (24%). 7 Consequently, hypothesis 6 must be rejected. Among Muslims, religiosity is more likely to fuel an internalization of patriarchal values than to cause an agentic questioning of traditional gender roles. At the same time, this effect should not be overinterpreted. The strength of the interaction effect is not particularly impressive and does not add to the explanatory power at the individual level. Model 6 furthermore fails to corroborate that Muslim men (β = 0.012, p = 0.298) display a significantly stronger inclination toward patriarchal values than their non-Muslim reference group. Thus, hypothesis 7 must be rejected as well. Panel B in Fig. 3 simply indicates that men in general are more inclined to subscribe to patriarchal values than women. In other words: While women demand equality, men insist on their privileges, regardless of their religious affiliation. The ongoing struggle for gender equality in the Islamic world must therefore reckon with resistance from men, just as it does in the rest of the world. It is not very likely that men will surrender their privileges on a voluntary basis, and it is men rather than women who instrumentalize Islam in order to lend their privileges a sacred patina.
Religious fundamentalism occupies the pivotal role for these ideological ambitions. Since religious fundamentalism generates uniform effects (β = -0.035, p = 0.146), there are no striking particularities unique to Muslims. The more individuals lean toward a fundamentalist interpretation of their religion, the more they support patriarchal values (see Model 7 and Panel C). But even more important than an individual's interpretation of religion is the prevalence of fundamentalism at the societal level. The societal climate creates tremendous conformity pressure on individuals, and both Muslims and non-Muslims (β = 0.083, p = 0.106) adjust to this group pressure by adopting egalitarian or patriarchal attitudes toward women (see Model 8 and Panel D). Consequently, hypotheses 9 and 11 are not rejected.
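To illustrate how such a cross-level interaction can be specified, the sketch below (building on the synthetic data frame and imports from the previous sketch) adds a product term between the Muslim dummy and the grand-mean centered fundamentalist climate. The coefficient of this hypothetical term corresponds to the kind of estimate visualized in Panel D; it is an assumed analogue, not the original STATA specification.

```python
# Builds on the synthetic data frame 'df' and the smf import from the previous sketch.
interaction_formula = (
    "patriarchal_values ~ muslim * fundamentalist_climate_c + male "
    "+ religiosity_c + fundamentalism_c + muslim_majority "
    "+ low_human_empowerment_c + rentier + kinship_intensity_c"
)

cross_level = smf.mixedlm(interaction_formula, data=df,
                          groups=df["country"], re_formula="~muslim").fit(reml=False)

# A non-significant 'muslim:fundamentalist_climate_c' coefficient would indicate that
# Muslims and non-Muslims respond alike to a fundamentalist societal climate (H11).
print(cross_level.params.filter(like="muslim"))
```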
The density plot of Panel D in Fig. 3 shows why many studies were able to substantiate the thesis of an 'elective affinity between Islam and patriarchal values' (Alexander and Welzel 2011; Inglehart and Norris 2003a; Lussier and Fish 2016). The underlying reason for this result is the fact that Muslims account for the bulk of the population in most of the countries displaying high levels of support for fundamentalism. This empirical pattern emerges particularly clearly in the scatterplot in Fig. 4. On the one hand, it is evident that religious fundamentalism is a formidable predictor of patriarchal values at the societal level (r = 0.841, p = 0.0001). The second finding is that Muslim-majority countries are strongholds of accentuated approval of religious fundamentalism, which in turn translates into a higher susceptibility to patriarchal values. Religious fundamentalism is not a marginal fringe phenomenon in the Islamic world, but the general norm. Azerbaijan is the only country that breaks ranks in this regard. It is the only Muslim-majority society in the sample in which less than half of the population is susceptible to religious fundamentalism. I see at least three compelling reasons why religious fundamentalism provides a better explanation for patriarchal values than a simplistic reference to Islam. To begin with, fundamentalism and its regressive patriarchal ideologies only gained momentum in recent decades in societies such as Indonesia and Malaysia. In analytical terms, a static reference to a Muslim majority population fails to deliver a convincing explanation of this cultural drift. The observations of country experts on this subject are more instructive. They suggest that fundamentalist movements started to gain popularity in more recent times and that they owe their burgeoning popularity to generous funding from the Gulf states (e.g., Saudi Arabia) (Schröter 2019, pp. 52-62). Second, the misogynistic effects of religious fundamentalism are by no means unique to Islamic societies. After all, it is possible to observe very similar empirical patterns in Ghana, the Philippines, India and Myanmar, to give just a few examples. A third argument is that the highly simplistic reference to Islam is incapable of explaining the large differences between Muslim-majority countries. For example, support for patriarchal values differs significantly between Lebanon and Yemen and covaries with the respective society's susceptibility to fundamentalism. Of course, all these arguments do not change the fact that patriarchal values remain the norm in most Muslim-majority countries. But in contrast to the static reference to Islam, religious fundamentalism is not an inevitable destiny, but open to value change. Evidence for this possibility is provided by the sizeable proportions of devout Muslims expressing support for gender equality (Glas and Alexander 2020; Glas and Spierings 2019).
The studies on 'Islamic feminism', however, do not employ a cross-cultural comparative research design. Inevitably, this leaves one important question untouched: Why are devout supporters of gender equality so severely underrepresented among Muslims compared to other religious denominations? The scatter plot in Fig. 5 illustrates that religious fundamentalism is one piece of this puzzle. At the societal level, the prevalence of fundamentalist beliefs goes hand in hand with the proportion of respondents who are both religious and in favor of patriarchal values (r = 0.916, p = 0.0001). Religious fundamentalism thus offers an explanation for why the combination of strong religiosity and support for gender equality is still a minority phenomenon in most Islamic societies. The logic underlying this pattern is rather simple: Religious fundamentalism is a regressive ideology that severely curtails the space for emancipatory interpretations of religion, which in turn reinforces discrimination against women in these societies.

The fate of women in Islamic societies is the subject of heated public debate, and academic research likewise offers contrasting assessments of the situation. Within this context, the thesis of an 'elective affinity between Islam and patriarchal values' (Inglehart and Norris 2003b; Alexander and Welzel 2011; Lussier and Fish 2016) encounters criticism from the antithesis of 'Islamic feminism' (e.g., Glas et al. 2018; Glas and Alexander 2020; Glas and Spierings 2019). The first line of research reveals strong support for patriarchal values in the Islamic world and among Muslims, a robust finding that cannot be explained away even when controlling for structural and individual confounding factors (e.g., Alexander and Welzel 2011; Lussier and Fish 2016). The second line of research criticizes the accompanying framing of these findings, contending that it is wrong to describe the essence of Islam as hostile to women. The existence of 'Islamic feminists' or devout supporters of gender equality contradicts this narrative. Hence, women's rights and Islam are not mutually exclusive. Improvements in the situation of women in the Islamic world depend instead on an emancipatory interpretation of the Islamic faith (e.g., Glas et al. 2018; Glas and Alexander 2020; Glas and Spierings 2019). This contribution connects to these ideas and offers a synthesis. Its central argument suggests that religious fundamentalism provides a better explanation for regressive gender norms than simple references to Islam. Moreover, emancipatory interpretations of the Islamic faith are not ruled out either, though it is argued that the prevalence of fundamentalism severely shrinks their playing field. Studies of the Islam-patriarchy nexus need to be reconsidered once religious fundamentalism enters the equation. The key finding suggests that the 'elective affinity between Islam and patriarchal values' has been somewhat overestimated. This is not to deny that Islamic societies are indeed strongholds of support for patriarchal values, nor that Muslims display a particularly high susceptibility to patriarchal values in comparison to other religious denominations. It is just that the story takes a new twist after the effects of religious fundamentalism are accounted for. Controlling for religious fundamentalism, there is no significant nexus between a Muslim population majority and support for patriarchal values.
At the individual level, the differences between Muslims and non-Muslims remain significant, but the effect size of this parameter is far too small to invoke scenarios of a vicious clash of values. Once again, it is important to emphasize that the general finding of an 'elective affinity between Islam and patriarchal values' is not wrong on the surface. Islamic societies do display a remarkably high level of support for patriarchal values. The reason for this, however, is not the unchangeable nature of Islam, but religious fundamentalism. In this context, there is some good news and some bad news. The bad news is that religious fundamentalism is by no means a fringe phenomenon in the Islamic world. The good news, on the other hand, is that religious fundamentalism is not a constant, and that it can lose its popularity if a shift in values sets in. The existence of devout Muslims who reject patriarchal discrimination against women is a telling indicator of such developments (e.g., Glas et al. 2018; Glas and Alexander 2020; Glas and Spierings 2019). The thesis of an 'elective affinity between Islam and patriarchal values' is also hardly convincing from a normative perspective. Religious fundamentalists are strengthened in their position if it is claimed that patriarchal interpretations are a logical consequence of the Islamic faith. Another argument against such an assessment is offered by the fact that it is primarily (Muslim) men, and not (Muslim) women, who support patriarchal values. People dedicated to a critique of religion are thus well advised to address their rebukes more precisely. Such criticism would be more credible if it were not directed against Islam (or any other religion) in the abstract, but against specific actors who exploit religion for their own agendas. An appropriate audience for this criticism are those imams and parts of the ulema that promote patriarchal interpretations of Islam to ensure privileges for men. One reason why this would be appropriate is that religiosity among Muslims tends to contribute to the preservation of patriarchal norms. Compared to the reference category of non-Muslims, devout Muslims display higher support for patriarchal values than their fellow believers with a more secular lifestyle. This is not to deny the existence of devout supporters of gender equality among Muslims. Yet it is also important not to overlook the current balance of power. Proponents of patriarchal values still constitute a clear majority among devout Muslims. This is another observation for which religious fundamentalism provides an explanation. Its strong prevalence curtails the playing field for emancipatory interpretations of the Islamic faith throughout most societies in the Islamic world. At the end of the day, religious fundamentalism turns out to be a deeply misogynistic ideology. Hence, religious fundamentalism offers the strongest account for the support of patriarchal values at both the individual and the societal level. Controlling for religious fundamentalism, there is a clear leveling of the value gaps between Muslims and non-Muslims. If they subscribe to a fundamentalist interpretation of their religion, they are equally likely to support patriarchal values. Muslims and non-Muslims also adapt alike to the conformity pressures of their environment. In societies with a low prevalence of fundamentalist interpretations of religion, they tend to hold more egalitarian attitudes.
Conversely, they are equally inclined toward patriarchal values if they live in societies where fundamentalist ideologies predominate. These findings are entirely congruent with the theoretical assumptions and empirical research of Martin Riesebrodt (1998, 2000), in which patriarchal values are described as the cross-cultural common ground of various fundamentalist movements. This contribution suggests that his theoretical insights, informed by case studies of fundamentalist movements, are also applicable to a cross-cultural comparative analysis of patriarchal sentiments. The key source of inspiration for the present empirical analysis is the ideal type of politicized, legalistic-literalist fundamentalism (Riesebrodt 2000). Its central components also found their way into the definition of fundamentalism proposed by Pollack, Demmrich, and Müller (2022) in this Special Issue. In my opinion, the merit of this definition lies in the fact that the central characteristics of this ideal type are highlighted and accentuated. It is emphasized that fundamentalists claim access to an 'exclusive truth'; that they declare a 'superiority over all other positions'; and that they attest a 'universal validity' to their conception of truth. The politicization of this truth claim is also accentuated by Pollack, Demmrich, and Müller (2022), who emphasize that fundamentalists aspire to a 'restoration' and a 'radical change of the present'. These accentuations of the components of fundamentalism are helpful in distinguishing fundamentalists from traditional and orthodox religious groups. Another asset over the much-cited definition of fundamentalism by Altemeyer and Hunsberger (1992, p. 118) is that the fundamentalist claim to truth is not confined to concrete conceptions of a deity. This specification of the definition is preferable for cross-cultural analyses, given that not all world religions share notions of a deity. This is also a good opportunity to pinpoint a potential weakness of the present analysis, namely the fact that only three items were available to tap into fundamentalism. It could also be debated whether respondents really aspire to a radical change of political conditions when they want 'religious authorities to interpret the laws'. Having said that, it deserves to be emphasized that the fundamentalism scale used in this study clearly meets a nomological validity criterion. After all, the multi-item scale has a strong effect on its expected correlate, patriarchal values (Welzel et al. 2021). However, it is also clear that more items for each component would be the ideal case. It would then be possible to assess the fit of the factual dimensionality of these items against the four components of fundamentalism (Pollack et al. 2022). In addition, it would be feasible to examine whether the dimensionality of these items is equivalent across different religious groups (Rippl and Seipel 2015). The empirical validity and generalizability of the fundamentalism definition, as well as the results presented in this contribution, might thus be subjected to more rigorous tests.
This contribution, however, provides important hints that there is a compelling need to capture fundamentalist beliefs and to make a stronger distinction between religiosity and religious fundamentalism. The reason for this is quite simple. There is a tendency to rashly blame regressive tendencies such as patriarchal values and other forms of discriminatory attitudes on Islam, Muslims and religious individuals, even though religious fundamentalism is the crux of the issue. It may sound oversimplified, but fundamentalists are usually very religious, whereas not all religious people are also fundamentalists. By not including religious fundamentalism in the equation, there is a risk of falling prey to a spurious correlation. This contribution has illustrated this possibility using the Islam-patriarchy nexus as an example. Riesebrodt (2000) has anticipated another stimulating exercise for empirical analysis. According to his account, fundamentalist evangelicals in the United States of America have grudgingly accepted the integration of women into the labor market. Their last bastion since then has been sexual morality, or rather their concept of it. There are thus sound reasons to suspect that fundamentalism is also the main culprit when it comes to the demonization of homosexuality and the obsession with virginity.

Table 3 Descriptive statistics. (Source: Own calculations based on the World Values Survey (Haerpfer et al. 2021); Barro and McCleary (2003); UNDP (2020); V-Dem (Coppedge et al. 2021).) These are descriptive statistics of the dataset that underlies the multilevel models in Table 2.
Why you should share your data during a pandemic
The devastating impact of the COVID-19 pandemic can be partly attributed to a lack of evidence to inform effective prevention and treatment. The global scientific community has been racing against time to rapidly generate such evidence. Prospective meta-analysis (PMA) and other innovative approaches for pooling data from multiple studies are increasingly used and have the potential to expedite the pace at which knowledge is produced, especially for observational or surveillance data. However, investigators engaged in these innovative projects must leave behind some traditional practices of academic research. 1 For that reason, the use of PMA or other real-time pooling efforts may meet resistance from individual investigators dubious about the professional, ethical and utilitarian implications of such innovation.
We have previously proposed that a sequential PMA offers a useful approach to rapidly generate policy and practice-relevant guidance; our group is currently engaged with investigators working in 21 countries to pool data related to SARS-CoV-2 infection during pregnancy. 2 While the PMA process requires commitment from investigators and some effort to harmonise data collection elements, it also provides substantial potential benefits related to rapid dissemination of information. Through collaboration, serially updated PMAs allow results to be shared well before adequate sample sizes are available for individual studies and can therefore rapidly inform public health policy decisions such as those needed for the management of the pandemic.
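Purely as an illustration of how precision accumulates when studies are pooled serially (this sketch is not part of the authors' PMA protocol, which pools individual patient data, and every number below is invented), a fixed-effect inverse-variance pooling of study-level estimates shows the pooled standard error shrinking as each new study is added:

```python
# Hypothetical sketch: inverse-variance (fixed-effect) pooling of study-level estimates,
# updated as each new study contributes data. All effect sizes and standard errors are invented.
import math

def pool(estimates, ses):
    """Fixed-effect inverse-variance pooled estimate and its standard error."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Serially updated pooling: the pooled standard error shrinks with each added study,
# so a usable estimate can emerge before any individual study reaches its target sample size.
study_estimates = [0.42, 0.30, 0.55, 0.38]   # e.g., hypothetical log risk ratios
study_ses       = [0.30, 0.25, 0.35, 0.20]
for k in range(1, len(study_estimates) + 1):
    est, se = pool(study_estimates[:k], study_ses[:k])
    print(f"After {k} studies: pooled estimate {est:.2f} (SE {se:.2f})")
```

The same logic underlies a serially updated PMA: each update tightens the pooled estimate well before any single contributing study reaches its target sample size.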
In working to pool published and unpublished data, we have encountered resistance to data sharing from scientists accustomed to a more traditional approach. Although many investigators, especially those working in low-income and middle-income countries or with prior international collaboration experience, have readily agreed to participate, some investigators working in academic settings in high-income countries have expressed doubts. We address some of the most common misunderstandings related to observational data pooling that we have encountered.
ONE COMMON MISUNDERSTANDING IS THE CONCERN THAT PARTICIPATING IN PROSPECTIVE DATA POOLING EFFORTS WILL BE PERCEIVED AS PUBLISHING OVERLAPPING DATA
This concern is unfounded for two reasons. First, while the importance of avoiding duplicative publication of data from a single participant in two or more reports can hardly be overstated 3 , there is a straightforward method for handling this type of data overlap, which is to disclose it in publications. Individual patient data meta-analyses, whether prospective or retrospective, inherently achieve this disclosure by clearly identifying the sources of data and reanalysing it for purposes of pooling. Second, the International Committee of Medical Journal Editors (ICMJE) guidance on overlapping publications (developed prior to the current pandemic but highly relevant in light of current circumstances) strongly encourages dissemination of data in the context of a public health emergency, without concern of detriment to future publication, and it urges editors to give priority for publication to any study that has shared crucial information. 4 The ICMJE's prescient endorsement of data sharing in any setting of urgent scientific need strongly supports participation in well-designed pooling activities during the current pandemic.
Summary box
► Prospective meta-analyses and other innovative approaches for pooling data from multiple studies have the potential to expedite the pace at which knowledge is produced, especially for observational or surveillance data.
► In working to pool published and unpublished data over the past year, we have encountered resistance to data sharing from scientists accustomed to a more traditional approach.
► Common concerns and misunderstandings are that participating in prospective data pooling: (1) might be considered to be (unethical) publication of overlapping data; (2) may render study-specific manuscripts less novel, less prestigious or less appealing to high-impact journals; and (3) may be unethical where data are shared or analysed repeatedly while data collection is ongoing.
► We review the likely source of these concerns and argue there are not any robust reasons to avoid sharing data for appropriately designed, collaborative projects that can advance global health.
Another frequently expressed misunderstanding is that manuscripts analysing previously shared data may be perceived as less novel, less prestigious or less appealing to high-impact journals than those presenting data not previously shared. While this was a compelling concern in the past, the recent deluge of data published on preprint servers like medRxiv, and the clear success of those preprints in terms of Altmetric score and subsequent publication, demonstrate changing attitudes about prepublication results dissemination. 5 In fact, high-impact journals, including those in the BMJ family of journals, strongly encourage such sharing. 6 Contributing data to a pooled analysis, when transparently disclosed, has thus not shown itself to be prejudicial to publication in journals following such principles. Furthermore, publishing a single study, including all data points and a thorough discussion of the context, methods, strengths and limitations of the study, should still be valued for providing different insight than that of a pooled analysis. Investigators with concerns about a publication's perceived prestige should be reassured that participating in meta-analyses is generally thought to increase the visibility and impact of individual studies. 7
SOME INVESTIGATORS HAVE ARGUED THAT IT IS UNETHICAL TO SHARE OR ANALYSE DATA REPEATEDLY WHILE DATA COLLECTION IS ONGOING
While this may be true for randomised controlled trials, where underpowered interim analyses may bias future data collection, cause participant withdrawal or hamper future recruitment, ethical considerations regarding surveillance and other observational data are wholly different. The objectives of observational research are not related to study-supplied interventions; interim data analysis has little potential to introduce detrimental bias. Indeed, the most common way repeated data analysis might influence observational studies of COVID-19 would be by providing evidence to inform policies or guidelines that benefit future participants of such a study. This appears to be an argument in favour of ongoing analysis of observational data. Surveillance is a core component of public health science; ongoing collection, timely dissemination and linkages to public health practice are essential. 8 If surveillance identifies modifiable factors that are successful in preventing or ameliorating disease, this is considered a major success and public health good. 9 Effective response to the COVID-19 pandemic necessitates revisiting historical conventions. While established norms such as concealing data until publication may have benefits during times of stable knowledge generation, these norms have significant costs, including preventing rapid dissemination of crucial knowledge. In the current public health environment, characterised by massive increases in global morbidity and mortality, contributing data to pooled analyses is a contribution to the global good.
Answers to basic epidemiological questions regarding COVID-19 infection are urgently needed worldwide. We argue that there are no robust reasons to avoid sharing data for appropriately designed, collaborative projects that can advance global health.
Twitter Emily R Smith @DrEmilyRSmith
Competing interests None declared.
Patient consent for publication Not required.
Provenance and peer review Not commissioned; internally peer reviewed.
Data availability statement There are no data in this work.
Open access This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 Unported (CC BY 4.0) license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See: https://creativecommons.org/licenses/by/4.0/.
|
v3-fos-license
|
2015-03-27T04:16:54.000Z
|
2015-03-25T00:00:00.000
|
4496378
|
{
"extfieldsofstudy": [
"Medicine",
"Psychology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://capmh.biomedcentral.com/track/pdf/10.1186/s13034-015-0039-6",
"pdf_hash": "be541312dfa56cd2a81dba806fbc26b97716ff8e",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42461",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"sha1": "be541312dfa56cd2a81dba806fbc26b97716ff8e",
"year": 2015
}
|
pes2o/s2orc
|
Brief report: large individual variation in outcomes of autistic children receiving low-intensity behavioral interventions in community settings
Background Despite widespread awareness of the necessity of early intervention for children with autism spectrum disorders (ASDs), evidence is still limited, in part, due to the complex nature of ASDs. This exploratory study aimed to examine the change across time in young children with autism and their mothers, who received less intensive early interventions with and without applied behavior analysis (ABA) methods in community settings in Japan. Methods Eighteen children with autism (mean age: 45.7 months; range: 28–64 months) received ABA-based treatment (a median of 3.5 hours per week; an interquartile range of 2–5.6 hours per week) and/or eclectic treatment-as-usual (TAU) (a median of 3.1 hours per week; an interquartile range of 2–5.6 hours per week). Children’s outcomes were the severity of autistic symptoms, cognitive functioning, internalizing and externalizing behavior after 6 months (a median of 192 days; an interquartile range of 178–206 days). In addition, maternal parenting stress at 6-month follow-up, and maternal depression at 1.5-year follow-up (a median of 512 days; an interquartile range of 358–545 days) were also examined. Results Large individual variations were observed for a broad range of children’s and mothers’ outcomes. Neither ABA nor TAU hours per week were significantly associated with an improvement in core autistic symptoms. A significant improvement was observed only for internalizing problems, irrespective of the type, intensity or monthly cost of treatment received. Higher ABA cost per month (a median of 1,188 USD; an interquartile range of 538–1,888 USD) was associated with less improvement in language-social DQ (a median of 9; an interquartile range of −6.75-23.75). Conclusions To determine an optimal program for each child with ASD in areas with poor ASD resources, further controlled studies are needed that assess a broad range of predictive and outcome variables focusing on both individual characteristics and treatment components.
Background
Autism spectrum disorders (ASDs) are persistent disabling neurodevelopmental disorders that are clinically evident from early in life. Accordingly, many countries have given greater public attention to ASDs and allocated more public funds to implement and develop community services or promote research in this field. Among them, early identification and subsequent intervention for ASDs are considered key issues. A recent systematic review of early intensive intervention concluded that Lovaas-based approaches, early intensive behavioral intervention variants and the early intensive comprehensive approach (the Early Start Denver Model) resulted in some improvement in cognitive, language, and adaptive functioning in some young children with ASD compared with broadly defined eclectic treatments [1].
The growing body of evidence on early interventions for children with ASD suggests that there is great variability in children's response to treatment [1][2][3][4][5][6]. However, the responder's characteristics for each treatment have not been well identified, which makes it difficult for clinicians to recommend any specific form of intervention as the best option for an individual child with ASD.
In Japan, existing intervention services are generally insufficient in terms of quantity and quality to meet the identified needs of young children with ASDs and their families. To complement existing services, various ABA-based techniques combined with parental training are provided at a limited number of universities and private agencies in metropolitan areas, although at a lower intensity. In a recent study, Hiraiwa [7] retrospectively examined the severity of autism in 60 young Japanese children with autism and found a significant improvement in children receiving low-intensity one-to-one treatment based on the principles of ABA (≥7 hours per week, but less than the recommended intensity) compared with those receiving treatment-as-usual (TAU); the ABA methods included discrete trial teaching (DTT), verbal behavior (VB), and pivotal response treatment (PRT) provided by therapists and/or parents. Apart from case studies, Hiraiwa's study [7] has been the only study to examine children receiving ABA in Japan. However, it used only one, less sensitive child measure, and family functioning that might have influenced children's progress [8,9] was not measured.
The aim of this study was to thus explore individual outcome variations across time in young autistic children and their mothers, who received less intensive early interventions with and without ABA methods in community settings in Japan. The outcomes were assessed in terms of both child and family functioning using standardized instruments.
Participants
Seventeen children were recruited through notices posted in a specialized pediatric clinic located in a suburb of Tokyo, where one of the authors (M.H.) prescribes ABA therapy for children diagnosed with autism. In addition, three research volunteer families were contacted because they lived near the National Center of Neurology and Psychiatry (NCNP). All 20 children met the following criteria: (1) a diagnosis of autistic disorder according to DSM-IV-TR criteria corroborated by the Japanese versions of the Autism Diagnostic Interview-Revised (ADI-R) [10] and the Autism Diagnostic Observation Schedule (ADOS) [11] evaluated by an experienced child psychiatrist or psychologist with a research license; (2) an absence of medical conditions or obvious motor delay; (3) a chronological age below 7 years; (4) entry into an ABA and/or TAU program at two to five years of age. Of the 20 children, 18 (14 boys) participated in both intake and a 6-month follow-up assessment ( Figure 1).
Participants' characteristics (age, gender, scores at T1) are shown in Table 1. All participants were living with both parents. The number of siblings was similar to the national average [12]. Parental educational levels were higher than the national average [13]. Family income varied widely, but both its peak and mean were higher than the national average [14]. The percentage of women who were full-time housewives was higher than the national average [15].
The study protocol was approved by the NCNP Ethics Committee. Written informed consent was obtained from the parents of each participating child.
Treatment
Sixteen participants received ABA-based treatments, with 11 also receiving supplemental TAU (Table 1). Five children received only ABA and two received only TAU. In addition to DTT, various ABA techniques such as VB and PRT, either alone or in combination, were provided in a one-to-one setting by highly trained therapists supervised by the program consultant. Neither therapists nor supervisors were involved in this study. Information regarding the content, hours per week, and the monthly cost of received treatment was obtained from mother-completed questionnaires at T1, T2, and T3 (Table 1). Our participants received a near average to above-average intensity of ABA as a group, and paid monthly fees to the agency/agencies that ranged from approximately US $175 to $5,875 (based on parental information, according to the currency exchange rate at the time of this study). In contrast, TAU was either free of charge or the monthly fees that were paid were less than $125. Hours of ABA/TAU per week or the cost of received treatment per month were not significantly associated with any of the child and family characteristics (child's age, number of siblings, parental education, income). Although all parents were taught the basics of ABA and about various behavioral techniques to augment the effect of the intervention, additional ABA therapy carried out by parents themselves at home was not examined in this study. TAU consisting of one-to-one or group programs was provided by local community-based day nurseries or specialized private preschools. The programs were organized and provided by a team that included a psychologist, nursery school teacher, community nurse, and child care staff. The frequency and hours per week of TAU provided by the community were limited across the study areas (Table 1). The TAU content was diverse, with some of it including the use of picture cards or schedules, sensory integration therapy, or group-based social skills training.
Outcome measures
Regarding child measures, although testers (licensed clinical psychologists with a master's degree or doctoral degree) were blind to the intensity of the child's treatment, sometimes blindness to the type of treatment was compromised unintentionally. Autistic symptoms were assessed using the Japanese version of ADOS [11]. Since the use of Calibrated Severity Scores (CSS) as an indicator of autism severity has been shown to be more valid than the ADOS raw total score [16,17], CSS were calculated from raw ADOS scores [16,18].
A child's development was assessed using the Kyoto Scale of Psychological Development Test (KSPD) [19], which is widely used in Japanese clinical settings for young and/or developmentally delayed children and comparable to the Bayley Scales of Infant Development second edition (BSID-II) [20] (KSPD cognitive-adaptive (C-A) DQ and the BSID-II Cognitive facet, language-social (L-S) DQ and the Language facet, postural-motor (P-M) DQ and the Motor facet) [21]. Total DQs assessed by the KSPD are considered comparable to IQ scores for children with autism [22].
Children's internalizing and externalizing behavior problems were measured using the Japanese version of the parent-rated Child Behavioral Checklist (CBCL) [23]. T-scores were used as outcome measures.
Maternal mental health was assessed using the Parenting Stress Index (PSI) and a two-question case-finding instrument (TQI). The PSI, a self-report 120-item questionnaire comprising Child and Parent domains, assesses dysfunctional parenting in parents of preschool children [24]. The TQI consisting of two questions is a depression screening tool originally included in the Primary Care Evaluation of Mental Disorders Procedure (PRIME-MD) [25]. The utility of the number of yes answers has been previously demonstrated for Japanese adults [26].
Procedures
Eighteen participants completed both the T1 assessment (demographic information, ADI-R, ADOS, KSPD, CBCL, and PSI) and 6-month follow-up (T2) assessment (ADOS, KSPD, CBCL, and PSI). At T3 approximately 1 year after T2, the TQI and questionnaire about the received treatment were mailed to mothers, with 16 (88.9%) mothers completing and returning them (Figure 1). Time intervals T1-T2 and T2-T3 had a median of 192 days and an interquartile range of 28 days, and a median of 354 days and an interquartile range of 147 days, respectively. Performance-based tests were administered at the NCNP.
Statistical analysis
Wilcoxon's paired-sample test was used to compare outcome measures at T1 and T2. Since the non-normality of treatment variables was confirmed using the Shapiro-Wilk test, correlations between the predictor variables (including treatment variables and child/mother measures at T1) and score changes between T1 and T2 were examined by calculating Spearman's correlation coefficients. A Mann-Whitney test was used to compare predictor variables between participants whose mothers answered yes to one or both depression items at T3 and those whose mothers answered no to both questions. A p-value < .05 was considered statistically significant. The statistical analysis was performed using SPSS version 18.0 (SPSS Inc., Chicago, USA).
Table 2 provides details of each participant's measures at T1, T2 and T3. As shown in Table 1 and Table 2, levels of children's cognitive functioning, behavior problems and their mothers' parenting stress at T1, treatment hours per week, treatment cost per month, and T1-T2 change in child and mother measures varied greatly in this sample. Table 3 shows the correlations between the predictor variables and T1-T2 improvement for 18 pairs of children with autism and their mothers.
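As a minimal illustrative sketch of the tests named in the statistical analysis above (the study used SPSS 18.0; this is not the authors' syntax, and all scores, hours and group splits below are invented), the same analyses could be run with SciPy:

```python
# Illustrative only: Wilcoxon signed-rank, Shapiro-Wilk, Spearman and Mann-Whitney tests
# on hypothetical T1/T2 data; none of these values come from the study itself.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
t1 = rng.normal(60, 10, 18)          # hypothetical scores at intake (T1) for 18 children
t2 = t1 - rng.normal(3, 4, 18)       # hypothetical 6-month follow-up (T2) scores
aba_hours = rng.gamma(2.0, 2.0, 18)  # hypothetical ABA hours per week

# Paired comparison of T1 vs T2 outcomes (Wilcoxon signed-rank test)
w_stat, w_p = stats.wilcoxon(t1, t2)

# Normality check that motivated the non-parametric correlations (Shapiro-Wilk)
sw_stat, sw_p = stats.shapiro(aba_hours)

# Association between treatment intensity and T1-T2 change (Spearman's rho)
rho, rho_p = stats.spearmanr(aba_hours, t1 - t2)

# Comparison of a predictor between two hypothetical groups (Mann-Whitney U)
group_a, group_b = aba_hours[:9], aba_hours[9:]
u_stat, u_p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

print(f"Wilcoxon p={w_p:.3f}, Shapiro p={sw_p:.3f}, "
      f"Spearman rho={rho:.2f} (p={rho_p:.3f}), Mann-Whitney p={u_p:.3f}")
```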
Change in children's behaviors
As shown in Table 3, ABA hours per week were significantly correlated with an improvement in P-M DQ only (p = .036). TAU hours per week were not associated with any change. The monthly fee paid for ABA was significantly negatively correlated with an improvement in L-S DQ (p = .027), although it was positively correlated with an improvement in ADOS CSS, which approached statistical significance (p = .064). The improvements in the child measures listed in Table 3 were not significantly associated with clinical characteristics assessed at T1, although DQs at T1 were negatively correlated with the changes in C-A and L-S DQs (p = .043 and .053, respectively). The changes in each child measure (ADOS CSS, KSPD DQ, and CBCL) were not significantly correlated with each other, whereas among the KSPD DQs the changes in C-A DQ were correlated with those in L-S DQ and P-M DQ (rs = .51 and .45; p = .031 and .064, respectively).
Change in mother's parenting stress
Neither ABA nor TAU hours per week were significantly associated with an improvement in the PSI Child or Parent domain scores. ABA plus TAU hours per week were associated with a reduction in PSI Parent scores, which approached statistical significance (p = .075) (Table 3). The monthly cost of ABA was not significantly correlated with change in either PSI Child or PSI Parent domain scores. A reduction in the PSI Child domain scores was significantly correlated with an improvement in children's C-A DQ (rs = .67, p = .002) and CBCL internalizing scores (rs = .69, p = .001), while a reduction in the PSI Parent domain scores was significantly correlated with children's CBCL internalizing scores (rs = .52, p = .026).
Mothers' depression items at T3
The frequency distribution of TQI positive items (n = 16) was similar to that in a recent Japanese adult sample [26]. Participants whose mothers answered yes to either one or both depression items (n = 9) did not significantly differ in either ABA or TAU hours per week, the monthly ABA cost between T1 and T3 (not shown), or family characteristics when compared with the other children (n = 7), but had a significantly increased ADOS CSS (p = .046) and a lower total DQ (p = .071) at T1.
Discussion
We prospectively monitored the developmental progress of 18 children diagnosed as having autistic disorder who received various combinations of ABA (median 3.5 hours per week, range 0-12 hours per week) and/or TAU (median 0.5 hours per week, range 0-21.3 hours per week), and assessed their autistic symptoms, cognitive functioning, internalizing and externalizing problems at intake and 6-month follow-up, and their mothers' mental health at intake, 6-month follow-up, and 1.5-year follow-up. Large individual variations in outcomes were observed in this study, which is consistent with the findings from previous research [2,6,27]. A significant improvement at the group level was observed only for internalizing problems, irrespective of the type and intensity of received treatments or ABA cost per month. Changes in children's autistic symptoms, cognitive or language functioning and mothers' parenting stress were not associated with either ABA or TAU hours per week at an individual level. However, it was impossible to tease out the effect of ABA from that of TAU in this study. Recent studies have reported that young children with ASDs who received 2 years of lower intensity one-to-one behavioral treatment (4-15 hours per week, where the average hours per week were much higher than those in this study) showed significant progress in a broad range of parameters compared to children who had received TAU treatment [28,29]. The question of whether the intensity or duration of low-intensity ABA-based treatment of the type delivered in this study is associated with an improvement in child and family functioning should be examined in future prospective controlled studies.
Regarding the outcome predictors in children, initial IQ has been identified as a strong predictor in 4 of the 11 studies of early intensive behavioral interventions [2] and in a naturalistic study [6]: that was not the case in this study (although our findings may have been affected by the small sample size). As ASD treatment predictors as well as goals can vary according to the socio-cultural context (similar to general mental health issues [30]), future intervention studies should include more diverse race/ethnic/cultural factors to better understand their effects [31].
This study has a number of methodological limitations. First, the sample size was small. Second, we obtained information on ABA and TAU treatment only through parents. We therefore lacked information on their specific form or quality. The fidelity of the ABA programs delivered was not monitored, and we did not systematically evaluate parental involvement in home-based ABA therapy. Third, this study did not have a control group receiving a different type of treatment. Fourth, the blindness of the assessment was not perfect. The strengths of this study include a uniform assessment protocol with well-standardized measures of child diagnostic and developmental status as well as parental mental health [1,2].
As emphasized by Howlin et al. [2], treatment should not demand extensive sacrifice in terms of time, money, or any other aspect of family life; instead, it should benefit all involved. Although our preliminary results should be interpreted with caution, they suggest that in countries such as Japan with poor ASD resources, we need to focus on individual characteristics and to think about what components should comprise an optimal program for each child with autism.
|
v3-fos-license
|
2018-04-03T05:45:26.527Z
|
2016-02-01T00:00:00.000
|
24420852
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.scielo.br/pdf/rcbc/v43n1/0100-6991-rcbc-43-01-00042.pdf",
"pdf_hash": "5ab69a170944fca00ed4eb38ddb4a88e7f7b82da",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42462",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "3f65855c7e2e88d317911cf296e5f7b9564c688a",
"year": 2016
}
|
pes2o/s2orc
|
Survival following orbital exenteration at a tertiary Brazilian hospital
Objective: to analyze the epidemiology, clinical features and survival rate of patients undergoing orbital exenteration (OE) in a tertiary referral hospital. Methods: we conducted a retrospective study of all patients undergoing OE at the Hospital das Clínicas, FMUSP between January 2007 and December 2012. We collected data records related to gender, age, origin, length of stay, duration of the disease, other treatments related to the disease, number of procedures outside of the face related to the disease, follow-up and histological diagnosis. Results: we treated 37 patients in the study period. The average survival in one year was 70%, in two years, 66.1%, and 58.3% in three years. There was no significant difference in the one-year survival related to histological diagnosis (p=0.15), days of hospitalization (p=0.17), gender (p=0.43), origin (p=0.78), disease duration (p=0.27) or the number of operations for the tumor (p=0.31). Mortality was higher in elderly patients (p=0.02). The average years of life lost was 33.9 in patients under 60 years, 14.7 in patients in the 61-80 years range and 11.3 in patients over 80 years. Conclusion: the present series of cases is significant in terms of prevalence of orbital exenteration; on the other hand, it shows one of the lowest survival rates in the literature. This suggests an urgent need for improved health care conditions to prevent deforming, radical resections.
INTRODUCTION
Orbital exenteration (OE) is one of the most disfiguring procedures among ophthalmologic operations, and is characterized by the complete removal of the contents of the orbital cavity. According to the extent of resection, it can be classified as: 1) total, if there is resection of the eyelids; 2) subtotal, when preserving the eyelids; or 3) extensive, when it includes removal of the surrounding bony walls 1-3 .
OE is the therapy of choice when other less radical methods do not result in better prognosis. It is usually indicated in oncologic resections for local control of malignant tumors. However, aggressive diseases or benign tumors that cause uncontrollable pain and structural and/or extensive lesions also require it. Among the malignant lesions, basal cell carcinoma (BCC) is the most common skin cancer (80-90%), followed by squamous cell carcinoma (SCC). Examples of non-malignant diseases include: neurofibromatosis, fibrous dysplasia, mucormycosis, sharply contracted anophthalmic cavity, recurrent meningioma and orbital myiasis 4,5 .
The aesthetic consequences have a strong psychological impact on the patient and require a multidisciplinary approach.Many patients are referred to psychological services after the operation or even refuse to undergo the surgical procedure.Constant vigilance, good doctor-patient relationship, early diagnosis and prompt treatment would provide better prognosis, especially in emerging countries 6,7 .
This retrospective study aims to analyze the epidemiology, clinical features and survival rate of patients undergoing orbital exenteration (OE) in a tertiary referral hospital.
METHODS
The research project was approved by the ethics committee of the Hospital das Clínicas, University of São Paulo, and we carried out a retrospective study of medical records and pathology reports of all patients who underwent orbital exenteration at the facility between January 2007 and December 2012.
We identified cases by the International Classification of Diseases (ICD-10). We requested the medical records and analyzed them manually. The following data were collected: gender, age, origin, days of hospitalization, time of disease, other operations/treatments performed related to the disease, number of procedures performed outside the area of the face related to the disease, follow-up, histologic diagnosis and recurrence of lesions. To analyze the survival rate, we contacted the patients' family members by telephone, with the help of the Social Service, to identify and actively search for the occurrence of death.
We analyzed the variables by the Kaplan-Meier method and compared survival curves using the log-rank test, with the R software, version 3.1.1. We calculated the Years of Potential Life Lost (YPLL) by the method proposed by Romeder 8 , adjusted to the life expectancy of Brazilians in 2013 9 . The reference ages used were 78.6 years for patients under 60 years of age, 83.7 for patients between 61 and 80 years, and 96.7 for patients over 80.
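As a minimal illustrative sketch of the two calculations described above (the study used R 3.1.1; this is not the authors' code, and the follow-up records below are invented), the Kaplan-Meier product-limit estimate and a Romeder-style YPLL can be written as:

```python
# Illustrative only: Kaplan-Meier survival estimates and Years of Potential Life Lost (YPLL)
# computed on hypothetical data, using the reference life expectancies reported in the paper.

def kaplan_meier(times, events):
    """Product-limit estimator. times: follow-up in months; events: 1 = death, 0 = censored."""
    surv, curve = 1.0, []
    for t in sorted(set(times)):
        n_at_risk = sum(1 for ti in times if ti >= t)
        deaths = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        if deaths:
            surv *= 1 - deaths / n_at_risk
            curve.append((t, surv))
    return curve

def reference_age(age_at_death):
    """Reference life expectancies reported in the paper (Brazil, 2013), by age bracket."""
    if age_at_death <= 60:
        return 78.6
    if age_at_death <= 80:
        return 83.7
    return 96.7

def ypll(ages_at_death):
    """Years of Potential Life Lost: sum of (reference age - age at death) over all deaths."""
    return sum(max(reference_age(a) - a, 0.0) for a in ages_at_death)

# Hypothetical follow-up times (months) and event indicators (1 = death, 0 = censored)
times = [3, 8, 12, 14, 20, 26, 30, 36, 40]
events = [1, 1, 0, 1, 0, 1, 0, 0, 1]
for t, s in kaplan_meier(times, events):
    print(f"Estimated survival at {t} months: {s:.2f}")

# Arithmetic check against the reported group means: 78.6-44.7, 83.7-69.0 and 96.7-85.4
# give 33.9, 14.7 and 11.3 years of life lost, matching the averages reported in the Results.
print("YPLL for the three mean ages at death:", ypll([44.7, 69.0, 85.4]))  # 59.9 years
```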
RESULTS
We identified 39 patients, of whom two were excluded due to incorrect coding of the disease.
Demographic and clinical characteristics of patients
The study cohort included 17 men and 20 women, between 0 and 94 years of age (mean 62.2 years). The city of São Paulo was the origin of 15 patients (40.5%), 13 (35.1%) were from towns in the interior of São Paulo state, and nine (24.4%) were from other Brazilian regions. Thirty-three patients were white (89.2%), one was black (2.7%) and three were brown (8.1%).
The average time of diagnosis was 43.4 months (range three months to 12 years), except for congenital cases. The days of hospitalization ranged from 0 to 62 (average 14). Twelve patients (35.3%) were not subjected to any other surgical procedure related to the current injury, another 12 (35.3%) underwent one operation and 10 (29.4%) underwent more than one. Seventeen patients had additional treatment such as radiotherapy (ten patients - 27%), chemotherapy (two patients - 5.4%) and cryosurgery (three patients - 8.1%). Most were not submitted to any other operation outside the face area (81.8%) and eight (21.6%) were previously treated at least once.
Survival Rate
We excluded congenital cases from the survival analysis. Two patients died during hospitalization.
At the time of the study, 15 patients had died, 15 were alive and six could not be contacted. The average survival rate at one year was 70%, and this figure decreased to 66.1% and 58.3% at two and three years, respectively. Mean survival was 47.3 months.
The mortality rate was higher in older patients (p=0.02). There was no significant difference in one-year survival with respect to histological diagnosis, whether SCC (Figure 1), BCC or non-SCC/non-BCC (p=0.15), days of hospitalization (p=0.17), gender (p=0.43), origin (p=0.78), time of disease progression (p=0.27) or number of operations related to the tumor (p=0.31; Table 2).
The average age of death in the age group under 60 was 44.7 years; between 61 and 80 years, 69; and in patients aged over 80 years, 85.4. Considering the life expectancy of Brazil in 2013, the average years of life lost were, respectively, 33.9 years, 14.7 years and 11.3 years. The total YPLL was 191 years (Figure 2).
As the hospital where the study was conducted is a tertiary center, it is expected that 59.9% of patients originated from cities other than the capital. The geographical distance from the origin to the hospital also explains the choice for OE, as imprecise diagnosis at other health services and the lagged time to admission to the tertiary hospital may have made OE the only possible procedure for the control of local disease.
Figure 1 - Example of squamous cell carcinoma with orbital invasion.
Among the patient cohort, three constituted non-malignant cases. SCC and BCC together accounted for 70.2% of the histological diagnoses, which is consistent with other studies. BCC is the most common skin cancer in the periorbital area, but SCC spreads more easily and requires quick management to prevent disease progression 2,10,12,14,15 . Our findings are similar to the current literature, insofar as BCC represented 27% of the OE cases, while SCC accounted for 43.2%. Although SCC is more aggressive than BCC, the difference in survival at one year was not statistically significant between histopathologic diagnoses (p=0.15). The difference was evident only during approximately the first 30 months. Some studies, however, reported higher mortality after SCC than after BCC [16][17][18]. Additional treatments, such as Mohs micrographic surgery, may have been beneficial in the management of some SCC cases 19,20 .
The average mortality rate after OE also differs from the literature, since our series showed lower survival. Rahman et al. reported a survival rate of 93% at one year 10 . Younger patients had, on average, 33.9 years of life lost as a result of the diseases that lead to OE, and older patients lost more than ten years. Not only the aggressiveness of the disease, but also the lack of information, difficulty in access to health care and delay in correct diagnosis justify the current low survival rate 6,21 . Studies suggest differences in post-SCC mortality between developed and developing countries 22 . Advanced age may act as a confounding variable because, generally, it is related to comorbidities and other causes of death unrelated to the tumor. However, the predominance of advanced malignant disease is already an indicator of difficulty in access to adequate medical services for immediate treatment, which could improve survival even in the older age group.
In conclusion, this case series is significant in terms of the prevalence of orbital exenteration; on the other hand, it displayed one of the lowest survival rates in the literature. This suggests an urgent need for improved health care conditions to prevent deforming, radical resections.
DISCUSSION
Orbital exenteration is not a common procedure and is usually done in tertiary referral centers. Our case series presented one of the largest numbers of cases per year (37 patients in six years). Rahman et al. reported 64 cases in a period of 13 years 10 ; Mohr and Esser had 77 in 20 years 11 ; Bartley et al. described 102 in 20 years 12 ; and Maheshwari et al. published 15 in 10 years 13 .
Table 2 - Comparison of age, gender, days of hospitalization, origin, time of disease, number of operations and histological diagnosis with survival rate.
|
v3-fos-license
|
2021-08-24T13:24:15.790Z
|
2021-08-24T00:00:00.000
|
237271342
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpubh.2021.660536/pdf",
"pdf_hash": "63961fc0010cb9b26c240a2c421f2d1e20e1077c",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42464",
"s2fieldsofstudy": [
"Psychology"
],
"sha1": "63961fc0010cb9b26c240a2c421f2d1e20e1077c",
"year": 2021
}
|
pes2o/s2orc
|
Older Adults in the United States and COVID-19: A Qualitative Study of Perceptions, Finances, Coping, and Emotions
Introduction: Older adults have the poorest coronavirus (COVID-19) prognosis with the highest risk of death due to complications, making their COVID-19 experiences particularly important. Guided by the stress-appraisal-coping theoretical model, we sought to understand COVID-related perceptions and behaviors of older adults residing in the United States. Materials and Methods: We used convenience sampling to recruit persons with the following inclusion criteria: Aged ≥ 65 years, English fluency, and U.S. residency. Semi-structured in-depth interviews were conducted remotely and audio recorded between April 25, 2020 and May 7, 2020. Interviews were professionally transcribed with a final study sample of 43. A low-inference qualitative descriptive design was used to provide a situated understanding of participants' life experiences using their naturalistic expressions. Results: The mean age of participants was 72.4 ± 6.7. Slightly over half were female (55.8%), 90.6% were White, and 18.6% lived alone. The largest percentages of participants resided in a rural area (27.9%) or small city (25.6%). We identified four themes, including (1) risk perception, (2) financial impact, (3) coping, and (4) emotions. Most participants were aware of their greater risk for poor COVID-19 outcomes but many did not believe in their increased risk. Financial circumstances because of the pandemic varied: most reported no financial impact, while others reported negative impacts and a few reported positive impacts. Coping was problem- and emotion-focused. Problem-focused coping included precautionary efforts and emotion-focused coping included creating daily structure, pursuing new and/or creative activities, connecting with others in new ways, and minimizing news media exposure. Overall, emotional health was negatively affected by the pandemic although some participants reported positive emotional experiences. Conclusions: Perceiving themselves as high risk for COVID-19 complications, older adults used precautionary measures to protect themselves from contracting the virus. The precautionary measures included social isolation, which can negatively affect mental health. Older adults will need to be resourceful and draw on existing resources to cope, such as engaging in creative activities and new strategies to connect with others. Our findings underscore the importance of the preservation of mental health during extended periods of isolation by taking advantage of low-to-no-cost existing resources.
INTRODUCTION
The first recorded case of coronavirus disease (COVID-19) in the United States occurred on January 20, 2020 (1), and 10 days later the disease was identified as a Public Health Emergency of International Concern by the World Health Organization (2). The U.S. President issued the Proclamation on Declaring a National Emergency Concerning the Novel Coronavirus Disease on March 13, 2020 (3). The cumulative number of confirmed and probable COVID-19 cases in the United States between January 21, 2020 and May 20, 2021 was 32.8 million (4). People who are aged ≥ 65 years have the poorest COVID-19 prognosis, with the highest risk of death due to complications (5,6). The highest hospitalization rates have consistently been among persons aged ≥ 65 years, and the rate increases with age. As of May 19, 2021, there were 574,045 deaths involving COVID-19 in the U.S. across all ages, and 458,645 (80%) were among persons aged ≥ 65 years (7). Consequently, older adults were prioritized to receive a COVID-19 vaccine (8).
During the pandemic, social distancing and sheltering in place have been the main recommendations to avoid or reduce the likelihood of virus exposure (9). Further, older adults were advised to adhere to stricter social distancing directives. Centers for Disease Control and Prevention (CDC) guidance advised older adults and/or persons with underlying health conditions to limit their in-person interactions as much as possible (10). Other steps recommended by CDC for everyone included washing hands often; avoiding touching eyes, nose, or mouth; covering mouth and nose with mask when around others; and cleaning and disinfecting high frequency contact surfaces (9).
There is substantial scientific evidence with respect to the negative outcomes of social isolation. Social isolation is associated with increased loneliness, greater morbidity, and decreased quality of life as well as increased mortality risk (11). Prior to COVID-19, older adults experienced disproportionately more social isolation than younger persons (12). Mental and psychological health has been largely overlooked in response to the pandemic (13). Stress, anxiety, depressive symptoms, sleep disturbance, and loneliness are all heightened with social isolation (14,15). Several studies have reported on mental health-related issues among older adults in the U.S. with respect to the COVID-19 pandemic (16)(17)(18)(19). Not surprisingly, results have indicated that a large proportion of study samples report feelings of stress and loneliness (16,18). Yet, when compared to younger adults, some research has found that older adults have experienced better mental health during the pandemic (20)(21)(22). Most of what we know so far has been epidemiological in nature with relatively less research that has qualitatively examined how older adults are responding to the pandemic (23,24). Thus, to contextualize the published statistics, we sought to understand the responses and experiences of persons aged ≥ 65 years in the context of the COVID-19 pandemic in the United States.
Theoretical Model
Our study was framed within the stress-appraisal-coping theoretical model (25). Coping strategies and emotional reactions have been found to mediate the effect of the COVID-19 pandemic on stress (26). The stress-appraisal-coping theoretical model (25,27) posits that stress occurs when a person appraises an event as dangerous to their well-being and as demanding more resources than are available. Cognitive appraisal, including individual characteristics, perceptions, thinking, and environmental characteristics, affects individual reactions. Coping, or a person's ongoing, changing cognitive and behavioral efforts to manage stressors, can also influence stress (25). There are two types of coping in the literature: (1) problem-solving strategies are efforts to do something active to improve a stressful situation; and (2) emotion-focused strategies involve efforts to regulate associated emotional responses (28). Thus, we analyzed our data using this framework to better understand how the cognitive appraisal and coping of older adults during COVID-19 impact their stress response.
Data Collection
Participants were identified and recruited using a convenience sampling approach. During spring semester 2020, 22 Masters in Social Work students taking a research course were asked to recruit and interview two persons each with the following inclusion criteria: Aged ≥ 65 years, fluent in English, and living in the U.S. Students used their personal connections to identify potential participants who they initially contacted by telephone. All the students conducted semi-structured in-depth interviews with the two study participants that they identified and recruited using an interview guide (see Table 1) developed by the course professor (RTG). Given the sampling approach, most of the participants were family members of the students (e.g., parents, grandparents). All interviews were conducted remotely via a virtual meeting platform and audio recorded. Recordings were professionally transcribed and reviewed for accuracy. Forty-four interviews were conducted between April 25, 2020 and May 7, 2020. We excluded one interview since the participant did not meet the age criteria, yielding a total of 43 interviews analyzed for our study. The study received Western Carolina University's institutional review board approval.
Table 1 - Interview guide (excerpt):
• When did you start taking steps/precautions to minimize your exposure to the Coronavirus/COVID-19?
• What have you done? (e.g., no longer visits with persons not in the home, quit their job, stopped volunteering, canceled appointments, stopped attending group events, bulk buying)
Analyses
We used a low-inference qualitative descriptive design to provide a situated understanding of participants' life experiences using their naturalistic expressions (29,30). Low-inference refers to relying on verbatim accounts of what participants said and minimizing the extent to which we as researchers reconstructed what the participants were sharing. Individual transcripts and team debriefing recordings formed the data for our analyses. A well-established mixed inductive, deductive, and reflexive analysis (31) was conducted through team processes led by a senior researcher (RTG). The analytic team consisted of four investigators with social work (LA, HM, HD), public health (RTG, HD), and gerontology (LA, RTG) perspectives. Triangulation of interpretations among this interdisciplinary team strengthened credibility of the analyses (32). Transcripts were read individually by team members using a gestalt and then line-by-line approach to in vivo coding using participant language to answer the question: What were the responses and experiences of COVID-19 among our participants?
The team-based analytic process consisted of individually reading each transcript and then coming together to discuss words, phrases, and text segments that characterized how participants talked about their experiences. Attention was paid to what was said, the context it was offered in, and the language used. Common ideas were grouped as codes and into themes. An emergent coding schema was developed, and an intra- and inter-interview theme analysis was conducted to identify emerging patterns. We used a low-inference interpretive approach to stay closer to description. Naming and meaning of themes were developed through iterative consensus discussions across the team. Investigator triangulation and an iterative design were used to ensure emergent findings were recontextualized to check meanings in subsequent interviews. An audit trail of team discussions, theme development, and the refinement of the analytic framework was maintained through audio recordings and note taking. Analysis continued until saturation was reached, that is, until we concluded that no new information would be obtained by pursuing additional interviews.
Lastly, member checking was conducted with six study participants to further enhance credibility. This involved sharing the emerging themes and interpretations with the six participants to give them an opportunity to indicate if they agreed with or if they had any feedback on the emerging themes and interpretations.
Participant Characteristics
As shown in Table 2, the mean age of our participants was 72.4 ± 6.7, slightly more than half (55.8%) were female, 90.7% were White, and 18.6% lived alone. Persons self-identified the type of area in which they lived with the largest percentages of our study participants residing in a rural area (27.9%) or in a small city (25.6%).
Themes
Overall, we identified four themes with respect to responses and experiences with the COVID-19 pandemic among our participants, including (1) risk perception, (2) financial impact, (3) coping, and (4) emotions. Exemplar quotes for all themes are presented in Table 3. Brackets after quotes indicate gender (F = female, M = male) and the participant's unique identification number.
Theme 1: Risk Perception
Participants were asked "Do you consider yourself in a 'high risk category' if you contracted the Coronavirus or COVID-19?" Responses fell into six categories: (1) Yes, due to underlying health conditions; (2) Yes, because of age but with reluctance; (3) Yes, without reluctance but only because of age; (4) Yes, without elaboration; (5) No, because they are healthy despite meeting age criteria; and (6) No, without elaboration. Most of the respondents considered themselves in a high-risk category and the two most common responses were "yes, due to underlying health condition(s)" and "yes, because of age but with reluctance" in placing themselves in a high-risk category.
Theme 2: Financial Impact
Within the stress-appraisal-coping theoretical model, one's financial circumstances are resources that can be used and can affect how one copes. We discussed with participants the extent to which the pandemic had impacted their financial situation, and participants' discussions fell into four categories: (1) Yes, negatively; (2) Yes, positively; (3) No impact, without elaboration; and (4) No, not currently. Those who were negatively impacted had experienced a loss in their day-to-day income. Those who were positively impacted attributed it to not engaging in activities that involved spending money, such as going out to eat, shop, and/or for entertainment. There were also a few participants who shared that they benefited from the federal stimulus check.
Most of our participants had not experienced a negative financial impact from the pandemic, as they were retired and had a fixed income. The fourth category comprised participants who reported no current impact but mentioned the potential of being negatively impacted by losing money invested in their retirement accounts and the stock market. One participant discussed having temporary financial security through unemployment benefits, but was worried about possible financial insecurity once those benefits end.
Theme 3: Coping
Problem-Focused
Participants were engaged in a variety of problem-solving strategies to avoid contracting COVID-19. These precautionary efforts were either (1) to reduce exposure to the virus or (2) to reduce susceptibility to the virus. To reduce virus exposure, all participants engaged in some of the following activities: Mask wearing, glove wearing, social distancing, handwashing, shopping at specific or designated times, and working from home. A notable number of participants described their grocery shopping experiences during the pandemic. Participants discussed avoiding people in the store, minding the 6′ distance from others, shopping at designated times for older adults, using a pre-order and pick up service, and disinfecting items upon returning home. Also, many of our study participants discussed efforts to reduce their susceptibility to the virus if exposed, including healthier eating, meditating, exercising, and taking supplements to boost their immune system.
Emotion-Focused
In addition to the problem-focused precautionary activities, participants enlisted emotion-focused coping strategies, which included (1) creating daily structure, (2) engaging in new or creative activities, (3) connecting with others in new ways, and (4) limiting news media exposure. Creating daily structure simply involved establishing a routine for their day. With regard to pursuing new or creative activities, participants were taking care of house and/or yard projects they had put off or were starting new projects to keep them occupied. Some participants were using their time for creative pursuits such as playing an instrument or creating visual art. Several activities discussed involved food, such as cooking, baking, and/or eating. Some other activities included exercising, yoga, meditating, journaling, or deliberately spending more time outside. Participants shared how they were pursuing social engagement and support through familiar as well as new ways, including regular telephone calls, texting, and/or online video meetings. Some participants were socializing in person, outdoors and at increased distance, such as hosting "garden parties" or taking a walk. Lastly, to reduce their negative feelings because of the pandemic, participants shared that they were deliberately not listening to, watching, or reading the news.
Theme 4: Emotions
Participants discussed their emotional health in response to the pandemic. While most participants were negatively affected in some way, a few participants shared that COVID-19 had not affected their emotional health. Of those affected, anxiety, fear, and loneliness were expressed. With respect to anxiety, participants expressed overall anxiety, anxiety about the future's uncertainty, and concern about others they saw in public spaces who did not take precautionary steps such as mask wearing.
Table 3 - Exemplar quotes (excerpt):
"Well, yeah, I'm a real estate agent… I had 2 or 3 people that were ready or couples that were ready to buy houses right away and we were looking up until I mean like… And then one person was affiliated with a university and when they closed the university down, she just said, 'I won't be looking anymore.' So, that's gone. And then others have pretty much been the same way, just wanting to wait to see how things go. And then we also have the short-term rental properties up through Airbnb and all that just got canceled immediately. So yeah, our income has definitely been affected."
"I don't feel like it has affected me a great deal. Maybe a little boredom at times, but there's just things that I would like do that I miss. But I also realized that I'm in a whole lot better shape in these things than most people are." [M18]
F, female; M, male.
There were discussions of disappointments, such as missing socializing opportunities, eating out, and visiting with loved ones. Also, with respect to disappointments, many participants were displeased with the federal government's response to the pandemic. Finally, some of our participants shared that they had experienced positive feelings, including having less stress, enjoying having more time, and feeling a generalized sense of gratitude for what they had.
DISCUSSION
Most of our participants perceived themselves as in the high-risk category if they contracted COVID-19. This risk perception of the study participants makes sense, as 81% of deaths due to COVID-19 are among persons aged ≥ 65 years (7). When viewed with Lazarus and Folkman's stress-appraisal-coping theoretical model, we understand that our participants cognitively appraised COVID-19 as a high-risk threat and employed significant coping skills and resources to ameliorate the emotional distress from the stress (25). There is still much to be learned about COVID-19 risk perception in older adults as study results thus far have been mixed. A study in Wuhan, China, found a higher percentage of middle-aged and older adults compared to younger adults perceived themselves as high risk for contracting COVID-19 while a slightly greater percentage of younger adults perceived themselves at high risk of death if they contracted COVID-19 (33). Prior research has found that, compared to younger adults, older adults perceived themselves at lower risk of contracting the virus (34-36) and of dying from the virus (36).
COVID-19's financial impact has been significant, with up to 33% of people worldwide having lost income and 14% having lost a job (37). Yet, older adults have fared better financially compared to younger counterparts (38), which aligns with our findings that most of our participants were not negatively impacted financially by the pandemic. A survey of almost 5,000 U.S. adults found that across age groups, the highest percentage of those who were prepared for a financial emergency were aged ≥ 65 years. Further, this survey found that persons aged ≥ 65 years were the least likely to report losing a job and/or taking a cut in pay (38). Another U.S. study with 825 persons aged ≥ 60 years found that only 5.5% had concerns about experiencing any personal financial repercussions of the pandemic (39).
Regarding coping strategies, all our study participants engaged in both problem- and emotion-focused efforts. Problem-focused coping included precautionary steps to avoid contracting COVID-19, which corroborates other research that has shown that most older adults take the pandemic seriously. Such studies have found that older adults are the most likely to adhere to the CDC's recommendations and to engage in precautionary behaviors, including wearing a face mask, washing or sanitizing hands, keeping 6 feet of distance from others, avoiding restaurants, and avoiding public or crowded places (26, 40-42).
Like other studies, our participants also coped with emotion-focused strategies, including engaging in more solitary activity (16), changing exercise regimens from group settings to home settings (43), and increasing social media use and texting (16). Moreover, our participants established low-cost coping methods such as eating healthier, taking supplements, working on projects and creative activities, finding alternatives to in-person socialization, and decreasing consumption of news media. Research examining behaviors of persons during the pandemic has found that older adults were less likely to engage in unproductive coping strategies such as substance use and behavioral disengagement compared to younger adults (26). As in other studies with older adults, and not surprisingly, our participants reported that COVID-19 had negatively affected their emotional health, including increased loneliness (16,44), depression (45), and anxiety (20). In the general world population, the average General Anxiety Disorder score has increased (0.82-3.31) and the average Patient Health Questionnaire score has increased (0.94-2.59) (37). Yet, compared to younger persons, older adults have been found to be less likely to report depression or anxiety symptoms (20-22).
Our data provide important information about how older adults perceive the problem of COVID-19, their available resources, their coping styles, and how these factors impact their emotional health. We found that while our participants perceived themselves as high risk if they contracted the virus, most of them believed they had adequate financial resources to mediate future problems related to the pandemic. This finding could explain, in part, how well our participants coped by limiting spending, minimizing COVID-19 exposure, and adopting healthier behaviors. While our participants acknowledged emotional burden, these coping skills appeared to help mitigate a more severe emotional impact the pandemic could have had on them. Lazarus and Folkman's theory may explain why others have found that older adults have not experienced as much emotional distress as younger counterparts: unlike younger adults, most older adults are protected by fixed incomes, Social Security, and Medicare, and benefit from a lifetime of developing coping skills. It could also explain how research has found that those with financial resources are more likely to have effective coping skills, follow precautionary measures and recommended guidelines, and report less depression and anxiety (26,37).
A crux of COVID-19 problem-focused coping is that the coping skill of physical distancing increases risk for isolation, a well-known risk factor among older adults for poor emotional and physical health (46,47). While some study participants continued activities such as work, most did so from home. Participants had stopped volunteering, visiting others, eating out, or attending events and altered their grocery shopping to minimize potential virus exposure. Our findings suggest that financial stability, access to technology for socialization, access to healthy foods, and safe exercise options are important coping skills and resources to alleviate emotional distress from the stress response. Further, there are strategies that health care and social service providers can employ to help older adults address the emotional impact of COVID-19, including:
• Use a strengths perspective and praise patients who are realistic about their COVID-19 risk perception and make efforts to stay healthy and socially distance.
• Screen for loneliness, anxiety, and depression, especially among persons who live alone.
• Screen for financial impact of COVID-19. For those who have had financial loss, recognize that it is a risk factor for impaired coping and emotional health and connect them to resources such as Area Agencies on Aging.
• Elicit unique coping skills before providing advice and encourage using skills that have worked for them in the past. Listen for healthy behaviors that are being pursued, such as exercise, healthier eating, and supplements, and acknowledge these efforts to help build self-efficacy.
• Help identify wellness and/or exercise opportunities.
• Inquire about eating habits or conduct a nutrition screening. Refer those at risk to nutritional counseling and/or related services such as Meals on Wheels.
• If the individual does not have effective coping skills, encourage strategies such as creating daily structure, engaging in new or creative activities, connecting with others safely, and limiting news exposure.
• Recognize that it is normal for persons to experience a myriad of emotions during a pandemic, especially for those who are socially isolated. Refer those with emotional distress to effective treatments such as cognitive behavioral therapy and problem-solving therapy (48).
There are several study limitations that warrant acknowledgment. These data were only collected at a single interview relatively early during the pandemic among persons residing in the U.S. Had participants been interviewed later during the pandemic, it is likely that they would have appraised their risk differently, with changing resources such as limited capacity at hospitals and an overall slow vaccine distribution. Such circumstances may have influenced coping and emotional reactions, especially if participants believed they had less control over the outcome. Also, it is possible that if more than one interview per participant had been conducted, greater rapport would have been established, potentially yielding more information regarding their experiences. We did not collect the state of residence of our participants. Different enacted state-level policies may have influenced the experiences and perceptions of the participants. Last, most of our participants were White, limiting our ability to examine race differences. Future research is warranted to investigate racial and ethnic differences in COVID-19 experiences, including among Blacks, American Indians, Alaska Natives, and Latinx persons. These groups have been found to be more likely to contract the virus and to experience greater negative health effects than the general U.S. population (49-52).

These insights into risk perceptions, financial resources, coping strategies, and emotional health have public health implications. Studies prior to the COVID-19 pandemic indicate that older adults were at increased risk for social isolation and loneliness, which can lead to physical and emotional problems (46). Clearly, the pandemic has presented greater challenges for older adults as well as for their health care and social service providers. The COVID-19 pandemic has raised concerns with respect to reduced physical activity, limited use of services, increased anxiety, and compromised nutrition among older adults (15). We heard that our participants were being resourceful in their coping, although concerted efforts are needed to bolster programs and services that support older adults. Further, such programs and services are now tasked with developing new and creative ways to reach their patients and/or clients. Such efforts, for instance, can include helping with high-speed internet access, providing support with technology to connect to social networks, increasing the use of telemedicine and telepsychiatry, providing home-delivered meals, and distributing the COVID-19 vaccine.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Western Carolina University Institutional Review Board. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
AUTHOR CONTRIBUTIONS
RG contributed to the conception, study design, and interview guide. HM and HD conducted interviews and member checking. RG, HM, and HD conducted the qualitative analysis. RG, HM, HD, and EA wrote sections of the manuscript. All authors contributed to the manuscript revision, read, and approved the submitted version.
|
v3-fos-license
|
2018-04-03T05:23:02.665Z
|
2016-06-09T00:00:00.000
|
16820819
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://downloads.hindawi.com/journals/criot/2016/1706915.pdf",
"pdf_hash": "a288978a7be4e039db70fa6049e2b03e3939f09e",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42465",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "008f3b39b432f9bef087eeef22fafe60cb287089",
"year": 2016
}
|
pes2o/s2orc
|
Giant Primary Schwannoma of the Left Nasal Cavity and Ethmoid Sinus
A unilateral tumour in the nasal cavity or paranasal sinuses is commonly caused by polyps, cysts, and mucoceles, as well as invasive tumours such as papillomas and squamous cell carcinomas. Schwannomas, in contrast, are rare lesions in this area (Minhas et al., 2013). We present a case of a 52-year-old female who presented with a 4-year progressive history of mucous hypersecretion, nasal obstruction, pain, and fullness. Imaging of the paranasal sinuses showed complete opacification of the entire left nasal cavity and sinuses by a tumour causing subsequent obstruction of the frontal and maxillary sinuses. The tumour was completely excised endoscopically. Histopathology was consistent with that of a schwannoma.
Background
Schwannomas are benign tumours originating from peripheral nerve sheaths. Previous reports indicate that 25-50% of schwannomas occur in the head and neck region, but tumours originating from the nasal cavity or paranasal sinuses are rare, with a reported rate of approximately 4% [1,2]. In 2001, approximately 40 cases of sinonasal schwannomas had been reported [3]. In 2016, we found just over 100 cases reported in the literature [4].
We report the case of a huge nasoethmoidal schwannoma excised endoscopically.
Case Presentation
A 52-year-old female presented with a 4-year progressive history of left nasal obstruction, pain, and fullness with intermittent epistaxis. Her symptoms began in China, where she described an episode of an upper respiratory tract infection with subsequent development of ongoing mucous production. Her history was otherwise unremarkable.
Findings on nasendoscopy showed a huge left nasal polyp completely obstructing the anterior nasal cavity limiting further examination. On nasendoscopy through the right nostril, the tumour could be seen occupying the left nasopharynx but the right nasal cavity was clear. Examination of the eye and the oral cavity was unremarkable.
Investigations
High resolution CT of the paranasal sinuses showed complete opacification of the left frontal, ethmoidal, maxillary, and sphenoidal sinuses and nasal cavity. The tumour significantly displaced the left lateral nasal wall into the maxillary sinus. There was also hyperostosis of the sphenoid and maxillary sinus walls. The right nasal cavity and sinuses were clear. Axial, coronal, and sagittal views are demonstrated in Figures 1-3.
An MRI was performed to better visualise the soft tissue mass. The mass was identified to fill the entire left nasal cavity, extending into the choana to completely fill the nasopharynx. We did not acquire a preoperative tissue biopsy given how promptly a complete excisional procedure could be performed. Furthermore, given the patient's severity of symptoms, complete excision was necessary regardless of the diagnosis, and therefore we felt a biopsy would not have changed immediate management.
Treatment
Endoscopic resection of the schwannoma was undertaken. The patient was placed under general anaesthetic in a reverse Trendelenburg position. The superior extent of the tumour was dissected off the anterior skull base with no attachment found. Clearance of the frontoethmoidal recess was followed by free evacuation of mucus from the frontal sinus. Inferiorly the tumour was not attached to the nasal floor. Obstruction of the left maxillary sinus ostium was addressed with clearance of the maxillary sinus contents. The tumour was attached posteriorly to the basisphenoid, the point of presumed focal origin. The nasopharyngeal component was easily removed with no attachment found. The floor of the sphenoid sinus was drilled further as the pterygopalatine ganglion was felt to be the likely point of origin, and haemostasis was achieved. The tumour was removed in two sections.
The patient was extubated and had an uneventful postoperative recovery.
The two separate sections of the nasal mass were sent for histopathological examination (29 × 25 × 22 mm and 40 × 22 × 20 mm). Both sections were composed of Antoni-A and Antoni-B areas of variable cellularity consistent with those of a schwannoma. As expected, immunoperoxidase stains for S100 and SOX10 were positive. Tumour was noted to compress thin strips of bone in a submucosal distribution in all margins. There was no evidence of malignancy. Fungal microscopy and culture showed no elements or growth.
Outcome and Follow-Up
The patient had a very good response to surgery at two-month follow-up, with complete resolution of symptoms. On repeat nasendoscopy, there was no evidence of residual or recurrent disease.
Discussion
Unilateral tumours in the nasal cavity causing nasal obstruction, pain, fullness, and epistaxis are usually caused by benign disease processes such as polyps, cysts, and mucoceles. A unilateral tumour originating from the nasal cavity should also prompt consideration of the rare esthesioneuroblastoma, a neoplasm originating from the olfactory neuroepithelium that has significant heterogeneity in management and variation in prognosis [5].
Schwannomas of the nasal cavity and sinuses produce similar symptoms but are much rarer. Previous reports suggest that sinonasal schwannomas represent less than 4% of all head and neck schwannomas, with only approximately 40 cases reported as of 2001 and 100 cases as of 2014.
Schwannomas are benign tumours of peripheral nerve sheaths, and it has been proposed that sinonasal schwannomas may originate from the ophthalmic or maxillary branches of the trigeminal nerve or from sympathetic or parasympathetic fibres from the carotid plexus or sphenopalatine ganglion [6]. The majority of patients present with progressive nasal obstruction, pain, headache, and epistaxis, but the tumour can occasionally cause ptosis, proptosis, or diplopia.
The diagnostic workup for sinonasal schwannoma should include nasendoscopy, CT, and MR imaging of the paranasal sinuses to examine the extent of disease and to guide the surgical approach for excision. As most schwannomas have a focal origin and are for the most part encapsulated, these tumours are commonly amenable to endoscopic resection.
This case report highlights the need for schwannoma to be included in the differential diagnosis of any soft tissue mass of the sinonasal spaces.
|
v3-fos-license
|
2022-05-28T15:09:27.758Z
|
2022-05-01T00:00:00.000
|
263374900
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fonc.2022.1025355/pdf",
"pdf_hash": "5de84cd2ffdf4d3ad54b67d656cfd0090dacb2f4",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42468",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"sha1": "1376362070be374df984c2ec531ed98919be3218",
"year": 2022
}
|
pes2o/s2orc
|
Safety profile of robotic-assisted transperineal MRI-US-fusion guided biopsy of the prostate
Introduction Robotic-assisted transperineal MRI-US-fusion guided biopsy of the prostate is a novel and highly accurate procedure. The aim of this study was to evaluate the MonaLisa prostate biopsy system in terms of safety, tolerability, and patient-related outcomes. Methods This prospective study included 228 patients who had undergone robotic-assisted transperineal MRI-US-fusion guided biopsy of the prostate at the University Hospital Basel between January 2020 and June 2022. Peri-operative side effects, functional outcomes and patient satisfaction were assessed. Results Mean pain score on the day of biopsy was 1.3 points on VAS, which remained constant on the day after biopsy. Overall, 32 of 228 patients (14%) developed grade I complications according to the Clavien-Dindo classification. No higher-grade complications occurred. Gross haematuria, hematospermia and acute urinary retention occurred in 145/228 (63.6%), 98/228 (43%) and 32/228 (14%) patients, respectively. One patient (0.4%) developed urinary tract infection. Conclusions Robotic-assisted transperineal MRI-US-fusion guided biopsy of the prostate performed under general anesthesia is a safe and well-tolerated procedure. This technique allows perioperative antibiotic prophylaxis to be omitted while minimizing the risk of infectious complications. We attribute the favorable risk profile and tolerability to the minimally invasive approach via two entry points.
Introduction
Prostate cancer (PCa) is the second most common malignant disease in men worldwide (1). Suspicion for PCa is based on pathological digital rectal examination (DRE), prostate specific antigen (PSA) or magnetic resonance imaging (MRI) findings and indicates, as standard of care, a biopsy of the prostate (PBx) for histopathological verification (2). PBx represents one of the most common urological procedures, with more than 1 million interventions performed in Europe and the United States every year (3). PBx can be performed via a transrectal (TR) or transperineal (TP) route, each approach being associated with specific benefits and limitations. TR offers practicability in the in-office setup due to feasibility under local anesthesia, reflected by the majority of PBx being performed via the TR approach in the US (93.1-99.2%) (4). However, puncture of the prostate through the rectal ampulla is associated with a significant risk for infectious complications (5). The incidence of infectious complications after TR-PBx ranges between 5 and 7%, with a hospitalization rate of about 2% (2,3). Rising rates of fluoroquinolone-resistant organisms, which could be found in up to 30% of rectal swab cultures prior to TR-PBx, possibly aggravate the situation (2). With the TP approach, infectious complications are significantly lower, even negligible (2,6,7). Technological advances in diagnostics of PCa, like the implementation of multiparametric MRI (mpMRI) and MRI-targeted PBx, have increased the detection rate of significant PCa while simultaneously decreasing the detection rate of clinically insignificant PCa (8). Newly available robotic-assisted biopsy systems like MonaLisa combine robotic precision with the preferable transperineal approach. Furthermore, this system allows for minimally invasive and gentle sampling requiring only two puncture sites, thus promising lower complication rates and better patient tolerability.
The robotic-assisted MRI-TRUS-fusion allows for highly precise biopsies with maximal reproducibility, while safely sparing the neurovascular bundle. So far there are no prospective reports on patient-related outcomes in terms of tolerability and complications after robotic-assisted transperineal MRI-US-fusion guided biopsy of the prostate (RA-TP-PBx). An upcoming PBx, bearing uncertainty regarding a suspected malignant disease as well as interventional risks, poses a physical and psychological burden for patients. Therefore, the ideal biopsy technique is as painless as possible and combines low complication rates with utmost diagnostic precision. The aim of this study was to evaluate the MonaLisa prostate biopsy system in terms of safety, tolerability, and patient-related outcomes.
Materials and methods
This prospective study analyses the safety profile and functional results of 228 patients who had undergone RA-TP-PBx at the University Hospital Basel between January 2020 and June 2022. Indication for biopsy resulted from suspicious DRE, elevated PSA values or suspicious lesions in mpMRI. Imaging was performed in all patients prior to biopsy, and suspicious lesions were classified according to PI-RADS v2.1. The study was approved by the local ethics committee (ID 2020-01381) and was performed in accordance with the Declaration of Helsinki. All patients provided written informed consent. Side effects, clinical, functional, histological, and demographic data were collected and assessed. In addition, medication for male urinary dysfunction, type of anticoagulation and immunodeficiency, including diabetes mellitus type 2, immunosuppressants or acquired immune deficiency syndrome (AIDS), were recorded.
Biopsy technique
A 3D model of the prostate, including suspicious lesions, was generated by a skilled team of radiologists (DJW, PB), and RA-TP-PBx was performed with an iSR'obot™ MonaLisa device (Biobot©) (Figure 1) by one experienced surgeon (CW). Anticoagulation with factor Xa inhibitors and phenprocoumon was discontinued and bridged with low-molecular-weight heparin according to the individual risk of a thromboembolic event. Therapy with acetylsalicylic acid was continued and was used to bridge patients under therapy with clopidogrel. Standardized anti-infective prophylaxis was administered to the first 60 (26.3%) patients. After the initial implementation phase of the new biopsy technique, anti-infective prophylaxis was omitted if not indicated by positive findings in preoperative urine culture. After RA-TP-PBx, no transurethral catheter was used by default. A detailed description of our procedure has been published previously (9).
Analysis and statistical methods
Validated questionnaires, including "International Prostate Symptom Score" (IPSS) with quality of life (QoL), "International Consultation on Incontinence Questionnaire -Urinary Incontinence" (ICIQ), and "National Institutes of Health -Chronic Prostatitis Symptom Index" (NIH-CPSI) were used to assess functional outcome before and about one week after biopsy. Additionally, the occurrence of side-effects including acute urinary retention (AUR), gross hematuria, hematospermia, pain according to visual analog scale for pain (VAS, 1 -10 points), urinary tract infections (UTI), local complications and patient satisfaction were collected and analyzed.
The database was created using Excel (Microsoft©), and statistical analyses were performed with SPSS Statistics 24.0 (IBM©). The Chi-squared and Fisher's exact tests were used to compare nominal data. For determination of significant differences among normally distributed data, the Student's t-test (dependent/independent) was applied. Logistic regression was used for binary classification, i.e. to estimate the posterior probability of a binary response based on a list of independent predictor variables. This probability is described by a generalized linear model. Odds ratios were calculated for risk assessment. All tests were performed at a two-sided significance level of α = 0.05.
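The tests listed above correspond to standard routines in common statistics libraries. The following Python sketch is illustrative only — it is not the authors' SPSS workflow, and the DataFrame layout, the outcome column name 'aur', and the predictor list are hypothetical — but it shows how the chi-squared comparison of nominal data, the dependent t-test on functional scores, and a logistic regression with odds ratios could be run.

```python
# Illustrative sketch of the described analyses; `df` is a hypothetical pandas DataFrame.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

ALPHA = 0.05  # two-sided significance level used throughout

def nominal_association(df, row, col):
    """Chi-squared test on a contingency table of two nominal variables."""
    table = pd.crosstab(df[row], df[col])
    chi2_stat, p, _, _ = stats.chi2_contingency(table)
    return chi2_stat, p

def paired_change(before, after):
    """Dependent (paired) Student's t-test, e.g. a functional score before vs. after biopsy."""
    return stats.ttest_rel(before, after)

def logistic_odds_ratios(df, outcome, predictors):
    """Logistic regression for a binary outcome; exponentiated coefficients give odds ratios."""
    X = sm.add_constant(df[predictors].astype(float))
    fit = sm.Logit(df[outcome].astype(int), X).fit(disp=0)
    return fit, np.exp(fit.params)
```

A result would then be called significant when its p-value falls below ALPHA, mirroring the two-sided 0.05 threshold stated above.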
to an increased risk of AUR (OR = 2.49 and 2.29, respectively). Using multivariate multiple regression, a significant overall model (p = 0.04) was demonstrated only for AUR, with none of the predictors providing a clear prediction. A significant influence of IPSS ≥ 8 on "Change of IPSS" was shown, although this result is considered random with regard to the insignificant overall model. No statistically significant change in functional scores (IPSS, QoL and ICIQ) occurred in our cohort shortly after biopsy. One patient (0.4%) developed a urinary tract infection (UTI). 66/228 (28.9%) had undergone prostate biopsy previously. 48/66 (84.2%) of these patients favored transperineal robotic-assisted biopsy over all other methods and rated it as the most pleasant biopsy approach. Regarding local conditions, haematoma at the puncture site, local skin infection and bleeding from the puncture site occurred in 8/228 (3.5%), 0/228 (0%) and 10/228 (4.4%), respectively. Detailed data for functional outcome and side effects are summarized in Table 2. Notably, no patients with immunodeficiency developed any infectious complications.
Sub-group-analysis for the functional outcome and side effects and subgroup specifications are summarized in Table 3 and Supplementary Table 1, respectively.
Discussion
To the best of our knowledge, this is the first prospective study to evaluate the safety, tolerability, side effects, and functional outcome of transperineal robotic-assisted prostate biopsy. Transrectal ultrasound-guided biopsy of the prostate is still used as the standard approach for obtaining representative samples for identification and classification of PCa (10). However, the current EAU Guidelines 2022 clearly favor the perineal access route, due to the lower risk of infectious complications (1). Our study reports the outcomes of robotic-assisted perineal biopsy, which requires only two puncture sites. The applied sampling strategy provides histologic evaluation of the entire gland including suspicious lesions (9). Overall, 14% of our patients developed grade I complications according to the Clavien-Dindo classification. The superior tolerability of the RA-TP-PBx is highlighted by the mean value of 1.3 points on the VAS for pain on the day of and 1.2 points on the days after biopsy. TP-PBx performed under general anesthesia also displays a favorable pain profile (VAS 1.3) as compared to TP-PBx (VAS 2) and TR-PBx (VAS 2) in local anesthesia (11). Furthermore, most patients (84.2%) in our cohort who had undergone conventional non-robotic biopsy preferred RA-TP-PBx. Although feasibility of TP-PBx in local anesthesia was shown in various studies (6,12), general anesthesia is recommended for RA-TP-PBx in order to enable maximum diagnostic accuracy. Hematuria and hematospermia were identified as the most common side effects. Rates of occurrence were comparable to other studies reporting side effects of TP-PBx and TR-PBx (13). Notably, none of our patients developed significant gross hematuria requiring bladder irrigation. A further advantage of the TP-PBx is the absence of hematochezia or rectal bleeding, which is described with an incidence of up to 45% in transrectal biopsy (3). In our cohort, the rate of AUR after RA-TP-PBx was 14%, which is comparable to the study of Pepe et al. with 11.1% on saturation TP-PBx with > 24 cores taken (14), yet higher than in studies with lower numbers of biopsy cores taken (10-18), with rates of AUR ranging from 1.4% to 6.7% (15,16). Even though the number of biopsy cores is considered a risk factor for AUR (14), the number of cores (≥25) had no significant impact on the risk of an AUR in our cohort applying a target saturation approach (9). Using multivariate multiple regression, a significant overall model (p = 0.04) for AUR was shown, with none of the predictors providing a clear prediction. RA-TP-PBx allows for complete diagnostic coverage of the prostate via only two puncture sites. This sterile and minimally invasive approach resulted in the occurrence of only one UTI (0.4%) requiring intravenous antibiotic treatment. Notably, this patient had received antibiotic treatment with an oral cephalosporin according to the resistance profile; however, the duration of pretreatment (single dose) turned out to be insufficient, given that the histopathology also revealed acute inflammation. The rate of UTI is comparable to other studies reporting rates of UTI after TP-PBx between 0 and 0.7% (15-17). In contrast, TR-PBx is associated with higher rates of infectious complications, ranging between 2 and 5% despite antibiotic prophylaxis (11,18,19). In line with the study of Günzel et al. (11), omission of standard perioperative antibiotic prophylaxis in TP-PBx did not result in a significant increase in infections.
Notably, none of the immunodeficient patients developed infectious complications, indicating that the sterile and minimally invasive biopsy technique makes it possible to safely omit perioperative antibiotic prophylaxis even in patients at special risk for the development of infectious complications. Requiring no antibiotic prophylaxis helps to reduce the risk of antibiotic-related complications and the development of drug-resistant bacteria. Our results corroborate the findings from other groups (20). However, single-center data, the limited patient number and the non-randomized trial design without a control group represent limitations of this study. Further studies are required to confirm our results. Nevertheless, this work indicates the superior safety profile of robotic-assisted transperineal prostate biopsy as compared to a transrectal approach. We assume that the minimally invasive biopsy technique via only two entry points diminished local tissue trauma and subsequently reduced the risk for infectious complications.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving human participants were reviewed and approved by Ethikkommission Nordwest-und Zentralschweiz. The patients/participants provided their written informed consent to participate in this study.
Author contributions
All authors have conjointly designed the study, and MW, PT, and CW interpreted the data and drafted the manuscript. AM supported data collection and patient care. All authors designed and critically revised the manuscript for important intellectual content. MW, PT, and CW were involved in the statistical analysis. All authors contributed to the article and approved the submitted version.
Conflict of interest
Author CW was supported by grants from Siemens Healthineers and Uromed.
The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Table abbreviations and footnotes: NIH-CPSI, chronic prostatitis symptom index; INF, histology-proven inflammation; MMUD, medication for male urinary dysfunction; AC, anticoagulation; BC, biopsy cores; PV, prostate volume; IPSS, international prostate symptom score; MMR, multivariate multiple regression (overall model); ICIQ, international consultation on incontinence questionnaire; SD, standard deviation; AUR, acute urinary retention. *p-value determined using multivariate multiple regression. #p-value determined by an independent Student's t-test. (1) Change of functional parameters: (−) decrease of score after biopsy, (+) increase of score after biopsy.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
|
v3-fos-license
|
2022-08-26T15:19:49.394Z
|
2022-08-12T00:00:00.000
|
251823052
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://hrmars.com/papers_submitted/14681/comparative-analysis-of-online-student-engagement-across-gender.pdf",
"pdf_hash": "46bafe81fc53a53c5a92822cc237a69964aa11b7",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42469",
"s2fieldsofstudy": [
"Education"
],
"sha1": "e3ec03150a1861ddbc77d2c80f1c057db6c8a3ae",
"year": 2022
}
|
pes2o/s2orc
|
Comparative Analysis of Online Student Engagement across Gender
Student engagement is an important component of learning. Greater engagement has been connected to higher academic achievement among students. Student engagement in collaborative writing is required to ensure the success of the group in achieving its goal. However, due to Covid-19 circumstances, collaborative writing projects among the academic writing students had to be carried out online. The objectives of this study are to assess the overall student engagement in online collaborative writing projects, and to investigate the differences in students' responses based on gender. Male and female student engagement was found to be most influenced by the participation attribute, followed by performance, emotions and skills. Female students participated mainly by engaging in online conversations, while male students preferred to get to know their team members. Both groups, however, did not prefer to make regular posts on online forums. Future research is suggested to include peer evaluation as a method to verify online student engagement in collaborative writing projects.
Introduction
The emergence of Covid-19 has had ramifications across the whole world, with changes in many facets including national economies and the operations of educational institutions. While most economic activities were put on hold to control the spread of Covid-19 in Malaysia, higher education institutions (HEIs) experienced sudden closures, resulting in the largest online movement in the history of education (El Said, 2021). Thus, a shift in the teaching and learning process has taken place, paving the way for the new education system, i.e., online and distance learning (ODL). Students and instructors who are out of their physical classrooms need to adapt to an entirely different experience of the system. As ODL demands the use of apposite and relevant pedagogies, they ought to equip themselves with ample knowledge and skills in information and communication technology (ICT) (Pokhrel & Chhetri, 2021).
Students' and instructors' readiness has been an issue of great concern due to the abrupt transition from face-to-face classes to an online teaching and learning system. One instance highlighted in Chung et al.'s (2020) study was that more than half of the students were reluctant to continue with online learning in the future if given a choice.
Difficult circumstances, particularly poor internet connectivity and difficulty in understanding the content of the subjects, were the main reasons reported by degree and diploma students of one public university in Malaysia. The instructors, on the other hand, expressed their inability to integrate online learning into their present responsibilities, and they claimed that the time to focus on that matter was scarce (Nwagwu, 2020).
Collaborative learning is one of the alternatives to consider in addressing the issue of limited social interactions between peers in online instruction (Sankaranarayanan et al., 2020). On one hand, collaborative work and assignments provide time and tools which accommodate individuals' cognitive styles and learning preferences, thus enabling each member of the group to work collaboratively. However, lack of experience in online collaborative assignments or activities has led students to experience challenges and obstacles in dealing with the tasks (Demosthenous, 2020).
While previous studies have begun to research ways to enhance student engagement in online learning, researchers have gone beyond comparing the traditional mode of education and online learning environment. In particular, the current study aimed to: 1. Assess overall student engagement in online collaborative writing projects; 2. Investigate gender differences in online student engagement of academic writing.
Significance of the Study
The findings of this current study would be useful for ESL instructors to obtain feedback on the level of student engagement with the course content, instructors and other students. The instructors, for example, would know their students' attitudes, thoughts, behaviour and the way they communicate with their peers (Dixson, 2015) while engaging in collaborative learning projects; therefore, evaluation of the course design, i.e., the course assessments, could be conducted. It is worth noting that instructors' roles as facilitators are deemed significant to ensure effective and meaningful student engagement in online learning settings.
Additionally, the teaching effectiveness demonstrated by the instructors could be measured. Evaluating the teaching and learning aspects offers room for improvement pertaining to course level, types of content, student preparedness and so on. Hence, this feeds information into training design for the instructors themselves and simultaneously increases student engagement in the online learning environment.
Literature Review
The importance of social interaction during the teaching and learning process has been widely considered, since it provides benefits to the individual when interacting with other people. A review of studies of social interaction in collaborative learning found that communication is a key element of a successful learning experience, whether in a traditional classroom setting or an online learning environment (Hussin et al., 2019). Active interaction involves the activities of sharing ideas, discussing, negotiating, exchanging opinions and making decisions, which promote the learners' knowledge and thinking skills. Effective interaction between learners and instructors or peers also enables them to acquire knowledge better and to experience a productive learning environment. With the advance of technology and the occurrence of the Covid-19 pandemic in recent years, online learning environments have been practised throughout the world, and the interaction between learners and technology has also become a focus of studies.
Collaborative Writing
Collaboration is defined as an action where two or more learners pool knowledge, resources and expertise from different sources in order to reach a common goal (Scoular et al., 2020). It requires the learners to work together on the same task through the division of labour which may involve interdependent tasks. In collaborative classrooms, the teaching activity takes place with other processes that are based on learners' active work of the course material. By using the information or ideas, the learners experience social stimulation of mutual exploration, meaning-making, and feedback which leads to better understanding of the problem and the creation of new understandings (Smith & MacGregor, 1992).
Collaborative writing refers to the co-authoring of a text by two or more writers, which involves a commonly negotiated and shared decision-making process that can create a sense of shared ownership of the text produced in groups (Storch, 2013). Collaborative writing requires learners to participate in small groups and to be equally responsible for accomplishing the writing task together by exchanging ideas and solving the problems that arise during writing. The outcome of collaborative writing is a collective cognition that includes new vocabulary, enhanced expression of ideas and knowledge of grammar, which are developed through the learners' insights and cannot be traced back to one individual's contribution (Anshu & Yesuf, 2022).
A study by Malette and Ackler (2018) investigated the impacts of interventions on engineering students' collaborative writing by focusing on their experiences through a series of interviews, and concluded that women writers often do more writing during collaborative projects. Since the study also found that women's writing labour is unrecognized or undervalued, the authors suggested that the writing task in a collaborative project be made more visible by requiring each student to contribute to the writing (Malette & Ackler, 2018). This leads to the notion that the evaluation of a collaborative writing task should be assigned individually to each group member rather than assessing the writing product as a whole. In addition, Almahasneh and Abdul-Hamid (2019) asserted that peer assessment is suitable for students as it increases their performance in writing due to the exchange of comments during collaborative writing tasks.
On top of that, Deveci (2018) investigated the views of 64 university students towards satisfaction with a collaborative writing project using a survey and a discourse completion task. The study's analysis found that female students were overall more content with their experience in collaborative writing, since the task had particularly positive effects on their English language and teamwork skills. These findings, however, require further empirical investigation, especially in the setting of online collaborative writing, which promises a new domain of research.
Online Collaborative Writing
The practice of online collaborative writing has raised researchers' interest in investigating its implementation and effectiveness in the teaching and learning process. Limbu and Markauskaite (2015) presented a phenomenographic study that used interviews to explore university students' perceptions of the effects of online collaborative writing. The study concluded that online collaborative writing allows the division of work between students, provides a combination of expertise, enables deeper understanding of content through the fusion of ideas and insights, and offers a means to develop new skills and attitudes for collaborative work and interaction (Limbu & Markauskaite, 2015).
In another study, Nykopp et al. (2019) investigated the ways learners coordinate their collaborative online writing, which they divided into text-related activities, task-related activities and social activities. When learners perform task-related activities, they acquire the relevant information by sharing resources and exchanging ideas, while social activities allow them to maintain a positive group atmosphere by discussing strategies, monitoring the activities and reflecting on the process. The study also found that the learners applied four distinct coordination profiles when completing an online collaborative writing task, classified as text-focused task coordinators, text-focused text coordinators, task and text coordinators, and social coordinators facing technical problems (Nykopp et al., 2019). This shows that learners utilize and experience different collaborative styles and approaches when fulfilling a given online writing task. Bikowski and Vithanage (2016) explored the effectiveness of technology-enabled collaborative writing in improving learners' writing skills and their attitudes towards collaborative writing. Findings revealed that 67 percent of the respondents believed web-based collaborative writing improved their writing experience, while two thirds of them were in favour of online collaborative writing activities as Google Docs assisted them to plan and organize the writing activities and check for grammar points (Bikowski & Vithanage, 2016). Another collaborative writing study examined the use of a Wiki as a platform for composing English language essays by conducting pre-test and post-test written assessments, and concluded that learners showed encouraging improvement in their writing skills (Ithnin et al., 2018). These studies justify the notion that online collaborative writing activity is worthwhile in practice as it promotes students' ability to write and develops their collaborative and social skills.
Learner Engagement
Collaborative writing activities require learners to be engaged in the given task and at the same time to maintain continuous communication among the individuals involved in the group. Learners' success in fulfilling tasks has been linked to their engagement, leading to improved learning outcomes. As defined by Bond and Bedenlier (2019), engagement is the energy and effort that learners apply during learning activities, which can be discerned through behavioural, cognitive or affective indicators. Behavioural indicators are shown by being involved, dedicated and optimistic in activities; cognitive indicators denote understanding, self-control and the practice of deep learning strategies; while affective indicators are displayed by being interested, displaying positive responses to the learning environment and having a sense of belonging (Bond & Bedenlier, 2019).
Studies have examined the impact of student engagement when conducting writing tasks in online class settings (Fredrickson, 2015; Dixson, 2015). Fredrickson (2015) compared measures of learner engagement, learning and satisfaction in an online course and concluded that collaborative writing tasks have a positive impact on learner engagement. However, the study found that collaborative writing tasks and learner satisfaction have negative relationships with learner interaction (Fredrickson, 2015), which requires further investigation of the impact of collaborative writing tasks, especially in the setting of online learning. Another study by Dixson (2015) measured learners' engagement in online writing tasks by using the Online Student Engagement scale (OSE), which focused on learners' active behaviour, thinking processes, feelings about learning and learners' connection with the content, lecturer and other learners in relation to performance, skills, participation and emotion. Even though learner engagement on the OSE has been linked to observational learning behaviour and application learning behaviour, the study found that the OSE has a significant relationship only with application learning behaviour, such as writing an e-mail or answering a quiz. As such, this study seeks to further investigate the application of OSE measures to collaborative writing tasks in an online learning environment.
Research Methodology
The focus of this study was to understand academic writing students' engagement in online classes during the open and distance learning (ODL) semester. A total of 161 students enrolled in an ODL report writing course in the respective higher learning institution. Based on Krejcie and Morgan's (1970) table, the appropriate sample size was between 113 and 114. Therefore, using simple random sampling, a total of 130 responses were gathered. Following this, the Mahalanobis distance was used to find outliers; six cases were found to have p < .001 and were therefore deleted. Another five cases were deleted due to straight-lining issues, leaving a total of 119 cases for further analysis.
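For readers who want to reproduce this kind of screening, the steps described above (a Mahalanobis-distance check against a chi-squared cutoff at p < .001, plus removal of straight-lining cases) can be sketched as follows. This is an illustrative sketch under stated assumptions, not the authors' analysis code; the item-column layout and the zero-variance test used to detect straight-lining are assumptions.

```python
# Minimal sketch: flag multivariate outliers via squared Mahalanobis distance and
# drop straight-liners (respondents who give the same answer to every item).
import numpy as np
import pandas as pd
from scipy.stats import chi2

def screen_responses(df: pd.DataFrame, item_cols, p_cut: float = 0.001) -> pd.DataFrame:
    X = df[item_cols].to_numpy(dtype=float)
    diff = X - X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    # squared Mahalanobis distance of each respondent from the centroid
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
    # compare against a chi-squared distribution with df = number of items
    p_vals = chi2.sf(d2, df=len(item_cols))
    keep_outlier = p_vals >= p_cut          # drop cases with p < .001
    keep_straight = X.std(axis=1) > 0       # drop straight-lining cases
    return df.loc[keep_outlier & keep_straight]
```

Applied to 130 gathered responses, a filter of this kind would yield the retained analysis sample after removing the flagged cases.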
The information required for the main study was gathered using a questionnaire survey. A set of Online Student Engagement Scale items was adapted from Dixson's (2015) research titled "Measuring Student Engagement in the Online Course: The Online Student Engagement Scale (OSE)". All items were measured on a 5-point Likert scale. Therefore, a mean value above 3 indicates a positive perception of the items measuring OSE among male and female respondents, while a mean value below 3 implies a negative perception towards the corresponding items. A standard deviation equal to or greater than 1 shows relatively high variation; in contrast, a value below 1 is taken to indicate low variation. As this study aimed to report the different gender responses on the items measuring OSE, it began with the descriptive analysis of gender and the overall means of OSE responses by gender. OSE responses were then analysed by different attributes (skills, emotions, participation, and performance), which were then distinguished by gender group. The items for each attribute are shown in Table 1.

Table 1. Items Measuring OSE Attributes
Skills: S1 - study regularly; S2 - stay up on reading; S3 - look over class notes; S4 - be organized; S5 - listen/read carefully; S6 - take good notes over readings, PPT, video lectures.
Participation: Part1 - have fun in online chats; Part2 - participate actively in forums; Part3 - help fellow students; Part4 - engage in online conversations; Part5 - post regularly in forum; Part6 - get to know other students.
Emotions: E1 - put forth effort; E2 - find ways to make materials relevant to studies; E3 - apply course materials to studies; E4 - find ways to make material interesting; E5 - really desire to learn.
Performance: Perf1 - get good grades; Perf2 - do well on tests/quizzes.
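The scoring rules stated above (a mean above 3 read as a positive perception, a standard deviation of 1 or more read as high variation) map directly onto a simple per-item, per-gender summary. The sketch below is illustrative only, not the authors' analysis code; the DataFrame layout and the 'gender' column name are assumptions.

```python
# Minimal sketch, assuming a pandas DataFrame `df` with one row per respondent,
# a 'gender' column, and one column per OSE item (S1..S6, E1..E5, Part1..Part6, Perf1, Perf2)
# holding 1-5 Likert responses.
import pandas as pd

def summarise_ose(df: pd.DataFrame, item_cols) -> pd.DataFrame:
    rows = []
    for gender, grp in df.groupby("gender"):
        for item in item_cols:
            mean, sd = grp[item].mean(), grp[item].std()
            rows.append({
                "gender": gender,
                "item": item,
                "mean": round(mean, 2),
                "sd": round(sd, 2),
                # interpretation rules stated in the text
                "perception": "positive" if mean > 3 else "negative",
                "variation": "high" if sd >= 1 else "low",
            })
    return pd.DataFrame(rows)
```

Averaging the item means within each attribute group would then give the attribute-level comparison reported in the following sections.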
Demographic Analysis
In order to make a comparison between male and female academic writing students in the particular semester, a demographic analysis was conducted and the findings are tabulated in Table 2. As shown in Table 2, 19.3 percent (23) of the respondents were male and 80.7 percent (96) were female. The respondent distribution was proportionate to the population, in which 20 percent of students were male and 80 percent female.
Data Analysis
The analysis of data was conducted in phases to address each objective. The first objective was to assess the overall student engagement in online collaborative writing projects, followed by the investigation of gender differences in online student engagement in academic writing. In the analysis of the overall OSE, the data distributions were tabulated into different engagement attributes, namely skills, emotions, participation, and performance. This was followed by the analysis of each attribute by gender.
Overall OSE Attribute Responses
In general, students' responses on the OSE attributes were expected to differ between the two genders. However, although male respondents recorded lower mean values than female respondents for each attribute, the order of the four attributes was the same. Table 3 shows the overall comparison of OSE attribute responses by gender. From the analysis, the highest mean value among the OSE attributes was recorded for participation (4.15), followed by performance (4.04), emotion (4.00), and skills (3.92). The attribute responses by gender followed the same sequence. By gender, the participation attribute was placed first in the sequence based on mean values of 4.18 (female) and 3.91 (male). The second attribute in the sequence was performance for both genders, with mean values of 4.07 (female) and 3.91 (male). The lowest mean values were recorded for the skills attribute, with 3.99 (female) and 3.64 (male). Emotion, on the other hand, recorded higher mean values than skills, with 3.88 (male) and 4.08 (female).
Gender Comparison of OSE Responses by Items in Different Attributes
The subsequent analysis was on the items for each attribute. As depicted in Table 4, the six items measuring the skills attribute were analysed and presented by gender. Table 4 shows the gender comparison of the OSE based on the skills attribute. The highest and lowest means were consistent between male and female respondents. The highest means for both groups were from item S6 (taking good notes over readings, PPT, video lectures), with mean values of 4.18 (female) and 3.87 (male), while the lowest means for male (M=3.35) and female respondents (M=3.78) were recorded from item S2 (staying up on reading). Meanwhile, S5 (listening/reading carefully) was recorded as the second highest mean by female respondents (M=4.07), and this item was placed third by male respondents together with item S1 (study regularly), each with a mean value of 3.70. This makes S3 (looking over class notes) the second preferred activity among male respondents, with a mean value of 3.74. On the contrary, the third-ranked item among female respondents was S4 (being organized), with a mean value of 4.01.

The second attribute measuring students' OSE is emotions, and the analysis results are shown in Table 5. Between the two genders, female respondents scored the highest mean value on E2, which relates to finding ways to make materials relevant, with a value of 4.10. Male respondents, on the other hand, recorded the highest value from two items, E2 and E4 (M=4.04), which relate to finding ways to make materials relevant to the studies and to make materials interesting. Both genders, however, shared the same lowest mean from item E5, on the desire to learn, with 3.91 (female) and 3.70 (male). Compared to the male respondents, who rated E4 as one of the highest items, female respondents rated E4 as the second lowest, with a mean value of 3.98.

The next attribute in the OSE is participation, which is represented by six items. The highest mean value recorded by female respondents was from Part4 (engage in online conversation, M=4.37), while male respondents recorded the highest value of 4.13 from Part6 (get to know other students). Despite the difference in the top-ranked item in the participation attribute, both female and male respondents ranked Part5 (post regularly in forum) as the lowest, with mean values of 3.89 and 3.74, respectively. Part3, helping fellow students, was ranked second by both genders (female=4.35; male=4.09), while Part2 (participate actively in forums) received an equal mean value of 4.09 from male respondents.

Finally, the last attribute in the OSE is performance, represented by two items, Perf1 and Perf2. While female respondents recorded the higher mean value for Perf2 (do well on tests/quizzes; M=4.10) and a lower mean value of 4.03 for Perf1, male respondents recorded the higher mean value for Perf1 (get good grades; M=3.96) and the lower value for Perf2.
Conclusion
The analysis of data from the survey provided the findings for the proposed objectives: (i) to assess overall student engagement in online collaborative writing projects, and (ii) to investigate gender differences in online student engagement in academic writing. Firstly, the online student engagement in the academic collaborative writing project demonstrated by the students in the current semester shows similar patterns across both genders. The biggest attribute influencing the OSE among the target group is participation, followed by performance and emotion. In contrast, the least influential attribute on the OSE in general is skills. The same order of attributes was found for both genders.
The findings suggest that students' participation during online collaborative writing projects contributes significantly towards their engagement. Despite both genders' similar attitudes towards participation, female students prefer engaging in online conversations, whereas male students prefer to build rapport with other students in order to grow engagement in online collaborative writing. In contrast, making regular posts in online forums is the least preferred activity for both male and female students. These findings are consistent with the discovery by Hussin et al. (2019), which emphasizes the importance of social interaction during the teaching and learning process, particularly active interactions. Active interactions, according to these researchers, include the activities of sharing ideas, discussing, negotiating, exchanging opinions, and making decisions.
Besides participation, the emotion attribute shows a rather significant influence on online engagement based on previous studies. Mallette and Ackler (2018) previously revealed that women often do more writing during collaborative writing projects compared to men. This supports the current finding that female respondents prefer putting forth effort as a way to engage in online projects. Additionally, female students have been found to feel more satisfied with their experience in collaborative writing since it mainly contributed to their English and teamwork skills (Deveci, 2018).
For future research, researchers are encouraged to include peer evaluation as a method to verify online student engagement in collaborative writing projects. Peer evaluation will provide more insight into team members' engagement, especially in terms of their participation, emotion, skills and performance attributes.
|
v3-fos-license
|
2020-04-24T14:39:01.969Z
|
2020-04-23T00:00:00.000
|
216085851
|
{
"extfieldsofstudy": [
"Medicine",
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://gutpathogens.biomedcentral.com/track/pdf/10.1186/s13099-020-00361-w",
"pdf_hash": "aa894b34914db3dcec6db345c49e2460b6e527a3",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42471",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "e70ece9a61c308231137f8b6aef07f757cf32e05",
"year": 2020
}
|
pes2o/s2orc
|
EV71 virus reduces Nrf2 activation to promote production of reactive oxygen species in infected cells
Background: Emerging evidence closely links Enterovirus 71 (EV71) infection with the generation of reactive oxygen species (ROS). Excess ROS results in apoptosis and exacerbates inflammatory reactions. The Keap1–Nrf2 axis serves as an essential oxidant counteracting pathway. Methods: The present study aimed to elucidate the role of the Keap1–Nrf2 pathway in modulating apoptosis and inflammatory reactions triggered by oxidative stress in Vero and RD cells upon EV71 infection. Results: Elevated ROS production was identified in EV71 infected Vero and RD cells. The percentage of dead cells and expression of inflammation-promoting cytokines were increased in these cells. EV71 infected cells also displayed reinforced Keap1 expression and abrogated Nrf2 expression. Keap1 silencing resulted in the downstream aggregation of the Nrf2 protein and heme oxygenase-1 HO-1. Keap1 silencing repressed ubiquitination and reinforced Nrf2 nuclear trafficking. Furthermore, silencing Keap1 expression repressed ROS production, cell death, and inflammatory reactions in EV71 infected RD and Vero cells. In contrast, silencing of both Keap1 and Nrf2 restored ROS production, cell death, and inflammatory reactions. Nrf2 and Keap1 modulated the stimulation of the Akt sensor and extrinsic as well as intrinsic cell death pathways, resulting in EV71-triggered cell death and inflammatory reactions. Conclusions: EV71 infection can trigger ROS production, cell death, and inflammatory reactions by modulating the Nrf2 and Keap1 levels of infected cells.
Background
Hand, foot, and mouth disease (HFMD) is a viral infection that frequently occurs in infants and children. Common symptoms are blisters and flu-like symptoms [1-3]. HFMD is caused by several enteroviruses, including coxsackievirus A16 and enterovirus 71 (EV71) [4-6]. EV71 is a single positive-strand RNA virus that belongs to the Enterovirus genus of the Picornaviridae family [7,8]. EV71 infections are frequently linked to aggressive pulmonary, gastrointestinal, and neurological malfunctions in children. Additionally, the boosted generation of, and reaction to, inflammation-promoting cytokines and chemokines influences the severity of EV71 infection [9].
Nuclear factor (erythroid-derived 2)-like 2 (Nrf2) and Kelch-like ECH-associated protein 1 (Keap1) have attracted attention concerning reactive oxygen species (ROS)-linked etiology.Expression of detoxifying enzymes (DEs) and antioxidant enzymes (AEs) is triggered by Nrf2, which is essential in the defense of vertebrates from stress in their surroundings [10].Nrf2 can also enhance the activity of DE and AE related genes in protective responses to stresses that include
ROS, reactive nitrogen species (RNS), and electrophiles [11,12]. In contrast, the dominant feature of Keap1 is its role as an oxidative stress (OS) sensor that specifically targets Nrf2, acting as the substrate-recognizing subunit of an E3 ubiquitin ligase. Keap1 promotes degradation via the ubiquitin-proteasome system to repress Nrf2 in the absence of stress. Modification of Keap1 cysteine residues reduces Nrf2 ubiquitination in the presence of electrophiles or OS. The Nrf2 protein then triggers target gene expression via intracellular accumulation, which protects cells against surrounding stress.
ROS are crucial signaling agents that are essential for the development of inflammatory diseases [10].Multiple downstream effects of reinforced OS (promotes ROS generation) are directly related to the stimulation of multiple inflammation cascades [11,12].The interaction between the inflammatory reactions and ROS has been recently investigated, with ROS arising from the mitochondria directly triggering agents that reinforce the expression of inflammatory cytokines via distinct pathways [13,14].Both ROS and mitochondria are crucial to stimulate cell death in physiologic and pathologic circumstances.ROS both arises from mitochondria and affects mitochondria.Cytochrome c generated from mitochondria stimulates caspases and seems to be dominantly regulated by ROS, either directly or indirectly [15].ROS can modulate cell death at the transcription level by repressing the expression of viability-promoting proteins, including inhibitor of apoptosis proteins (IAPs), B cell lymphoma 2 (Bcl-2), survivin, and Bcl-XL, and reinforcing the expression of cell death-promoting agents [16].ROS also stimulate the transcription of cell death-promoting genes that are critical in triggering intrinsic cell death pathways, including p53 upregulated modulator of apoptosis (Puma), Apoptotic protease activating factor 1 (Apaf-1), bcl-2-like protein 4 (Bax), Noxa, and BH3 interactingdomain death agonist (Bid), apart from extrinsic cell death-promoting agents, including Fas, Death receptor 4 (DR-4), Fas-L, and DR-5 [16].The exact mechanisms of ROS-related inflammatory reactions and cell death in EV71 infection are unclear.
Our research explored the effect of EV71 infection on the stimulation and expression of Keap1-Nrf2 axis members using cell-based experiments.Furthermore, we elucidated the effect of Nrf2 and Keap1 on ROS production triggered by EV71 infection, and the effect of this ROS production on cell death, inflammation-promoting cytokine generation, and related signals.The findings revealed that the Keap1-Nrf2 axis is a crucial regulator of EV71-triggered ROS generation, inflammatory reactions, and cell death, with a crucial effect on viral replication.
Cell cultivation
RD and Vero cells were provided by the American Type Culture Collection and were cultured in Dulbecco's modified Eagle's medium (DMEM; Gibco) containing penicillin-streptomycin (2% v/v) and fetal bovine serum (FBS, 10%; Gibco) at 37 °C in an atmosphere of 5% CO2.
Virus propagation
Human EV71 (GenBank accession number AF30299.1) stocks were produced in Vero cells, which were infected and then inoculated onto dishes (10 cm²). Vero cells were grown to near 80% confluency and were infected with EV71 virus diluted in DMEM. After a 1.5-h adsorption at 37 °C in a 5% CO2 atmosphere, the cells received DMEM containing 2% FBS. Infection continued until the monolayer demonstrated a cytopathic effect (CPE), 1 or 2 days after the infection. The cells and cultivation media were collected in a conical polypropylene tube and were subjected to three freeze-thaw cycles. The final cell suspension was centrifuged for 10 min at 4500 rpm. The supernatant was removed, added to cryovials, and preserved at −80 °C.
TCID 50 titration
The 50% tissue culture infectious dose (TCID50) titers were determined per ml. Briefly, Vero cells were seeded in 96-well plates (5 × 10³/well) 1 day prior to infection. Viruses were serially diluted (10²- to 10⁷-fold) in DMEM containing 2% FBS and subsequently added to the wells. Plates were incubated for 2 to 5 days at 37 °C in a 5% CO2 atmosphere. The CPE was assessed by microscopy after the 2- to 5-day infection. The virus titer (TCID50) was calculated using the Reed-Muench endpoint calculation approach.
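The Reed-Muench endpoint calculation mentioned above can be illustrated with a short script; the dilution series, well counts, and 0.1 ml inoculum volume below are hypothetical values chosen for illustration only, not data from this study.

```python
def reed_muench_tcid50(dilution_exponents, infected, total, inoculum_ml=0.1):
    """Estimate TCID50/ml with the Reed-Muench method.

    dilution_exponents: log10 of each dilution (e.g. -2 for a 10^-2 dilution)
    infected, total: wells showing CPE and wells inoculated per dilution
    """
    # Cumulative infected summed from the most dilute row upward,
    # cumulative uninfected summed from the most concentrated row downward.
    cum_inf = [sum(infected[i:]) for i in range(len(infected))]
    cum_uninf = [sum(t - n for t, n in zip(total[:i + 1], infected[:i + 1]))
                 for i in range(len(infected))]
    rates = [ci / (ci + cu) for ci, cu in zip(cum_inf, cum_uninf)]

    # Find the two dilutions bracketing the 50% infection rate.
    for i in range(len(rates) - 1):
        if rates[i] >= 0.5 > rates[i + 1]:
            prop = (rates[i] - 0.5) / (rates[i] - rates[i + 1])
            step = dilution_exponents[i] - dilution_exponents[i + 1]
            log_id50 = dilution_exponents[i] - prop * step
            return 10 ** (-log_id50) / inoculum_ml  # TCID50 per ml
    raise ValueError("50% endpoint not bracketed by the dilution series")

# Hypothetical example: 8 wells per dilution, 10^-2 .. 10^-7
print(reed_muench_tcid50([-2, -3, -4, -5, -6, -7],
                         [8, 8, 6, 3, 1, 0],
                         [8, 8, 8, 8, 8, 8]))   # ~5e5 TCID50/ml
```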
EV71 infection
RD and Vero cells were infected with EV71 virus.Cell monolayers cultivated in 10 cm-diameter dishes to 50% confluency were treated with EV71 viruses at a multiplicity of infection (MOI) of 5 TCID 50 /cell.DMEM without FBS was used to wash the cells following a 1 h adsorption at 37 °C in a 5% CO 2 atmosphere to eliminate virus that had not adhered.The cells then received fresh DMEM containing 10% FBS.The cells were sampled at defined times and analyzed.
Small interfering RNA (siRNA) transfection
Cells were transiently transfected with Keap1 siRNA and/or Nrf2 siRNA using Accell siRNA delivery medium (Dharmacon, USA) according to the manufacturer's instructions. Cells (2 × 10⁵/well) were added to wells of 12-well plates and cultivated overnight at 37 °C in an atmosphere of 5% CO2. Then, 2 µM Keap1 and Nrf2 siRNA (SMARTpool, Dharmacon) in 1× siRNA buffer (Dharmacon) were added to each well, and the cells were cultivated for 72 h at 37 °C and 5% CO2 and then lysed. Keap1 and/or Nrf2 silencing efficiency was examined by qPCR.
Western blotting
Cell lysates were collected using RIPA buffer. Proteins were resolved by 10% SDS-PAGE and transferred to Immobilon polyvinylidene difluoride membranes with a pore size of 0.45 µm. Each membrane was blocked for 60 min using 5% bovine serum albumin at 25 °C. Primary antibody was added and incubated at 4 °C for 16 h, followed by the addition of secondary antibody for 1 h at 4 °C. Enhanced chemiluminescence was performed using the SuperSignal® West Femto Maximum Sensitivity Substrate Kit (Thermo Fisher, Waltham, MA, USA) and a C-DiGit® Blot Scanner (LiCor, USA).
Quantitative real-time PCR (qPCR)
RNA was isolated from RD and Vero cells using TRIzol reagent (15596026, Invitrogen™). The transcription of the various genes was quantified using SYBR Green master mix. Glyceraldehyde-3-phosphate dehydrogenase (GAPDH) served as the internal control. qPCR was carried out in a reaction volume of 0.02 mL in a real-time PCR system (Roche, Switzerland) using SYBR Green PCR master mix (Thermo Fisher Scientific). qPCR was conducted at 95 °C for 10 min, followed by 40 cycles of 60 °C for 15 s and 72 °C for 30 s. Copy numbers of the target genes were evaluated using the comparative CT approach (2^−ΔΔCT) with an internal reference. The primer sequences are presented in Table 1.
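As a concrete illustration of the comparative CT (2^−ΔΔCT) approach described above, the following sketch computes relative expression against the GAPDH reference; the Ct values are made-up numbers for illustration, not measurements from this study.

```python
def relative_expression(ct_target_sample, ct_gapdh_sample,
                        ct_target_control, ct_gapdh_control):
    """Fold change of a target gene vs. control using 2^-ddCt, GAPDH as reference."""
    d_ct_sample = ct_target_sample - ct_gapdh_sample      # normalise to GAPDH
    d_ct_control = ct_target_control - ct_gapdh_control
    dd_ct = d_ct_sample - d_ct_control                    # compare infected vs. control
    return 2 ** (-dd_ct)

# Hypothetical Ct values: IL-6 in EV71-infected vs. uninfected cells
print(relative_expression(24.1, 17.8, 27.6, 17.9))   # ~10-fold upregulation
```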
ROS production
ROS production in cells was examined using the 2′,7′-dichlorofluorescein diacetate (DCFH-DA) fluorescence probe. Following infection with EV71 for a defined time, the cells were incubated with DCFH-DA (10 µmol/L) for 0.5 h in the dark at 37 °C. Fluorescence intensity was assessed using excitation and emission wavelengths of 488 and 525 nm, respectively, on a model BX51 fluorescence microscope (Olympus, Japan).
Terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) assay
The TUNEL fluorescence kit (Roche) was used for cell staining and assessment of cell death.4′,6-Diamidino-2-phenylindole (DAPI, 1:5000; Beyotime, China) was used to stain the nuclei of cells.Cell death was assessed by calculating the number of TUNEL positive cells using a model SP8 laser scanning confocal microscope (Leica, Japan).
Analysis of cell death
Cell death was assessed using Annexin V/propidium iodide (PI) staining and flow cytometry as previously described [17].
Statistical analyses
Results are presented as the average ± standard deviation (SD). Differences among multiple groups were assessed using ANOVA with Tukey's post hoc test; differences between two groups were assessed using the two-tailed t-test. A P-value < 0.05 was regarded as significant.
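The group comparisons described above (ANOVA with Tukey's post hoc test for multiple groups, a two-tailed t-test for pairwise comparisons) can be reproduced with standard SciPy routines. The arrays below are placeholder measurements, and scipy.stats.tukey_hsd requires SciPy 1.11 or later.

```python
import numpy as np
from scipy import stats

# Placeholder replicate measurements (e.g., relative ROS fluorescence) for three groups
control = np.array([1.00, 1.05, 0.95, 1.02])
ev71 = np.array([2.10, 2.25, 1.95, 2.18])
ev71_sikeap1 = np.array([1.40, 1.35, 1.52, 1.45])

# One-way ANOVA across all groups, then Tukey's HSD for pairwise post hoc tests
f_stat, p_anova = stats.f_oneway(control, ev71, ev71_sikeap1)
tukey = stats.tukey_hsd(control, ev71, ev71_sikeap1)

# Two-tailed t-test for a single two-group comparison
t_stat, p_ttest = stats.ttest_ind(control, ev71)

print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4f}")
print(tukey)
print(f"t-test control vs EV71: t={t_stat:.2f}, p={p_ttest:.4f}")
```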
Ethics statement
This study was approved by the Ethics Committee of China-Japan Union Hospital, Jilin University.
EV71 infection stimulates ROS production, cell death, and inflammatory reaction of infected cells
RD and Vero cells were infected with EV71 virus at an MOI of 5. To assess the efficiency of infection, VP1 expression was examined and found to be upregulated at both the mRNA and protein levels in the infected cells (Fig. 1a-c). Since ROS generation induces apoptosis and inflammation, we determined ROS generation in EV71-infected RD and Vero cells. EV71 infection resulted in remarkably increased ROS production in both cell types (Fig. 1d).

Table 1. Sequences of primers
Next we determined the effect of EV71 infection on apoptosis and inflammation.Annexin V-FITC/PI flow cytometry revealed increased apoptosis in EV71 infected RD and Vero cells (Fig. 1e, f ).Since excessive and robust pro-inflammatory cytokine production increases the severity of EV71 infection [10] and is highly relevant for apoptosis [20], production of these cytokines was investigated.The qPCR and ELISA data confirmed the significant increases in the levels of
IL-1β, IL-6, and TNF-α mRNA and protein after EV71 infection (Fig. 1g, h).
EV71 infection mediates the Keap1-Nrf2 pathway
Since the Keap1-Nrf2 axis plays an essential role in the regulation of ROS production, we determined the expression levels of Keap1 and Nrf2 in EV71-infected cells. Keap1 expression was reinforced in the infected cells, while the Nrf2 protein level was downregulated (Fig. 2a, b). qPCR was performed to assess the mRNA levels of Keap1 and Nrf2. The level of Keap1 mRNA was significantly increased in EV71-infected cells, while Nrf2 mRNA did not show any difference among the groups (Fig. 2c, d). These data suggested that Nrf2 was regulated at the protein, rather than the mRNA, level in the infected cells.
Silencing of Keap1 in EV71-infected cells influences downstream Nrf2 and HO-1 signals
To verify the effect of Keap1 on Nrf2 signal transduction during EV71 infection, Keap1 was silenced in Vero and RD cells prior to EV71 infection.Keap1 was downregulated upon transfection with Keap1 siRNA, whereas the downstream Nrf2 and HO-1 proteins were upregulated (Fig. 3a, b).EV71 infection resulted in the remarkable increase in Nrf2 ubiquitination, which contributed to Nrf2 reduction.In contrast, ubiquitination of Nrf2 was downregulated upon Keap1 depletion (Fig. 3c, d).Silencing of Keap1 significantly decreased Keap1 mRNA expression, while increasing HO-1 mRNA expression.However, Nrf2 mRNA expression was not altered (Fig. 3e-g).Since Nrf2 is mainly expressed in the nucleus, its subcellular localization was examined in each group using an immunofluorescence assay.EV71 infection resulted in the remarkable repression of Nrf2 nuclear translocation, but KEAP1 silencing in EV71 infected cells restored Nrf2 localization in the nucleus (Fig. 3h, i).These findings suggested that silencing of Keap1 influences the increased Nrf2 and HO-1 signaling in EV71-infected cells.
Silencing of Keap1 regulates ROS generation, apoptosis, and inflammation in EV71-infected cells
We further determined the effect of Keap1 silencing on ROS generation, apoptosis, and inflammation in EV71infected cells.Excessive production of ROS following EV71 infection was significantly decreased after Keap1 silencing (Fig. 4a).Annexin V-fluorescein isothiocyanate and PI flow cytometry analyses showed that the apoptosis rate of infected Vero and RD cells was reduced due to KEAP1 downregulation (Fig. 4b, c).Findings of qPCR and ELISA suggested that Keap1 depletion substantially lessened the production of inflammation-promoting cytokines in EV71 infected RD and Vero cells at the mRNA and protein levels (Fig. 4d-i).The collective
findings suggested that Keap1 modulates ROS production, apoptosis, and inflammation in EV71-infected RD and Vero cells.
Nrf2 expression is responsible for Keap1-regulated, EV71-induced ROS generation
Keap1 and Nrf2 were co-silenced in EV71-infected cells to explore the effect of Nrf2 on ROS production, cell death, and inflammatory reactions. Nrf2 and downstream HO-1 expression were remarkably lessened at both the mRNA and protein levels subsequent to Nrf2 silencing, while Keap1 expression was unaffected even after Nrf2 silencing (Fig. 5a-e). These findings indicated that Nrf2 and Keap1 have appreciable effects on signaling pathways of RD and Vero cells infected with EV71. We then evaluated the effect of the co-silencing of Keap1 and Nrf2 on ROS production, apoptosis, and inflammation of EV71-infected RD and Vero cells. Nrf2 silencing restored the ROS production that had been reduced by Keap1 depletion (Fig. 6a). Flow cytometry data indicated that Nrf2 silencing increased the proportion of apoptotic cells in EV71-infected RD and Vero populations, reversing the reduction caused by Keap1 silencing (Fig. 6b, c). In addition, robust pro-inflammatory cytokine production was observed in the Keap1 and Nrf2 co-silencing group at both the mRNA and protein levels using qPCR and ELISA (Fig. 6d-i). These data demonstrated that Nrf2 is responsible for Keap1-mediated, EV71-induced cellular dysfunction. Because Akt phosphorylation is positively correlated with apoptosis and inflammation [18-20], we next assessed the effect of Nrf2 silencing on the activation of the Akt inflammatory sensor and the levels of proteins associated with cell death, including Caspase-3, Fas-L, Bax, and Fas. EV71 infection remarkably reinforced the expression of all these proteins (Fig. 7a, b), while depletion of Keap1 downregulated their expression. Notably, co-silencing of Nrf2 restored the cellular levels of these proteins in the Keap1-silenced cells. These findings demonstrated that Keap1 can enhance EV71-triggered cell death and inflammation by reinforcing the concentrations of the cell death-promoting proteins and Akt phosphorylation, while Nrf2 downregulates them.
EV71 propagation is regulated by Keap1 and Nrf2
Based on the above observations, we hypothesized that EV71 replication is regulated by Keap1 and Nrf2 silencing.To explore this, we assessed the virus replication rate in infected cells after Keap1 and/or Nrf2 silencing.Silencing of Keap1 reduced viral replication in RD and Vero cells, whereas Nrf2 silencing and co-silencing restored the virus titer 12 to 72 h post-infection (Fig. 8a, b).These results suggested that Nrf2 downregulation is required for efficient EV71 propagation.
Effect of ROS on EV71-induced apoptosis and inflammation
The effect of ROS was examined by treating EV71-infected cells with 10 μM of the ROS inhibitor N-acetyl-L-cysteine (NAC). Measurements of apoptosis and inflammation showed that NAC treatment contributed to a reduction in the number of EV71-induced apoptotic cells (Fig. 9a) and in the production of inflammatory factors (Fig. 9b), suggesting that ROS promote EV71-induced apoptosis and inflammation.
Discussion
Our data showed that EV71 infection triggered ROS generation, apoptosis, and inflammation of infected cells, and upregulated the expression of Keap1 but reduced the level of Nrf2.The induced ROS, apoptosis, and inflammation in the infected cells were decreased by Keap1
silencing but were restored by co-silencing of Keap1 and Nrf2 in Vero and RD cells.Additionally, reduced viral replication was observed after Keap1 silencing, while virus propagation was recovered by Nrf2 silencing.These findings strongly suggest that Keap1-Nrf2 axis exerts its regulatory effect on EV71 replication by inducing ROS production, apoptosis, and inflammation.Redox homeostasis is an essential host factor contributing to the prognosis of infectious diseases.ROS generation is triggered by EV71 infection, which in turn reinforces viral replication [21].Knowledge of the mechanism of EV71-triggered ROS generation is insufficient.Multiple viral and mitochondrial proteins influence one another, and can induce mitochondrial malfunction as well as the production of ROS.For instance, hepatitis B virus X protein (HBx, a hepatitis B viral protein) has an effect on mitochondrial heat shock proteins 60 and 70, and on voltage dependent anion channel 3 [22,23], and
triggers OS [24].The core protein of HCV binds to mitochondria and reinforces OS [25][26][27].The PB1-F2 protein of Influenza A virus targets mitochondria and induces abnormalities [28,29].EV71 infection reportedly triggers mitochondrial ROS production, which is crucial to viral replication and the consequent reduced efficiency of energy generation; the biogenesis of mitochondria is enhanced in infected cells to make up for the infectionrelated malfunction [30].EV71 infection also increases ROS production [21,30].Nonetheless, the mechanism underlying ROS generation in cells infected by EV71 remains elusive, as is the ROS involved in the infection.Our research revealed that EV71 infection leads to a robust production of ROS, which may be involved in enhanced cellular apoptosis and viral replication.Administration of NAC ameliorated the apoptosis and inflammation of Vero and RD cells induced by EV71 infection.A plausible hypothesis is that apoptosis induced by ROS generation weakens the cellular membrane and facilitates the release of the viral particles.However, we did not identify the type of ROS involved in the EV71-mediated apoptosis and inflammation in Vero and RD cells, due to the non-specific nature of the DCFH-DA assay and the ROS scavenger NAC.The identification of ROS species during EV71-infected Vero and RD cells will be investigated in a future study.
Several studies have demonstrated the anti-oxidation and anti-inflammation effects of Nrf2 activation, and the generation of ROS during endogenous or exogenous cell oxidation stress. Nrf2 is dominantly bound to and ubiquitinated by Keap1, and it localizes in the cytoplasm under physiological conditions. Nevertheless, Nrf2 can be translocated to the nucleus, where it triggers the transcription of cytoprotective genes in response to electrophiles and OS. A majority of viruses bring about OS and reinforce the activities of radicals as well as ROS. These events cause the cellular immune system to stimulate Nrf2 and upregulate cytoprotective genes [31]. For instance, HO-1 is downregulated during the replication of Zika virus through regulation of Nrf2 transcription factor expression or activity [32]. HBV [33], HCV [34], Dengue virus [35], Human immunodeficiency virus [36], Respiratory syncytial virus [37], and Marburg virus [38] activate Nrf2 and then elicit expression of antioxidant response genes, including NQO1, GSPT2, and HO-1. Our data demonstrated that EV71 infection enhances Keap1 expression in RD and Vero cells; however, the mechanism by which EV71 enhances Keap1 expression was not identified. This mechanism might be related to EV71 survival and remains a limitation of the present study.
Conclusions
Our data strongly support a critical role of Keap1-Nrf2 signaling in EV71 proliferation in infected Vero and RD cells. Whether Keap1 and Nrf2 directly affect EV71 proliferation through apoptotic pathways or other signaling pathways requires further investigation. In the future, we will perform in vivo experiments in Keap1 and Nrf2 knockdown/knockout murine models to better understand these mechanisms.
* Correspondence: hongyanliyx@163.com
† Zhenzi Bai and Xiaonan Zhao contributed equally to this work and should be considered equal first co-authors
Infectious Department, China-Japan Union Hospital, Jilin University, No. 126, Xiantai Street, Economic Development Zone, Changchun 130033, Jilin, China
Fig. 1 EV71 infection induces ROS generation, apoptosis, and inflammation of infected cells.a-c Reinforced mRNA and protein of VP1 in EV71-infected Vero and RD cells was identified in comparison to non-infected cells (Control) using western blotting and qPCR analyses.d ROS generation was examined in infected Vero and RD cells as evidenced by DCF fluorescence intensity assay compared to non-infected normal mice (Control).e, f Annexin V-FITC and PI flow cytometry was performed to assess the number of apoptotic Vero and RD cells.The upper right quadrant of every plot represents early dead cells.g qPCR analyses of the inflammation-promoting cytokines IL-1β, IL-6, and TNFα produced by infected cells.h The protein expression levels of IL-1β, IL-6, and TNFα of the infected cell were quantified using ELISA.Data are presented as mean ± SD. *P < 0.05, **P < 0.01, ***P < 0.001 vs.Control group
Fig. 2 EV71 infection upregulates Keap1 but downregulates Nrf2 levels.a, b Western blot to assess Keap1 and Nrf2 protein expression in the EV71-infected cells.c, d qPCR analysis to assess Keap1 and Nrf2 mRNA levels in the EV71-infected cells.Data are presented as mean ± SD. *P < 0.05, **P < 0.01 vs.Control group
Fig. 4 Keap1 silencing reduces ROS generation, apoptosis, and inflammation in EV71-infected cells.Vero and RD cells were infected by EV71 at an MOI of 5 subsequent to 24 h transfection using Keap1 siRNA vector.a DCF fluorescence intensity indicated ROS generation in the infected RD and Vero cells in comparison to non-infected normal mice (Control).b, c Quantity of dead RD as well as Vero cells using Annexin V-FITC and PI flow cytometry.The upper right quadrant of every plot stood for early dead cells.d-f Expression of inflammation-promoting cytokines IL-1β, IL-6, and TNF-α in the infected cells were assessed by qPCR.g-i The protein expression levels of IL-1β, IL-6, and TNF-α of the infected cells were quantified using ELISA.Data are presented as mean ± SD. *P < 0.05, **P < 0.01, ***P < 0.001 vs. Indicated group
Fig. 5 Keap1 and Nrf2 co-silencing blocks Keap1-Nrf2-HO-1 transduction in EV71-infected cells.Subsequent to 24 h co-transfection with Keap1 and Nrf2 siRNA vectors, Vero and RD cells were infected by EV71 at an MOI of 5. a, b Western blot was used to examine the expression of Keap1, Nrf2, and HO-1 proteins in infected and non-infected cells.c-e qPCR was used to assess the mRNA of Keap1, Nrf2, and HO-1 in infected and non-infected cells.Data are presented as mean ± SD. *P < 0.05, **P < 0.01 vs. Indicated group
Fig. 6 Nrf2 silencing restores ROS generation, apoptosis, and inflammation of EV71-infected cells with Keap1 silencing.Subsequent to 24 h co-transfection with the Keap1 and Nrf2 siRNA vector, Vero and RD cells were infected by EV71 at an MOI of 5. a DCF fluorescence intensity indicated ROS generation in infected RD and Vero cells in comparison to the non-infected normal mice (Control).b, c Dead RD and Vero cells were enumerated using Annexin V-FITC and PI flow cytometry.The upper right quadrant of every plot stood for early dead cells.d-f Expression of inflammation-promoting cytokines IL-1β, IL-6, and TNFα in the infected cells were assessed using qPCR.g-i The protein expression levels of IL-1β, IL-6, and TNF-α of the infected cell were quantified using ELISA.Data are presented as mean ± SD. *P < 0.05, **P < 0.01, ***P < 0.001 vs. Indicated group
Fig. 7 Influence of Keap1 and Nrf2 on Akt activation and expression of pro-apoptotic proteins in EV71 infected cells.Subsequent to 24 h transfection using Keap1 and Nrf2 siRNA vector, Vero and RD cells were infected by EV71 at an MOI of 5. a, b Western blot demonstrated that Keap1 and Nrf2 silencing modulates Akt, phosphor Akt, Bax, Fas, Fas-L, Caspase-3, and cleaved Caspase-3 expression levels in EV71 infected Vero and RD cells
Fig. 9 Effect of ROS on EV71-triggered apoptosis and inflammation.Vero and RD cells were initially infected with EV71 virus at an MOI of 5 and then treated with 10 μM of the ROS inhibitor NAC for 5 h. a Quantity of dead RD and Vero cells were evaluated using Annexin V-FITC and PI flow cytometry.The upper right quadrant of every plot displays early dead cells.b The protein expression levels of IL-1β, IL-6, and TNF-α of the infected cell were quantified using ELISA.Data are presented as mean ± SD
|
v3-fos-license
|
2018-12-07T08:11:23.005Z
|
2015-08-26T00:00:00.000
|
54778676
|
{
"extfieldsofstudy": [
"Geography"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://isprs-archives.copernicus.org/articles/XL-1-W4/321/2015/isprsarchives-XL-1-W4-321-2015.pdf",
"pdf_hash": "b94e298881e8d9124593efa9da8a9c5f02247bc2",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42473",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "b94e298881e8d9124593efa9da8a9c5f02247bc2",
"year": 2015
}
|
pes2o/s2orc
|
Investigations on the quality of the interior orientation and its impact in object space for UAV photogrammetry
With respect to the usual processing chain in UAV photogrammetry, the consideration of the camera's influencing factors on the accessible accuracy level is of high interest. In most applications consumer cameras are used due to their light weight. They usually allow only for automatic zoom or restricted options in manual modes. The stability and long-term validity of the interior orientation parameters are open to question. Additionally, common aerial flights do not provide adequate images for self-calibration. Nonetheless, processing software includes self-calibration based on EXIF information as a standard setting. The subsequent impact of the interior orientation parameters on the reconstruction in object space cannot be neglected. With respect to the suggested key issues, different investigations on the quality of interior orientation and its impact in object space are addressed. On the one hand, the investigations concentrate on the improvement in accuracy achieved by applying pre-calibrated interior orientation parameters. On the other hand, image configurations are investigated that allow for an adequate self-calibration in UAV photogrammetry. The analyses of the interior orientation focus on the estimation quality of the interior orientation parameters using volumetric test scenarios as well as planar patterns as they are commonly used in computer vision. This is done using an Olympus Pen E-PM2 camera and a Canon G1X as representative system cameras. For the analysis of image configurations a simulation-based approach is applied. The analyses include investigations on varying principal distance and principal point to evaluate the system's stability.
INTRODUCTION
In UAV photogrammetry the interior orientation of the camera system, its stability during image acquisition and flight as well as its calibration options and consideration in the bundle adjustment, are limiting factors to the accuracy level of the processing chain.In most applications of UAV imagery, consumer cameras are used because of a limited payload, a specific hardware configuration and cost restrictions.In general, such consumer cameras provide automatic zoom, image stabilization and restricted options in manual modes.These issues lead to a lower accuracy potential due to a lack of stability and long-term validity of the interior orientation parameters.Besides the use of accuracy limiting hardware components, the application of different software packages for UAV photogrammetry might significantly influence the processing results.In most cases the application of professional photogrammetric processing software for UAV imagery is limited.This would require standard image blocks due to the determination of overlapping areas for automatic processing as it is usually found within aerial photogrammetry products.UAV imagery is highly influenced by the dynamics during the flight and cause more irregular image blocks and subsequent processing difficulties.Therefore, specialized processing software for UAV photogrammetry is established nowadays.Such software products like Agisoft PhotoScan or Pix4D are mainly based on principles of computer vision, namely structure-from-motion approaches.The consideration of fixed pre-calibrated interior orientation parameters is possible.However, this does usually not shape the standard processing scenario within UAV software that considers a self-calibration and EXIF information for the camera's initial parameters.Especially a complete photogrammetric camera calibration might not be available for all user groups.Camera calibration and the evaluation of high-quality interior orientation parameters is one main topic in research and development of photogrammetry since decades (Remondino & Fraser 2006).Within computer vision one can see an increasing interest in the investigations of camera calibration and interior orientation models.Both disciplines use similar strategies in order to get object reconstructions in 3D space by using image based data sets.As a major difference between the two disciplines it can be stated that in photogrammetry accuracy and reliability in object reconstruction including a precise camera calibration is of highest interest whereas in computer vision the scene reconstruction focuses on complete and quick algorithms and systems.A reliable and significant estimation and consideration of the interior orientation parameters is of lower interest and therefore missing in some systems and applications.Nevertheless, both mathematical approaches are based on the central projection.Often the same functional descriptions for the interior orientation, based on Brown (1971), is applied (e.g.OpenCV, iWitness).While a reverse non-linear modelling in 3D space forms the photogrammetric approach (Luhmann 2014), a two-step method based on linear descriptions following Zhang (2000) or Heikkila & Silvén (1997) and subsequent nonlinear adjustments can be found in computer vision, which mainly rely on the usage of planar calibration patterns.Colomina & Molina (2014) give a detailed summary on nowadays UAV systems, techniques, software packages and applications.With respect to the camera's interior orientation parameters the relevance of their estimation 
and its impact in object space is rarely analysed or documented.Douterloigne et al. (2009) present a test scenario for camera calibration of UAV cameras based on a chess-board pattern.The repeatability of the interior orientation parameters is evaluated by using different image blocks and error propagation systems.An extended test on the comparability of camera calibration methods is done by Pérez et al. (2011).Camera calibration results based on an a priori testfield calibration and a field calibration procedure are evaluated with respect to the parameter values and the resulting precision in image space.A subsequent estimation of its impact in object space is done for one scenario.Here the resulting coordinates of the bundle adjustment at defined signalized targets are checked against their GPS coordinates.Cramer et al. (2014) compare results of an on-the-job calibration within UAV applications and their impact in object space by applying a digital surface comparison to reference points.Simulation based scenarios for image block configurations in field calibrations are published by Kruck & Mélykuti (2014).They notice that the simulation focuses on the determinability but not on the reliability of parameter estimation.Different published analyses and applications include approaches to assess the parameters of interior orientation and its influence on the results of specific UAV applications.
Nevertheless, the impact of the interior orientation parameters in object space within UAV flight scenarios has to be analysed further. With respect to the suggested key issues, different investigations on the quality of interior orientation and its impact in object space are addressed. On the one hand, the investigations concentrate on the improvement in accuracy achieved by applying pre-calibrated interior orientation parameters. On the other hand, image configurations are investigated that allow for an adequate self-calibration in UAV photogrammetry.
Mathematical models of interior orientation
The functional model in photogrammetric reconstruction, as well as in computer vision, is based on the pinhole camera model or central projection, respectively. Within a self-calibration framework or a previously conducted testfield calibration, three groups of parameters for modelling the interior orientation of a camera are included in the basic functional model:
a. principal distance c and principal point x'0, y'0
b. radial-symmetric lens distortion (rad)
c. decentring distortion (tan)
Usually, interior camera parameters are set constant for all images of a photogrammetric project. Distortion parameters are defined with respect to the principal point and enter the standard observation equations of the central projection (1). The standard observation equations (1) summarize the effects of distortion (2) through the corrections Δx' and Δy' in the image-space x- and y-directions, respectively (Luhmann et al. 2014). For the radial-symmetric lens distortion (3), an unbalanced form (4) is chosen. The decentring distortion (also known as radial-asymmetric and tangential distortion) follows equations (5).
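Since equations (1)-(5) are not reproduced here, the following is an illustrative sketch of the commonly used Brown (1971)-type parameterisation (cf. Luhmann et al. 2014); the symbol names (A1-A3, B1, B2) and sign conventions are assumptions and may differ from the paper's exact notation:

x' = x'_0 - c \, \frac{k_x}{N} + \Delta x' , \qquad y' = y'_0 - c \, \frac{k_y}{N} + \Delta y'  (cf. (1))

\Delta x' = \Delta x'_{\mathrm{rad}} + \Delta x'_{\mathrm{tan}} , \qquad \Delta y' = \Delta y'_{\mathrm{rad}} + \Delta y'_{\mathrm{tan}}  (cf. (2))

\Delta r'_{\mathrm{rad}} = A_1 r'^3 + A_2 r'^5 + A_3 r'^7 , \qquad \Delta x'_{\mathrm{rad}} = \bar{x}' \, \frac{\Delta r'_{\mathrm{rad}}}{r'} , \quad \Delta y'_{\mathrm{rad}} = \bar{y}' \, \frac{\Delta r'_{\mathrm{rad}}}{r'}  (cf. (3), (4))

\Delta x'_{\mathrm{tan}} = B_1 (r'^2 + 2\bar{x}'^2) + 2 B_2 \bar{x}' \bar{y}' , \qquad \Delta y'_{\mathrm{tan}} = B_2 (r'^2 + 2\bar{y}'^2) + 2 B_1 \bar{x}' \bar{y}'  (cf. (5))

with \bar{x}' = x' - x'_0, \bar{y}' = y' - y'_0, r' = \sqrt{\bar{x}'^2 + \bar{y}'^2}, and k_x, k_y, N denoting the numerators and denominator of the collinearity ratios formed from the rotation matrix and the object-space coordinate differences.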
The interior orientation model for photogrammetric approaches and those of computer vision, like Agisoft PhotoScan or OpenCV, is based on Brown (1971). The results are comparable when equality of the principal distance and pixel size in the x- and y-directions of image space is assumed.
Camera modelling with image-variant parameters causes three more parameters per image to be estimated within the bundle adjustment. Hence, the number of unknowns grows to nine per image. These parameters describe the variation of the principal distance and the shift of the principal point; a possible displacement and rotation of the lens with respect to the image sensor is thus compensated by this approach using the extended standard observation equation (6).
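A plausible way to write the image-variant extension (6), with per-image offsets added to the principal distance and principal point (the notation is assumed, not taken verbatim from the paper):

x'_j = (x'_0 + \Delta x'_{0,j}) - (c + \Delta c_j) \, \frac{k_x}{N} + \Delta x' , \qquad y'_j = (y'_0 + \Delta y'_{0,j}) - (c + \Delta c_j) \, \frac{k_y}{N} + \Delta y'

where \Delta c_j, \Delta x'_{0,j}, \Delta y'_{0,j} are estimated individually for each image j, so that the six exterior orientation parameters plus these three image-variant parameters yield nine unknowns per image.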
Conversion of interior orientation parameters
Although the basic functional model is the same for the photogrammetric approach (AXIOS Ax.Ori) and those in computer vision (OpenCV and Agisoft PhotoScan), a parameter conversion is necessary. In order to support a direct comparability of the resulting parameters, the implemented functional contexts have to be analysed and conversion terms have to be defined. A major difference can be identified in the definition of the coordinate systems (Figure 1). In computer vision approaches the image space is defined within the pixel coordinate system and its common positive directions. This influences the principal distance as well as the principal point. Due to the direct linear imaging description of computer vision approaches, the resulting distortion parameters are superimposed by the principal distance when compared to a photogrammetric system. Hastedt (2015) describes the comparability and conversion of parameter groups of interior and exterior orientation. Table 1 summarizes the necessary conversion terms from computer vision results to photogrammetric notation.
Table 1. Conversion scheme of interior orientation

Some approaches in computer vision, like Agisoft PhotoScan, allow for the estimation and consideration of a skew parameter.
As this is not comparable to photogrammetric approaches, it will not be dealt with in these investigations.
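A minimal sketch of the conversion from pixel-based computer-vision intrinsics to the photogrammetric notation summarised in Table 1 is given below; the sign conventions (principal point referenced to the image centre, pixel y-axis pointing downward) are assumptions based on the coordinate-system differences described above and should be checked against Hastedt (2015) for a specific software package. The additional rescaling of the distortion coefficients, which are superimposed by the principal distance, is not covered in this sketch.

```python
def cv_to_photogrammetric(fx_px, cx_px, cy_px, width_px, height_px, pixel_size_mm):
    """Convert OpenCV-style intrinsics (pixels) to photogrammetric units (mm).

    Assumes square pixels (fx == fy in pixel units) and a principal point
    referenced to the image centre with a downward-positive pixel y-axis.
    """
    c_mm = fx_px * pixel_size_mm                         # principal distance
    x0_mm = (cx_px - width_px / 2.0) * pixel_size_mm     # principal point x
    y0_mm = -(cy_px - height_px / 2.0) * pixel_size_mm   # flip y-axis direction
    return c_mm, x0_mm, y0_mm

# Hypothetical values for a 4608 x 3456 sensor with 3.75 um pixels
print(cv_to_photogrammetric(3760.0, 2310.0, 1720.0, 4608, 3456, 0.00375))
```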
Tested cameras
For UAV photogrammetry a microdrones md4-1000 is used, equipped with an Olympus Pen E-PM2 system camera (Figure 2). This camera offers a high imaging quality, whereas the geometric quality is influenced by different effects and has to be modelled conscientiously. The camera offers an imaging resolution of 4608 x 3456 pixels with a pixel size of 3.75 µm and a 14-42 mm zoom lens. The manual mode can be used with manual focus; however, the focussing can only be checked in live view and cannot be fixed at infinity. This limits the usability, as the auto-focus needs to be fixed once in order to enable sharp image acquisition at infinity. Therefore, adequate camera settings for flight purposes are limited, as the camera settings cannot be changed during the flight to ensure correct focussing.
Figure 2. Olympus Pen E-PM2

In addition, a Canon G1X system camera is tested for comparison. This camera acquires images with a resolution of 4352 x 2907 pixels. The pixel size is 4.3 µm, and the zoom lens can be used from 15 to 64 mm.
Test fields and scenarios
The investigations on the quality of the interior orientation estimation are based on two test scenarios. On the one hand, a typical photogrammetric image block (Table 2, top) is taken over a volumetric VDI testfield (VDI 2002, see Figure 3). On the other hand, a planar chessboard pattern (Figure 3) is chosen, as it is commonly used within computer vision applications and products. Two different image blocks are taken, whereby a strong block configuration is characterized by an almost volumetric image acquisition around the object, including images rolled around the camera axis at each camera station. This enables a reliable and stable parameter estimation. The weak block configuration follows some typical descriptions of camera calibration blocks within computer vision. One can often find a recommended image acquisition by slightly moving the camera in front of the calibration pattern. This leads to more or less random results for the interior orientation parameters. Luhmann et al. (2014) and Hastedt (2015) summarize typical scenarios to circumvent these effects.

Table 3. Results of camera calibration on volumetric and planar patterns
Parameter results
The results for the interior orientation parameters are listed in Table 3. Parameters are estimated with the AXIOS Ax.Ori photogrammetric bundle adjustment program using the VDI testfield setup. In addition, the calibration with the planar pattern is analysed with Agisoft Lens and OpenCV. All camera settings were set equally for the different image blocks. The resulting parameters of interior orientation in the photogrammetric environment and in Agisoft Lens are checked for reliability by considering their standard deviations.
All parameters are reliable unless discussed further in the following.
The resulting principal point for the Olympus camera has to be considered carefully as well as the parameters of the decentring distortion.The calibration procedure has to include the estimation of both parameter sets, otherwise a high impact in object space will cause a loss of accuracy.This is caused by a high correlation between these parameters.In addition, it leads to significantly different parameter values for the principal point if the decentring distortion is neglected.It can be observed that skipping the decentring distortion parameters, the remaining radial-symmetric distortion parameters cannot be estimated significantly.This gives rise to an under-parametrization which can be observed by analyzing the systems' statistics.In order to check the quality of the parameter estimation with and without estimating the decentring distortion, the distortion-free coordinates for the image corners are calculated.As these do not result in the same coordinates by using the different parameter sets, the choice of the parameter set is of high importance for this camera and its calibration.
Analyzing the resulting parameters by comparison of the different software packages and testfield configurations it can be summarized that Agisoft Lens and OpenCV, operated with a planar chess board pattern, provide relatively good estimations of the interior orientation parameters.However, a strong image block has to be chosen in order to provide repeatable and reliable parameters although the resulting parameters remain almost the same.As indicator the remaining standard deviations can be used.This advice also follows previous analyses and experiences on camera calibration and minimization purposes of correlations between the parameters of interior orientation and exterior orientation within the bundle (Hastedt 2015, Luhmann et al. 2014).High correlations indicate that the parameters are not estimated independently.Therefore the usage of separated parameters of such systems might lead to errors in a subsequent application.
Table 3 shows deviations in the principal distance and principal point of up to 20 µm for this specific calibration. It has to be considered that the instability of the camera and its components does not take a similar effect as it would within a UAV flight. The extent of the variation in principal distance and principal point can be estimated by using the calibration over the VDI testfield, since many images are taken from different viewing directions in a larger spatial volume. The resulting variation for the Olympus camera amounts to a range of 44 µm for the principal distance (Figure 4) and 45 µm for the principal point components (Figure 5).
Impact in object space
In order to evaluate the impact of the different calibration results in object space, a test is modelled after a prospective flight scenario. A testfield on a planar wall (Figure 6) is captured by a set of overlapping images in a row. For all signalized points:
a. their object-space coordinates, estimated within a bundle adjustment, are transformed to their control point coordinates;
b. their object-space coordinates, estimated within a bundle adjustment including pre-calibrated interior orientation parameters, are transformed to their control point coordinates;
c. forward intersections for object coordinates are calculated, based on previously estimated interior and exterior orientation parameters using the overlapping images (as it would be in flight, too), and transformed to their control point coordinates;
d. forward intersections for object coordinates are calculated, based on previously estimated exterior orientation parameters using the overlapping images and pre-calibrated interior orientation parameters, and transformed to their control point coordinates.
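For the transformations to control point coordinates listed in a-d, a minimal sketch of a 3D similarity (Helmert) transformation followed by a per-axis RMS computation is given below; this is an illustrative Umeyama-style implementation under the assumption of known point correspondences, not the exact routine used by the authors.

```python
import numpy as np

def helmert_rms(estimated, control):
    """Fit a 7-parameter similarity transform (scale, rotation, translation)
    mapping estimated points onto control points and return per-axis RMS."""
    est, ctrl = np.asarray(estimated, float), np.asarray(control, float)
    mu_e, mu_c = est.mean(axis=0), ctrl.mean(axis=0)
    E, C = est - mu_e, ctrl - mu_c

    # Umeyama-style estimation via SVD of the cross-covariance matrix
    U, S, Vt = np.linalg.svd(C.T @ E)
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:          # guard against reflections
        D[2, 2] = -1.0
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / (E ** 2).sum()
    t = mu_c - scale * R @ mu_e

    residuals = ctrl - (scale * (R @ est.T).T + t)
    return np.sqrt((residuals ** 2).mean(axis=0))  # RMS in X, Y, Z

# Hypothetical coordinates (metres): estimated object points vs. control points
est = np.array([[0.0, 0.0, 0.0], [2.0, 0.1, 0.0], [2.1, 1.5, 0.2], [0.1, 1.6, 0.1]])
ctrl = est + np.random.default_rng(0).normal(scale=0.002, size=est.shape)
print(helmert_rms(est, ctrl))
```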
Figure 6. Wall testfield

The results in object space are listed in Table 4. Three camera settings are chosen in order to demonstrate a possible effect of changing settings on the results in object space. The images are taken with a ground sample distance of about 0.2 mm according to the lab test. As expected, scenarios b and d, which introduce pre-calibrated interior orientation parameters, offer a significantly higher accuracy level in object space. Also as expected, the impact right after calibration, or when considering the coordinates of the bundle adjustment itself, shows the best results. The change of camera settings therefore causes a loss in accuracy, for this test scenario, of about 1-2 mm, i.e. relatively about 200% up to 640%. This range illustrates the unknown dependency of the calibration settings on those used for image acquisition for object reconstruction. Nevertheless, the introduction of (any) reliable parameter estimation in a project using comparable camera settings leads to an increased accuracy level in object space.
FLIGHT SIMULATION
In order to evaluate the impact of the interior orientation parameters on the reconstruction in object space, several UAV flight simulations are used. For this purpose an area of 236 m x 134 m, representing a real-world benchmark area, is chosen. Since nowadays UAV software packages include dense matching algorithms, a usual image overlap of 80% both along and across the flight direction is used. Figure 7 shows a resulting flight scheme for a flying height of 50 m. The simulation is based on an idealised surface of a double-sinus waveform, undulated by 10 m and 30 m, respectively. Figure 8 and Figure 9 show a three-dimensional view of the UAV flight scenarios. The simulation is based on a standard uncertainty for image measurements of 0.003 mm, which is equivalent to a bit less than a pixel for the Olympus camera. With respect to the analyses of interior orientation and its estimation within a UAV flight scenario, three types of flight configurations are applied:
1. The first scenario corresponds to a typical flight where all images are taken in the same relative orientation.
2. The second configuration includes additional images taken above the centre of the area by rotating the camera, i.e. consecutively changing the yaw angle of the UAV by +90° (see Figure 10).
3. In the third scenario, two tilted images are added to the previous configurations by changing the roll angle to 25° and -25° for two images above the centre of the area (see Figure 10).

The processing of the flight simulations follows a usual setup for the bundle adjustment in UAV photogrammetry. In this case six homogeneously distributed control points are introduced to define the datum. Their standard deviations are set to 15 mm for the X- and Y- and 30 mm for the Z-coordinates. This follows expectable accuracies gained with GPS measurements. Table 5 and Table 6 summarize the results of the described investigations based on the flight simulations. The analyses are done by 1) adjusting all interior orientation parameters and 2) adjusting only the principal distance and principal point while introducing pre-calibrated distortion parameters.
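To make the simulated flight configuration more tangible, the following sketch computes the ground sample distance, image footprint, and photo spacing for an 80/80 overlap at 50 m flying height; the 14 mm focal length is an assumption (the wide end of the Olympus zoom), not a value stated for the simulation, and the assignment of the long sensor side to the across-flight direction is likewise assumed.

```python
def flight_geometry(flying_height_m, focal_mm, pixel_um, img_w_px, img_h_px,
                    overlap_along=0.8, overlap_across=0.8):
    """Ground sample distance, footprint and photo spacing for a nadir flight."""
    gsd_m = flying_height_m * (pixel_um * 1e-6) / (focal_mm * 1e-3)
    footprint_w = img_w_px * gsd_m          # assumed across the flight direction
    footprint_h = img_h_px * gsd_m          # assumed along the flight direction
    base_along = footprint_h * (1.0 - overlap_along)     # spacing between exposures
    base_across = footprint_w * (1.0 - overlap_across)   # spacing between strips
    return gsd_m, footprint_w, footprint_h, base_along, base_across

# Olympus Pen E-PM2: 4608 x 3456 px, 3.75 um pixels; assumed 14 mm focal length
gsd, fw, fh, b_along, b_across = flight_geometry(50.0, 14.0, 3.75, 4608, 3456)
print(f"GSD ~{gsd*1000:.1f} mm, footprint ~{fw:.0f} m x {fh:.0f} m, "
      f"base ~{b_along:.1f} m along / {b_across:.1f} m across")
```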
For each result the adjusted parameters of interior orientation and their standard deviations are collected. The RMS-values of all object points as well as the s0 value (standard deviation of unit weight) estimate the statistical precision level of the simulated data set. In order to evaluate the impact of the system configuration, forward intersections are calculated for a set of 60 object points. Afterwards these coordinates are transformed to a set of control point coordinates. The remaining deviations are summarized as RMS-values in X, Y and Z. In addition, the minimum and maximum values as well as the resulting range of deviations are summarized. The results of the forward intersection can be taken as the absolute accuracy level for these investigations and indicate the impact of the different scenarios in object space.
The results show an increase in the object space accuracy in the X- and Y-directions when yaw-changed images are introduced (scenario 2), in contrast to a standard data set (scenario 1). If tilted images are introduced, the accuracy in the Z-direction rises, too (scenario 3). An increase in Z-direction accuracy is also possible when considering pre-calibrated distortion parameters. In all cases the overall accuracy in object space increases (RMS-values and range) compared to the almost stable statistical precisions in the RMS-values from the bundle adjustment.
Considering an unstable principal distance and principal point in the flight scenario, the results in object space degrade as expected. Assuming a statistical variation of these three components of 0.01 mm, the remaining accuracies in object space decrease by about 120% to 400% (see Table 7). The percentage decrease is smaller for scenario 3, where rolled images have been introduced.

Table 7. Percentage decrease in object space accuracy considering unstable interior orientation

A higher undulation of the surface leads to a more significant and reliable estimation of the interior orientation parameters. This is especially obvious, and to be expected, for the principal distance. Its reliability increases when using scenario 3, which should be considered a requirement for a flight scenario if self-calibration is utilized.
It can be observed that the simulations with a less undulating surface lead to a higher accuracy level in object space.This effect can also be observed when using a larger flying height, e.g. with 75 m.
In general, the bundle adjustment results shown in Table 5 and Table 6 have to be considered as statistical data, especially when evaluating the interior orientation parameters. The generated forward intersections and the subsequently derived accuracies in object space demonstrate the dependency on the whole bundle system. While only very small deviations from the input data remain, the standard deviations of the interior orientation parameters are still high, at several micrometres. This also influences the whole system and therefore causes higher deviations in object space. Only when additional tilted images are introduced to the bundle does the estimation of the interior orientation parameters improve, leading to an increase in object space accuracy.
The resulting accuracies in object space are still influenced by the camera and its interior orientation. Object space accuracies on the order of several ground pixels (GSD) show the limits of standard UAV scenarios. The investigations on image blocks that allow for adequate self-calibration do not replace a metric UAV camera and a rigorous calibration. However, they help raise awareness of additional influences on the accuracy in object space.
SUMMARY
Within these investigations on the impact of the interior orientation parameters in object space different aspects are analysed further.On the one hand nowadays used consumer cameras in UAV photogrammetry, e.g. an Olympus Pen E-PM2 or other system cameras, are analysed with respect to their internal stability and reliability of the chosen parameter set.In addition, the influence of the camera calibration procedure itself is estimated.On the other hand simulation based UAV flight analyses are evaluated in order to investigate adequate flight scenarios for self-calibration.
The camera calibration procedures of Agisoft Lens and OpenCV are compared with respect to a photogrammetric approach. For the results of an Olympus Pen E-PM2 camera it can be summarized that, using a strong image block configuration on a planar chessboard pattern as used in computer vision applications, relatively reliable parameters are estimated. For the Olympus Pen E-PM2 camera itself, a high correlation between the components of the principal point and the decentring distortion can be observed. This is true not only for the statistical correlation but also for the resulting parameter values themselves: they result in significantly different values depending on the set of estimated parameters for the interior orientation. With respect to the camera calibration, these parameters should not be removed, as this leads to an under-parametrization of the system and therefore to erroneous results for the remaining parameters. This is probably caused by an instability of the camera components. The camera is based on an auto-focus lens, as many consumer cameras are. In addition, a variation of the principal distance and the principal point, estimated for each image separately, is visible.
The investigations on flight scenarios allowing for adequate self-calibration show the necessity of introducing rolled images to a UAV flight scenario. In order to allow for stable and accurate object coordinates in all three coordinate directions, these additional images are recommended. Furthermore, this leads to a higher level of significance for the interior orientation parameter estimation. The camera instability that should be expected when using consumer cameras causes a loss in accuracy of at least 200%. The results in object space show the limit of standard UAV scenarios, as the remaining deviations in object space are up to the extent of several ground pixels. If self-calibration is used with UAV flights, one should be aware of the quality and significance of the parameter estimation. A pre-calibration should be introduced if possible.
Figure 4. Variation in principal distance for Olympus camera
Figure 7. Overview on simulated flight order with 80/80 image overlap
Figure 10. Additional images for UAV flight configurations 1 to 3 (numbers in camera symbols indicate the scenario)
Table 2. Image block configurations for camera calibration
Table 4. Impact of calibration estimation in object space
Table 5. Results of flight simulation for scenarios 1 and 3
Table 6. Results of flight simulation for scenario 2
|
v3-fos-license
|
2022-03-23T15:30:36.468Z
|
2022-03-01T00:00:00.000
|
247605024
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2073-4360/14/6/1220/pdf",
"pdf_hash": "e1450900a04fa5372b44dfe773a7f75d46cea1d3",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42476",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"sha1": "338084bf5ed29e675ac2ae58d7bfb3badfd41afa",
"year": 2022
}
|
pes2o/s2orc
|
Processing and Mechanical Properties of Basalt Fibre-Reinforced Thermoplastic Composites
Basalt fibre is derived from volcanic rocks and has similar mechanical properties as glass fibre. However, poor fibre-matrix compatibility and processing issues are the main factors that have restricted the mechanical performance of basalt fibre-reinforced thermoplastic composites (BFRTP). In this work, basalt continuous fibre composites with polypropylene (PP) and polycarbonate (PC) matrices were studied. The composites were processed by compression moulding, and a processing study was conducted to achieve good quality composites. For the BF-PC composites, the optimisation of material preparation and processing steps allowed the polymer to impregnate the fibres with minimal fibre movements, hence improving impregnation and mechanical properties. For BF-PP composites, a compatibiliser was required to improve fibre-matrix compatibility. The compatibiliser significantly improved the tensile and impact strength values for short BF-PP composites and continued to increase at 40 wt%. Furthermore, the analytical modelling of the Young’s moduli indicated that the induced fibre orientation during processing for short BF-PP composites and unidirectional (UD) BF-PC composites had better stress transfer than that of UD BF-PP composites.
Introduction
Basalt fibre is a bio-derived mineral fibre from volcanic rocks and has several advantages, such as good chemical resistance and mechanical properties [1][2][3][4][5][6]. Basalt fibre has similar tensile properties as glass fibres and has a higher maximum service temperature than glass and carbon fibres, as shown in Table 1. This enables basalt fibre composites to be a viable and sustainable alternative to glass fibre composites for structural applications. Moreover, it has a higher maximum service temperature and good chemical resistance, which makes it suitable for applications in harsh environments, such as composite pipes for the oil and gas industries or chemical storage tanks [1].
Basalt fibre has been studied by several researchers to explore its use as reinforcement for polymer composites [2,3,[7][8][9][10][11]. With good mechanical properties in combination with high temperature and alkaline resistance, basalt fibre is a good candidate for composites with thermoset matrices such as epoxy and polyester. Arshad et al. [7] reported epoxy hybrid composites having a high resistance to changes in temperature and significantly enhanced mechanical performance. Basalt fibre reinforced epoxy composites also demonstrated higher resistance to aging in alkaline medium and heat than glass fibre epoxy composites based on E-glass and S-glass fibres, while the mechanical properties of the basalt fibre composites are closer to those of S-glass composites and greater than those of E-glass composites. These properties are connected to the high adhesion between the basalt fibres and epoxy [8].
With increasing demand for lightweight, sustainable materials for metal replacement in various industries, the global thermoplastic composite market has been projected to grow.

Table 1. Properties of basalt fibre compared with glass and carbon fibres [5,6].

Other aspects include fibre-matrix compatibility and adhesion. The sizing of commercial continuous basalt fibre was developed for epoxy applications, limiting its mechanical performance. Attempts have been made to improve the fibre-matrix adhesion through the use of different compatibilisers or sizing agents [18,19]. Russo et al. [19] showed that the use of polypropylene-graft-maleic anhydride improved the flexural and impact performance of continuous BF-PP composites. However, the highest flexural strength value achieved was 81.1 MPa, which was significantly lower than the expected strength values of continuous glass fibre thermoplastic composites. This could be due to a lower fibre volume fraction or composite quality issues, which were not investigated in that study. Therefore, there is a need to understand the processing of continuous basalt fibre thermoplastic composites in order to achieve a good quality composite with the consistent and good mechanical performance required for structural applications.
In the present study, a processing study was conducted for continuous basalt fibre thermoplastic composites to resolve fibre spreading issues and improve matrix impregnation. Basalt fibre is typically sized for epoxy polymers; therefore, polypropylene-graft-maleic anhydride was used as a compatibiliser to improve the fibre-matrix adhesion for basalt fibre-polypropylene (BF-PP) composites. Tensile properties, degree of impregnation, and fibre volume fraction were characterised. Additionally, the effect of the compatibiliser on the tensile and impact properties of short BF-PP composites was also studied. Analytical models were used to predict the Young's modulus values for the short and continuous fibre composites, and the predictions were compared to the experimental results.
Materials
The basalt fibre used in this study was in the form of unidirectional (UD) basalt fabric, provided by Sure New Material (Zhejiang) Co. Ltd. (Tongxiang, China). The UD fabric had a linear density of 400 Tex and a thickness of 0.243 mm. The short basalt fibre was cut from this fabric to approximately 10 mm length before further processing. Polycarbonate (PC) film from a supplier based in Singapore was used in this study; the PC film had an average thickness of 0.1 mm and a density of 1.2 g/cm³.
Polypropylene (PP) pellets, Cosmoplene, Y101E grade, were provided by The Polyolefin Company Pte. Ltd. (Singapore) and were used in the compounding process and to make the PP film using the press at 250 °C. The PP film had an average thickness of 0.1 mm to 0.2 mm and a density of 0.90 g/cm³.
Compatibiliser
Polypropylene-graft-maleic anhydride (MAPP) pellets, supplied by Sigma-Aldrich (St. Louis, MO, USA), were utilised to improve fibre-matrix adhesion in basalt fibre reinforced PP composites. The MAPP has 8 wt% to 10 wt% of grafted maleic anhydride, a melting point of 156 °C and a density of 0.934 g/mL at 25 °C.
Processing of UD Basalt Fibre Composites by Compression Moulding
UD basalt fibre composites with both PC and PP were prepared from a stack of fibre fabrics and polymer films using a compression moulding process. Layers of film and basalt fabric were cut and stacked in alternating sequence before being placed in a rectangular picture frame mould with inner dimensions of 250 mm by 150 mm or a square picture frame mould with inner dimensions of 200 mm by 200 mm. The basalt fabric and PP film were used without any pretreatment, but the PC film was dried overnight at 80 °C. The compression moulding process was conducted using the P500 Collin hot press. In general, the composites went through a pre-consolidation phase to allow time for the matrix to impregnate the fibres, before going into the consolidation phase, being cooled down to room temperature and then demoulded. The parameters for the pre-consolidation and consolidation phases for the BF-PC and BF-PP composites are summarised in Table 2. In this paper, the parameters for the consolidation phase were kept constant for most of the samples except for two BF-PC samples. PP was provided as pellets and was processed into films by hot pressing. A hot press, 500P, from Collin (Ebersberg, Germany) was used to make the PP film (with and without compatibiliser) at 250 °C. For the PP film with compatibiliser, a compounding process was required prior to making the film: 3 wt% of MAPP was pre-mixed with the PP pellets before compounding. Compounding of the materials was conducted using a twin-screw extruder, Micro 27, from Leistritz (Nuremberg, Germany), at 100 rpm with a temperature profile ranging from 150 °C to 170 °C.
Short Fibre Composite Processing
BF-PP short-fibre composites were compounded at fibre loadings of 20 wt%, 30 wt%, and 40 wt%. The BF and PP pellets were used as received, and appropriate amounts of BF and PP pellets were weighed and pre-mixed before the compounding process. Compounding of the materials was carried out using a lab-scale twin-screw extruder, HAAKE Minilab 2, from Thermo Fisher Scientific (Waltham, MA, USA), at 200 °C and 65 rpm. The compounded samples were then injection moulded using a lab-scale injection moulding machine, HAAKE Minijet, from Thermo Fisher Scientific (Waltham, MA, USA). An injection temperature of 200 °C, an injection pressure of 150 bar for 20 s, and a post pressure of 100 bar for 10 s were used to obtain the specimens required for the tensile and impact tests.
Polymer Flow Measurement
The melt flow index (MFI) of the studied polymers was determined for setting the processing conditions of the composites. The MFI was characterised using a melt flow tester, CEAST MF20, from Instron (Turin, Italy), in accordance with ASTM D1238 [20]. The MFI analysis for PP was conducted at temperatures of 170 °C, 190 °C, 210 °C, 230 °C, and 250 °C with a 2.16 kg weight. The MFI versus temperature graph for PP was then plotted to determine the relationship between the MFI and the temperature of the PP matrix. The MFI analysis of the PC matrix was conducted at temperatures of 220 °C, 240 °C, 260 °C, 280 °C, and 300 °C with a 1.2 kg weight.
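As a rough illustration of how a target processing temperature can be read off such an MFI-temperature relationship, the following Python sketch linearly interpolates measured MFI values to find the temperature at which the melt reaches about 20 g/10 min. The numerical MFI values are placeholders and are not the measured data from this study.

import numpy as np

# Hypothetical MFI measurements (g/10 min) for PP at the tested temperatures (deg C);
# the real values are read from Figure 5a and are not reproduced here.
temps_c = np.array([170.0, 190.0, 210.0, 230.0, 250.0])
mfi = np.array([5.0, 9.0, 14.0, 21.0, 30.0])

def temperature_for_target_mfi(target_mfi, temps, mfi_values):
    """Linearly interpolate the temperature at which the melt reaches the target MFI.

    Assumes MFI increases monotonically with temperature over the measured range.
    """
    return float(np.interp(target_mfi, mfi_values, temps))

if __name__ == "__main__":
    # Pick a pre-consolidation temperature near MFI ~ 20 g/10 min, as done in the paper.
    print(f"Temperature for MFI = 20 g/10 min: {temperature_for_target_mfi(20.0, temps_c, mfi):.0f} deg C")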
Measurement of Fibre Volume Fraction of Composites
The fibre volume fraction of the continuous and short-fibre composites was measured using a thermogravimetric analyser (TGA), TGA Q500, from TA Instruments (New Castle, DE, USA). For each specimen, 10 mg to 20 mg of material was tested from room temperature to 800 °C at a ramp rate of 10 °C per min. At least three tests were conducted for each sample.
Analysis was conducted using TA Universal Analysis (New Castle, DE, USA). As BF does not burn off in air even at 800 °C, whereas PP burns off cleanly, the remaining weight fraction at 800 °C corresponds to the weight fraction of BF in the BF-PP composites.
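Since TGA yields the fibre weight fraction (the residue at 800 °C), a density-based conversion is needed to obtain the fibre volume fraction used later in the modulus models. A minimal Python sketch of that conversion is given below; the PP density (0.90 g/cm³) is taken from the materials section, while the basalt fibre density of about 2.67 g/cm³ is an assumed typical value, not a figure reported in this study.

def fibre_volume_fraction(w_f, rho_fibre, rho_matrix):
    """Convert a fibre weight fraction (TGA residue) to a fibre volume fraction.

    v_f = (w_f / rho_f) / (w_f / rho_f + (1 - w_f) / rho_m)
    """
    return (w_f / rho_fibre) / (w_f / rho_fibre + (1.0 - w_f) / rho_matrix)

# Example: a 40 wt% basalt fibre / PP compound.
RHO_BASALT = 2.67   # g/cm^3, assumed typical basalt fibre density
RHO_PP = 0.90       # g/cm^3, from the materials section

v_f = fibre_volume_fraction(0.40, RHO_BASALT, RHO_PP)
print(f"Fibre volume fraction for 40 wt% BF in PP: {v_f:.3f}")  # roughly 0.18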
Mechanical Tests
The tensile tests were conducted using the Universal Testing Machine 5982 from Instron (Norwood, MA, USA), with a non-contacting video extensometer, AVE, for strain measurement, in accordance with ASTM D3039 [21] for the continuous fibre composites and ISO 527 [22] for the polymer and short-fibre composites. A minimum of 5 specimens were tested for each sample. For the continuous fibre composites, the nominal specimen dimension was 150 mm by 15 mm by 1 mm or 2 mm, and emery cloth was used to improve the grip during testing. A crosshead speed of 2 mm/min was used. For the polymer and short-fibre composites, the specimen type used was 1BA. A crosshead speed of 5 mm/min was used for the PP polymer and 2 mm/min for the short-fibre composites.
The flexural test was performed using the Universal Testing Machine 5982 from Instron (Norwood, MA, USA) in accordance with ASTM D790 [23], and a minimum of 5 specimens were tested per sample. The nominal specimen dimension was 60 mm by 12.7 mm by 1 mm, and a span of 26 mm was used. Flexural results are not reported for the BF-PP composites as the samples did not break.
Izod impact test was conducted using Pendulum Impact Tester, HIT25P from Zwick-Roell (Ulm, Germany) and in accordance with ASTM D256 [24]. The specimens were notched and had a nominal dimension of 63.5 mm by 12.7 mm by 3 mm. A minimum of 10 specimens were tested for each sample.
Investigation of Composite Quality by Microscopic and FESEM Images
The quality of produced composites was characterised using microscopic images of their cross-sections. The composite cross-sections were cut and mounted in a resin block. A surface preparation process including grinding and polishing was conducted for the mounted composite samples to achieve clean surfaces that were ready for analysis under a microscope. Microscopic analysis of the mounted composites was performed using GX51 inverted optical microscope with DP72 attachment from Olympus (Tokyo, Japan), and AnalysisPro software by Olympus (Tokyo, Japan).
For analysis under a field emission scanning electron microscope (FESEM), a Gold Sputter Coater, EM ACE200, from Leica (Vienna, Austria) was used to coat the surface of the samples before imaging. The samples were secured to the rotating holder of the sputter coater using carbon tape and underwent platinum sputtering for 60 s. After sputter coating, the samples were secured to the FESEM sample holder and taped with copper tape at the corners to increase sample conductivity. The samples were then imaged using the FESEM, Ultra Plus from Carl Zeiss (Oberkochen, Germany), at different magnifications to observe differences in the fibre-matrix interactions of each sample.
Sample Annotation
The samples in this paper are described in the manner shown in Table 3. For example, the BF-PC composite that was processed at a pre-consolidation temperature of 170 °C with no variation in the consolidation parameters, i.e., 220 °C and 5 bar, is denoted 'PC-170'. The BF-PC composite that was processed at a pre-consolidation temperature of 170 °C and with a variation in consolidation temperature, i.e., 240 °C instead of 220 °C, is denoted 'PC-170, 240'. Table 3. BF continuous fibre composites in this paper.
Table 3 column headings: Matrix; Pre-Consolidation Temperature; Consolidation Parameters Variation; Notation Used.
Process Optimisation for UD Basalt Fibre PC Composites
Regarding UD composite preparation using compression moulding, significant fibre movements occurred when high temperature or pressure was applied, but poor composite quality resulting from insufficient consolidation was obtained if low processing pressure and temperature were used. In a previous study on the compression moulding of carbon fibre-polycarbonate composites, it was found that the addition of a pre-consolidation step at 170 °C and 0.2 bar prevented extensive fibre movement [25]. Moreover, a consolidation temperature of 220 °C reduced fibre spreading compared to a consolidation temperature of 240 °C, while an MFI greater than 20 g/10 min would aid in matrix impregnation. Hence, the initial process optimisation was carried out with the pre-consolidation phase (170 °C, 0.2 bar, 2 to 5 min), followed by the consolidation phase. Such a mild consolidation procedure resulted in poor composite quality, and hence the processing steps were modified, using a higher pre-consolidation temperature to improve the impregnation and a lower consolidation temperature to prevent excessive fibre movement, as displayed in Figure 1. In addition, the pre-consolidation temperatures of 260 °C and 280 °C were selected, based on the MFI of the PC film, to be close to an MFI of 20 g/10 min, as presented in Figure 2.
There was a significant number of voids, particularly between the fibre bundles, for the undried BF-PC composite processed at a lower pre-consolidation temperature, as highlighted in the circles in Figure 3a. Higher pre-consolidation temperature and drying significantly improved the impregnation, and matrix was observed in the middle of the fibre bundle, as seen in the highlighted areas in Figure 3b,c. The improved impregnation was also reflected in the increases in tensile and flexural properties (Table 4). Improved impregnation indicated fewer voids, as seen in Figure 3b,c, and voids are stress concentrators that would have caused premature failure during mechanical testing.
In addition, it was verified that the consolidation temperature of 220 °C and pressure of 5 bar were the optimised consolidation parameters, as varying the consolidation temperature and pressure did not yield better tensile properties, as shown in Table 4. A further increase in the consolidation parameters could possibly improve the degree of impregnation, but it could also cause fibre spreading issues; hence, the tensile properties did not improve with higher temperature or pressure.
The investigation of composite quality was also conducted using FESEM images, as shown in Figure 4. Polymer residues were observed on the fibres of BF-PC composites processed at different pre-consolidation temperatures, indicating good fibre-matrix adhesion, as seen in Figure 4a-c. However, some slight differences were observed for the fibre tracks where the fibre was pulled out of the matrix. At the lower pre-consolidation temperature of 170 °C and with undried polymer, the fibre track was smoother with minimal matrix deformation, whereas at higher pre-consolidation temperatures of 260 °C and 280 °C, more extensive matrix deformation with multiple steps/holes was observed on the fibre tracks. This indicated slightly improved fibre-matrix adhesion with drying and with higher pre-consolidation temperatures, which aided in stress transfer from matrix to fibre and improved the mechanical properties.
Note: v_F is the measured fibre volume fraction from TGA; n.d. indicates no data. "(u)" after the sample label indicates the use of undried PC film in the initial study.
Processing Optimisation for UD Basalt Fibre PP Composites
The melt flow index of the PP polymer at different temperatures was measured, as presented in Figure 5a, and hence the pre-consolidation temperatures of 210 °C and 250 °C were selected in consideration of the corresponding MFI value of around 20 g/10 min. The optical microscope images showed that the BF-PP composites seemed to have a lower fibre volume fraction than the BF-PC composites and a similarly moderate level of impregnation despite the different pre-consolidation temperatures, as seen in Figure 6a-c. There were matrix and small voids observed in the fibre bundles, indicating that the viscosity of the PP polymer was low and the polymer could flow into the fibre bundle. Hence, the change in pre-consolidation temperature did not have an effect on the degree of impregnation for BF-PP composites. Similarly, the tensile properties did not change significantly with the change in pre-consolidation temperature, as seen in Table 5. However, it was noted that higher pre-consolidation temperatures increase polymer flow, and hence the thickness of the BF-PP composites decreased from 2.0 mm to 1.6 mm with an increase in the pre-consolidation temperature from 170 °C to 250 °C. Note: v_F is the measured fibre volume fraction from TGA; the annotation "-c" indicates that compatibiliser was used.

Figure 5 presents the MFI of PP with and without compatibiliser. The MFI of PP with compatibiliser is higher than that of PP without compatibiliser. However, there was no significant improvement in the degree of impregnation, as seen from the cross-sections presented in Figure 6a,d. Thus, the improved tensile properties for the BF-PP composite with compatibiliser are attributed to the improved fibre-matrix interface observed in the FESEM images in Figure 7a,b. A smooth fibre surface could be seen in unmodified BF-PP, whereas polymer residues were observed when MAPP was added.
Processing Conditions and Mechanical Properties of Short Basalt Fibre PP Composites
The tensile and impact properties of the basalt composites were measured and are presented in Figure 8a,b and Table 6. The tensile and impact strength of the BF-PP short-fibre composites did not increase beyond a fibre loading of 30 wt% when no compatibiliser was used. This might be attributed to the presence of too many fibre ends within the composite, which could have resulted in crack initiation and, potentially, composite failure [26]. On the other hand, when compatibiliser was added, the tensile and impact strength values were significantly higher than those without compatibiliser and continued to increase at a fibre loading of 40 wt%. It was noted that at higher fibre loadings there was more yielding behaviour for the BF-PP composites with compatibiliser, which is reflected in the lower measured Young's modulus values for the BF-PP composites with compatibiliser.
1 Intended fibre loading. The actual fibre loadings were measured by TGA and were within ±1% of the intended fibre loadings. Note that the annotation "-c" indicates that 2 wt% (w.r.t. fibre loading) of compatibiliser was used.
The improvement in tensile and impact strength of the composites with the use of compatibiliser can be attributed to the improved fibre-matrix interface between the basalt fibre and the PP matrix, leading to better stress transfer. This could be observed from the FESEM images of the composite fractured surface, as seen in Figure 9a,b. There were a higher number of holes and large gaps observed between the fibres and matrix for unmodified BF-PP, while the gaps were significantly smaller when MAPP was added to the polymer matrix. In addition, a smooth fibre surface could be seen for the unmodified BF-PP, whereas a slightly rougher fibre surface with polymer residues was observed in the modified composite system.
Analytical Models for Modelling of Young's Modulus
The modulus of polymers and composites can be predicted using models such as the rule of mixture [27] and the Halpin-Tsai model [28]. However, it should be noted that these models often over predict the modulus of polymers and composites since they were originally developed for unidirectional composites and assumed a perfect fibre-matrix interface, and modifications are required.
Halpin-Tsai Model
The Halpin-Tsai model [28] relates the composite modulus to E_p, the modulus of the modifier (short fibre in this study), E_m, the modulus of the matrix, and v_f, the volume fraction of the modifier (Equation (1)). The shape factor (ξ) was suggested to be twice the aspect ratio of the modifier. The aspect ratio (AR) is calculated by dividing the length, l_p, by twice the radius, r_p. However, the Halpin-Tsai model often over-predicts the stiffness and is unable to take into account the orientation of the modifiers. Van Es [29] proposed a change in the shape factor expression to account for the random orientation of rod-like modifiers, such that the modulus of the modified polymer or composite is obtained by combining the modulus parallel to the loading direction (E_//) and the modulus transverse to the loading direction (E_T), both calculated using Equation (1) with ξ = 2 for E_T and ξ = (0.5AR)^1.8 for E_//. This will be referred to as the Halpin-Tsai (random) model.
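For reference, the standard form of the Halpin-Tsai relation consistent with the definitions above is reproduced below in LaTeX, together with a commonly used in-plane orientation-averaging form for randomly oriented fibres; the averaging coefficients shown are an assumed common choice and may differ from the exact expression adopted by Van Es [29].

% Standard Halpin-Tsai form (Equation (1))
\begin{equation}
E_c = E_m \,\frac{1 + \xi \eta v_f}{1 - \eta v_f},
\qquad
\eta = \frac{E_p/E_m - 1}{E_p/E_m + \xi}
\end{equation}

% Commonly used in-plane random-orientation average (assumed form)
\begin{equation}
E_{\mathrm{random}} \approx \tfrac{3}{8}\,E_{//} + \tfrac{5}{8}\,E_{T}
\end{equation}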
Rule of Mixtures
Fibre reinforced materials that possess high mechanical properties are typically based upon carbon and glass fibres, and the reinforcement effect depends on the length, distribution, orientation, type, processing, and interfacial compatibility of the fibres.
The elastic modulus of the composite system can be derived from the rule of mixtures (ROM) [27], where E_F and E_m are the elastic moduli of the fibre and the matrix, respectively, and v_F and v_m are the volume fractions of the fibre and matrix, respectively.
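Written out with the symbols defined above, the rule-of-mixtures expression takes the familiar form:

\begin{equation}
E_c = E_F v_F + E_m v_m, \qquad v_F + v_m = 1
\end{equation}

where E_c is the elastic modulus of the composite.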
Young Modulus of Short Basalt Fibre PP Composites
Young's moduli of the short BF-PP composites were predicted using the Halpin-Tsai models. The parameters used are summarised in Table 7. The experimental results match better with the Halpin-Tsai model as compared to the Halpin-Tsai (random) model, indicating that there is some degree of orientation of the fibres along the testing direction, see Figure 10. This could be due to the induced polymer flow during the extrusion and injection moulding processes, which allows the fibres to be aligned along the length of the tensile specimens. In addition, the measured dimension of the basalt fibre has a large standard deviation that contributed to some over-prediction.
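To make the comparison between the two model variants concrete, the short Python sketch below evaluates the Halpin-Tsai prediction (shape factor twice the aspect ratio, as stated above) and the orientation-averaged variant (ξ = 2 for E_T and ξ = (0.5AR)^1.8 for E_//). The fibre and matrix properties and the aspect ratio are illustrative placeholders rather than the Table 7 values, and the 3/8-5/8 averaging is an assumed common form, not necessarily the exact expression used in the paper.

def halpin_tsai(E_p, E_m, v_f, xi):
    """Halpin-Tsai estimate of the composite modulus (standard form)."""
    eta = (E_p / E_m - 1.0) / (E_p / E_m + xi)
    return E_m * (1.0 + xi * eta * v_f) / (1.0 - eta * v_f)

def halpin_tsai_random(E_p, E_m, v_f, aspect_ratio):
    """Orientation-averaged estimate for randomly oriented short fibres.

    Uses xi = (0.5*AR)**1.8 for the longitudinal term and xi = 2 for the
    transverse term, combined with a 3/8-5/8 in-plane average (assumed form).
    """
    E_par = halpin_tsai(E_p, E_m, v_f, (0.5 * aspect_ratio) ** 1.8)
    E_perp = halpin_tsai(E_p, E_m, v_f, 2.0)
    return 0.375 * E_par + 0.625 * E_perp

# Illustrative inputs (placeholders, not the measured Table 7 values).
E_fibre = 89.0    # GPa, typical basalt fibre modulus (assumption)
E_matrix = 1.5    # GPa, typical PP modulus (assumption)
AR = 20.0         # fibre aspect ratio after compounding (assumption)

for wt_frac, v_f in [(0.20, 0.08), (0.30, 0.13), (0.40, 0.18)]:
    aligned = halpin_tsai(E_fibre, E_matrix, v_f, 2.0 * AR)
    random_ = halpin_tsai_random(E_fibre, E_matrix, v_f, AR)
    print(f"{wt_frac:.0%} BF: aligned ~ {aligned:.2f} GPa, random ~ {random_:.2f} GPa")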
Young Modulus of UD Basalt Fibre Composites
The Young's moduli of the UD composites were predicted using the rule of mixtures and the parameters used were the same as those of the BF-PP short-fibre composite. The density and Young's modulus of polycarbonate were taken to be 1.2 g/cm 3 and 2.3 GPa [31], and the parameters used for basalt fibre and polypropylene were summarised in Table 7.
The experimental Young's modulus values reached at least 69% and 81% of the theoretical values for BF-PP and BF-PC composites, respectively, as shown in Table 8. This indicated BF-PC composites had better stress transfer than BF-PP composites. It was noted that the efficiency factor for BF-PP composites did not differ with the use of the compatibiliser, which may be due to the reduction of Young's modulus of the matrix by the compatibiliser that would affect the value of the theoretical Young's modulus. This could be further enhanced by refinement of the pre-consolidation process parameters and with further advancement in compatibiliser technology.
Tensile Strength of Basalt Fibre Composites
The tensile strengths of the BF-PC and BF-PP composites in this study were plotted against the tensile strengths of BF and GF composites in different matrices, which were reported by other researchers, as shown in Figure 11. As compared to BF and glass fibre (GF) thermoset composites, the studied BF-PC thermoplastic composite could achieve comparable mechanical properties. These BFRTP could be a bio-derived alternative to GF composites for structural applications that can be adopted by various sectors such as construction, automotive, and marine applications.
Conclusions
Basalt continuous fibre composites with polypropylene (PP) and polycarbonate (PC) matrices were fabricated using a compression moulding process, and process optimisation was conducted to resolve the fibre spreading issues and improve fibre-matrix impregnation. Drying is critical for BF-PC composites, and the modified procedure with a higher pre-consolidation temperature enabled good matrix impregnation into the basalt fibres with minimal fibre movements. For BF-PP composites, a compatibiliser was required to improve fibre-matrix compatibility and the tensile strength. Moreover, it was found that the tensile and impact properties of the BF-PP short-fibre composites did not increase beyond a fibre loading of 30 wt% if no compatibiliser was used, but the values continued to increase at 40 wt% with the use of compatibiliser. When benchmarked against BF and glass fibre (GF) thermoset composites, a lower fibre volume fraction was obtained for the BFRTP, but the BF-PC composites could achieve comparable mechanical properties. The Young's moduli of the BF-PP short-fibre composites were better predicted by the Halpin-Tsai model than by the Halpin-Tsai (random) model, indicating induced fibre orientation along the testing direction during processing of the short BF-PP composites. The experimental Young's modulus values reached at least 69% and 81% of the theoretical values for BF-PP and BF-PC composites, respectively, which indicated that BF-PC composites had better stress transfer than BF-PP composites. This study lays the foundation for the processing and basic mechanical properties of basalt fibre thermoplastic composites with PC and PP as the polymer matrix and contributes to the working knowledge of BFRTP for potential future applications in various sectors such as construction, aerospace, and marine.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
|
v3-fos-license
|
2017-09-14T09:58:07.862Z
|
2013-08-28T00:00:00.000
|
21469709
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.ajol.info/index.php/tjpr/article/download/93285/82698",
"pdf_hash": "c9ee9ae7787d1c17ba91d076facbeaa2efa5e993",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42477",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "c9ee9ae7787d1c17ba91d076facbeaa2efa5e993",
"year": 2013
}
|
pes2o/s2orc
|
Comparative Analgesic and Anti-inflammatory Activities of Two Polyherbal Tablet Formulations ( Aujaie and Surangeen ) in Rats
Purpose: To investigate the analgesic and anti-inflammatory activities of two polyherbal medicines, Aujaie and Surangeen, to ascertain their therapeutic claims. Methods: A total of 96 rats were divided into two equal groups; one for determination of anti-inflammatory activity and the other for analgesic activity. Anti-inflammatory and analgesic activities were evaluated by the carrageenan-induced paw edema and formalin-induced paw licking tests, respectively. For both studies, group I (untreated control) received 1 ml/kg (po) of gum suspension 1 h before carrageenan injection. Aspirin (100 mg/kg, po) was given to group II (treated control) before injection. Groups III, IV and V were administered aujaie orally (3, 4 and 5 mg/kg, po, respectively), while surangeen tablets (10, 20 and 40 mg/kg, po) were given to groups VI, VII and VIII, respectively. Pain was experimentally induced by injecting 0.1 ml of 2.5% formalin (40% formaldehyde in distilled water) via the subplantar region of the left hind paw. Results: There was significant (p < 0.05) anti-inflammatory activity for aspirin (group II) as well as for groups III-VIII, with paw edema inhibition (PDI) ranging from 24.6 to 90.2%. There was significant (p < 0.05) analgesic activity in groups II, VI and VII, while in groups III-V and VIII the activity was not significant. Conclusion: Aujaie and surangeen tablets exhibited pronounced analgesic and anti-inflammatory activities in rats depending on the dose employed.
INTRODUCTION
In Pakistan, Hakims (herbal practitioners) use various local herbal polypharmaceutical preparations for the treatment of different types of rheumatic diseases. Although the use of these products has a sound tradition and rational background according to the herbal system of medicine, it is essential to investigate their validity by scientific methods [1][2][3][4][5]. Such studies can help to determine their therapeutic usefulness. Aujaie and Surangeen tablets have been used in the traditional (Unani) system of medicine for the treatment of inflammation and pain associated with rheumatoid arthritis and osteoarthritis [6]. Surangeen tablet is a herbal polypharmaceutical formulation containing extracts of 6 medicinal plants [6]. Aujaie tablet is another herbal polypharmaceutical formulation containing extracts of 9 medicinal plants. Each tablet contains Calchicum luteum 25 mg, Withania somnifera 20 mg, Zingeber officinalis 20 mg, Aloe indica 10 mg, Curculigo orchioides 10 mg, Ptychotis ajowan (Ajwain) 10 mg, Pimpinella anisum 10 mg, Balsamodendron mukul 5 mg and Pistacia lentiscus 5 mg [5][6][7].
Therefore, the objective of this study was to evaluate and compare the anti-inflammatory and analgesic activities of the aforementioned two herbal poly-pharmaceutical preparations in rats.
Animals and drug administration
Wistar albino rats of either sex weighing 200-250 g were used. Ninety-six animals were used, 48 for each experiment, i.e., the analgesic and anti-inflammatory studies. These 48 rats were sub-grouped as shown in Table 1. The animals were housed in standard polypropylene cages, kept under controlled room temperature (25 ± 10 °C, relative humidity 60-70%) and fed a standard laboratory diet with water ad libitum. Two sets of eight groups of six animals each were used for the experiments (Table 1). The doses of tablets administered to the animals were selected based on human doses (Table 1) as indicated in Unani literature [3][4][5]. The animal studies were approved (approval ref no. 33-BP/2009/SU) by the Departmental Ethical Committee and were conducted according to international guidelines [8] as well as the guidelines of the Institutional Animal Ethical Committee [7].
Preparation of solutions
Suspensions of all test tablets were freshly prepared by suspending tablet powder in a 0.5% suspension of gum tragacanth. The 0.5% (w/v) gum tragacanth suspension was prepared by dissolving 0.05 g of gum tragacanth in 10 ml of distilled water. The 1% (w/v) carrageenan suspension was prepared by dissolving 1 g of carrageenan in 100 ml of normal saline. The 2.5% formalin solution was prepared by dissolving 6.8 ml of 37% formalin in 100 ml of distilled water.
Evaluation of anti-inflammatory activity
The animals were fasted for 24 h with free access to water prior to the experiments. Approximately 100 µl of 1% carrageenan suspension (prepared 1 h before each experiment) was injected into the plantar surface of the right hind paw of the rat [9,10], and the site of injection was marked.
Rats of group I (control group) received only gum tragacanth solution 1 h before carrageenan injection. Similarly, aspirin was given to group II (standard group). Three different doses of each of the two herbal preparations were given orally to groups III, IV, V, VI, VII and VIII, respectively, by gastric lavage. The anteroposterior diameter of the rat paw was measured at 0, 1, 2 and 3 h after carrageenan injection using vernier calipers (AM13, Emmay, Pakistan) at the marked site. The difference between the basal value of the paw diameter and that measured at the different time intervals was noted in millimeters and was regarded as the degree of edema (inflammation) developed after carrageenan injection [11][12][13][14][15][16]. Paw edema inhibition (PI) at different doses of the test and standard drugs was calculated by comparison with untreated control rats, as in Eq 1 [17].
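Written out from the definitions that follow, Eq 1 corresponds to the standard percentage-inhibition expression:

\begin{equation}
\mathrm{PI}\,(\%) = \frac{(V_t - V_o)_C - (V_t - V_o)_T}{(V_t - V_o)_C} \times 100
\end{equation}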
where V_t is the rat paw volume at time t, V_o is the initial rat paw volume (basal value), (V_t - V_o)_C is the edema produced in the control group, and (V_t - V_o)_T is the edema produced in the treatment group.
Evaluation of analgesic activity
Three different doses of each of the herbal preparations were given orally by gastric lavage to animals of groups III, IV, V, VI, VII and VIII, respectively. Group I and II animals received gum tragacanth and aspirin suspension, respectively. After 1 h, analgesic activity was determined using the formalin-induced paw licking test. 100 µl of 2.5% formalin was injected into the dorsal surface of the left hind paw. After injecting formalin, the rats were observed for 30 min and the number of lickings was recorded [19,20]. Analgesic activity was expressed as "none", "mild" and "good" if the reduction in the number of lickings was < 20%, ≥ 20% but < 40%, and ≥ 40% of control, respectively [20].
Behavioral pattern studies
For preliminary screening of toxic effects of these herbal polypharmaceutical preparations, all treated rats were kept under close observation for one week following dosing. Symptoms including awareness, mood, motor activity, CNS excitation, posture, motor incoordination, muscle tone and reflexes were recorded for 7 days [4]. Any mortality occurring during the next two weeks was also recorded.
Statistical analysis
Anti-inflammatory and analgesic activities were analyzed using Chi-square test with the aid of SPSS, version 13.0 software (IBM, USA), and p < 0.05 was considered statistically significant.
Anti-inflammatory activity
As shown in Table 2, there was significant (p < 0.05) anti-inflammatory activity in the aspirin group (group II) at the doses administered. For groups III-VIII, anti-inflammatory activity (paw edema inhibition, PDI) was in the range of 78.58-90.23% after 1 h and 24.59-59.0% after 3 h. However, after 2 h, anti-inflammatory activity was significant only in groups IV and V, with PDI of 44.62 and 63.39%, respectively, but was not significant in groups III and VI-VIII, with PDI of 12.05-42.60%.
Analgesic activity
There was significant (p < 0.05) analgesic activity (number of paw lickings) in group II but not in groups III-V and VIII. On the other hand, groups VI and VII showed significant (p < 0.05) analgesic activity (Table 2).
Side effects observed in rats treated orally with different doses of herbal poly-pharmaceutical preparation are presented in Table 3.
DISCUSSION
One of the most widely used primary tests for screening anti-inflammatory activity of drugs is the carrageenan-induced paw edema in rats [17], while formalin-induced paw licking test has been recommended for screening of analgesic activity.
The results obtained in the present investigation indicate potent anti-inflammatory and analgesic activities of both polyherbal preparations, Aujaie and Surangeen.
For anti-inflammatory activity, both test herbal drugs were administered orally at the recommended doses and in the prescribed manner. Anti-inflammatory activity was observed from the very first hour and continued to the end of the test in all animal groups. This activity may be due to the inhibition of different aspects and chemical mediators (such as kinin, histamine, and 5-HT) of inflammation, as established for aspirin [7,[9][10][11][12][13][14].
The results indicate dose-dependent anti-inflammatory activity for both drugs. The none-to-mild anti-inflammatory action of surangeen probably indicates that it was not able to sufficiently inhibit the kinin-like substances responsible for the 2nd-hour plateau phase of inflammation [18]. In the 3rd hour, both aujaie and surangeen exhibited non-significant anti-inflammatory activity, suggesting that they were not able to completely counteract prostaglandin release, which might be responsible for the last accelerating phase of inflammation, as described previously [13]. Histamine and 5-HT are mainly responsible for vasodilatation and increased vascular permeability.
The anti-inflammatory activity of aujaie was not intense in the 2nd and 3rd hour but was comparable to that of aspirin, indicating that aujaie and aspirin inhibit the histamine- and serotonin-mediated first phase of inflammation, but aujaie is less effective in shortening the kinin-mediated plateau interval of the 1st phase and the prostaglandin-mediated acceleration phase of inflammation [15]. When kinin release occurs, it activates B1 and/or B2 receptors, releasing other inflammatory mediators such as prostaglandins, leukotrienes, histamine, nitric oxide, platelet activating factor and cytokines, among others, derived mainly from leucocytes, mast cells, macrophages and endothelial cells, causing cell influx and plasma extravasation and ultimately prolonging the second phase of inflammation [5]. Therefore, any anti-inflammatory agent that cannot inhibit the kinin plateau of the 1st phase will not be able to inhibit the 2nd phase of inflammation. It has been reported that the second phase of edema is sensitive to both clinically useful steroidal and non-steroidal anti-inflammatory agents [18,20]. This was observed in the positive control, whereby aspirin significantly reduced edema.
Pain is associated with the pathophysiology of various clinical conditions such as arthritis, muscular pain, cancer and vascular diseases. Formalin-induced paw licking is a suitable method for assessing analgesic activity as it is sensitive to various classes of analgesic drugs and can therefore be used to clarify the possible mechanism of the anti-nociceptive effect of an analgesic. Surangeen showed mild analgesic activity, in the same manner as its anti-inflammatory activity. As for anti-inflammatory activity, aujaie exhibited higher analgesic activity than surangeen but less than aspirin.

The formalin test is a biphasic phenomenon involving the direct stimulation of sensory nerve endings, which ultimately releases inflammatory mediators such as histamine and serotonin in the late phase. Centrally acting drugs such as opioids inhibit both phases equally, but peripherally acting drugs such as aspirin, indomethacin and dexamethasone inhibit only the late phase. The late phase seems to be an inflammatory response with inflammatory pain that can be inhibited by anti-inflammatory drugs [19]. The effect of aujaie on the first and second phase of formalin-induced paw licking suggests that its activity may be due to a central action.
CONCLUSION
Both surangeen and aujaie showed significant and consistent anti-inflammatory and analgesic activities in experimental rats probably due to the inhibition of release of histamine, serotonin (5-HT), kinin and prostaglandin.These findings, therefore, support the folkloric use of these polyherbal preparations for the treatment of rheumatism.However, further studies are required to elucidate the exact mechanism(s) of the anti-inflammatory and analgesic activities as well as establish their efficacy and safety for clinical use in rheumatism.
Table 2: Anti-inflammatory (carrageenan-induced paw edema) and analgesic activities (number of paw lickings) of two herbal products in rats
Table 3: Number of rats treated orally with different doses of the herbal preparations showing side effects
|
v3-fos-license
|
2021-01-05T15:56:25.714Z
|
2021-01-05T00:00:00.000
|
230719113
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/s12889-020-10079-8",
"pdf_hash": "42a18c7c5ab58cf28638972b5a4f10bd6aaff536",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42479",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "42a18c7c5ab58cf28638972b5a4f10bd6aaff536",
"year": 2021
}
|
pes2o/s2orc
|
Temporal trends and regional disparities in cancer screening utilization: an observational Swiss claims-based study
We examined colorectal, breast, and prostate cancer screening utilization in eligible populations within three data cross-sections, and identified factors potentially modifying cancer screening utilization in Swiss adults. The study is based on health insurance claims data of the Helsana Group. The Helsana Group is one of the largest health insurers in Switzerland, insuring approximately 15% of the entire Swiss population across all regions and age groups. We assessed proportions of the eligible populations receiving colonoscopy/fecal occult blood testing (FOBT), mammography, or prostate-specific antigen (PSA) testing in the years 2014, 2016, and 2018, and calculated average marginal effects of individual, temporal, regional, insurance-, supply-, and system-related variables on testing utilization using logistic regression. Overall, 8.3% of the eligible population received colonoscopy/FOBT in 2014, 8.9% in 2016, and 9.2% in 2018. In these years, 20.9, 21.2, and 20.4% of the eligible female population received mammography, and 30.5, 31.1, and 31.8% of the eligible male population had PSA testing. Adjusted testing utilization varied little between 2014 and 2018; there was an increasing trend of 0.8% (0.6–1.0%) for colonoscopy/FOBT and of 0.5% (0.2–0.8%) for PSA testing, while mammography use decreased by 1.5% (1.2–1.7%). Generally, testing utilization was higher in French-speaking and Italian-speaking compared to German-speaking region for all screening types. Cantonal programs for breast cancer screening were associated with an increase of 7.1% in mammography utilization. In contrast, a high density of relevant specialist physicians showed null or even negative associations with screening utilization. Variation in cancer screening utilization was modest over time, but considerable between regions. Regional variation was highest for mammography use where recommendations are debated most controversially, and the implementation of programs differed the most.
Background
In Switzerland, cancer was the second most common cause of death in 2017 [1]. Cancer was shown to have overtaken cardiovascular diseases as the leading cause of death in 12 European Union countries [2]. In Switzerland as well as internationally, public health officials and disease advocacy groups have worked hard in the past years to persuade the population of the importance of targeted cancer screening. These efforts have led to an increased uptake of screening, both in Switzerland and internationally, and have yielded intended results. For example, screening colonoscopy was associated with decreased colorectal cancer incidence and mortality [3,4]. The proportion of colorectal cancer deaths preventable by colonoscopy use within 10 years has been estimated to be 30.7% in Germany, and 33.9% in the United States [5]. However, while many guidelines consistently recommend the use of some preventive measures, such as colorectal cancer screening in certain age groups [6], other screenings, such as breast cancer screening, continue to be controversial because it is unclear whether lifetime benefits outweigh harms and costs in individuals [7]. Many adults receive routine cancer screening even in old age when it is no longer recommended [8]. The burden associated with overdiagnosis and overtreatment is becoming an increasingly recognized issue.
In Switzerland, colorectal cancer screening is recommended routinely between the age of 50 and 69 years, while routine screening of prostate cancer is discouraged without prior comprehensive education of the patient on benefits and harms and shared decision-making [6,9]. In fact, prostate-specific antigen (PSA)-based screening without prior informed decision making is one of five listed procedures to be avoided in the ambulatory sector, according to the Swiss Society of Internal Medicine (www.smartermedicine.ch). Breast cancer screening is often recommended, but this recommendation is debated in Switzerland [7,10]. Since 2011, an increasing number of cantons established breast cancer screening programs. Overall, the implementation of cancer screening programs differs considerably between cantons. Previous research, mainly based on the years 2007 to 2012, has found substantial temporal and regional variation in cancer screening utilization for all three cancer types in Switzerland [11][12][13]. Screening rates were generally higher in urban areas and in French- and Italian-speaking regions compared to the German-speaking region [11][12][13]. While breast cancer screening utilization decreased over time in Switzerland, as well as in Europe and the US [13][14][15], colorectal cancer screening utilization seemed to increase [12,16]. Prostate cancer screening utilization varied, with increasing numbers in Switzerland and Sweden and decreasing trends in the US [11,17,18]. However, more recent findings are lacking.
Besides system-related factors, like the existence of national or cantonal screening programs, further factors seem to play a role in whether or not persons participate in cancer screening, such as individual and supply-related variables [8,[19][20][21][22]. Furthermore, the patient's type of health insurance plan and healthcare utilization (such as physician consultations) were associated with cancer screening utilization [23,24]. Knowledge in the field of cancer screening coverage and its related factors is important for healthcare providers and policymakers as well as for patients when debating the future directions of planning and resource allocation. Real-world and updated data obtained from routinely collected sources such as health insurance claims are particularly suitable for the study of screening coverage, because they are not collected by means of self-reporting and, as such, results are not distorted by inherent recall bias [25]. We therefore aimed to examine colorectal, breast, and prostate cancer screening utilization in the appropriate target populations in the years 2014, 2016, and 2018. Moreover, we aimed to identify factors potentially modifying cancer screening utilization in Swiss adults, based on health insurance claims data.
Study design and study population
This is a retrospective, observational study based on insurance claims data of adults, who were insured at Helsana Group in the period from January to December of the years 2014, and/or 2016, and/or 2018, and also in the year preceding each applicable cross-section. The Helsana Group is one of the largest health insurers in Switzerland, insuring approximately 15% of the entire Swiss population across all regions and age groups. Health insurance is mandatory for all Swiss citizens and is based on a cost sharing obligatory basic coverage consisting of deductibles and co-payments. The height of the deductible ranges from Swiss Francs (CHF) 300 to 2500 and can -to some extent -be chosen by the insured person, whereby higher deductibles lead to lower premiums. Co-payments amount to 10% of the yearly healthcare costs and are limited to CHF 700 per person per year. On top of the mandatory insurance, citizens can buy supplementary hospital insurance, which covers further comfort of (semi-)private wards, free choice of physician, and speed of access to elective procedures.
In Switzerland, colorectal cancer screening is recommended routinely between the age of 50 and 69 years, using fecal occult blood testing (FOBT) biennially or colonoscopy every 10 years [6]. These opportunistic screenings are reimbursed by mandatory health insurance since 2013 but are not exempted from deductible. Cantonal screening programs exist since January 1st, 2015 in the canton of Uri, and since September 1st, 2015 in the canton of Vaud. Screenings at the ages 50 to 69 years within these cantonal programs are exempted from deductible, but participants still owe a 10% co-payment. Further cantonal programs did not start before 2019. So, in the years 2016 and 2018, 90.4 and 90.5% of the Swiss population lived in a canton without a colorectal cancer screening program. Between the age of 50 and 69 years (or 74, depending on the canton), mammography is recommended for breast cancer screening biennially [26]. Opportunistic screenings are reimbursed by mandatory health insurance, but they lack quality control of mammography and are not systematically monitored [10]. All mammography screenings in the context of breast cancer screening programs (programmatic screenings) in the cantons of Thurgau, Neuchâtel, Fribourg, Jura, Geneva, Bern, Valais, Vaud (for women between ages 50-74 years), as well as in the cantons of Grisons and St. Gallen (for women between ages 50-69 years) are exempted from deductible, but participants still owe 10% co-payment, except for Jura (up to December 31st 2017) and Valais (up to December 31st 2016), where co-payments are covered by a foundation. In mid-2014, a cantonal screening program was introduced in Basel, and beginning 2015, a further program started in the canton of Ticino (for women aged 50-69 years). Taken together, in the years 2014, 2016, and 2018, 50.6, 40.8 and 41.0% of the Swiss population lived in a canton without a breast cancer screening program. Finally, routine screening of prostate cancer is discouraged by guidelines [9]. No national or cantonal screening program exists.
For each of the observed years, men or women aged 50 to 74 years were considered eligible for prostate or breast cancer screening, respectively, while individuals aged 50 to 69 years were considered eligible for colorectal cancer screening. Collectively across all three data cross-sections, 10.3% of individuals in the colorectal, 8.0% in the breast, and 6.3% in the prostate cancer screening populations were excluded because of missing data (enrollees without full coverage during the observation time, enrollees living abroad, Helsana employees, and enrollees seeking asylum). Consequently, the final study population for colorectal cancer screening comprised 270′576, 261′682, and 244′328 individuals in the years 2014, 2016, and 2018, respectively. The corresponding numbers were 171′186, 166′675, and 165′328 for breast, as well as 160′661, 157′269, and 155′944 for prostate cancer screening.
The present study falls outside the scope of the Swiss Federal Act on Research involving Human Beings (Human Research Act, HRA), because it is retrospective and based on anonymized routine administrative claims data. No informed consent from patients or further ethics approval was needed, as all requirements of article 22 of the Swiss data protection law were fulfilled. This was confirmed by a waiver of the ethics committee (Kantonale Ethikkommission Zürich, dated January 11, 2017).
Measures
Inpatient and outpatient codes used to identify screening services have been published elsewhere [27]. In short, colonoscopy, mammography and PSA testing were used to define colorectal, breast or prostate cancer screening utilization, regardless of whether the tests were used for screening or diagnostic purposes. In contrast to Ulyte et al., we additionally considered FOBT as a colorectal cancer screening test. Sociodemographic factors (sex and age), health-related factors (number of chronic conditions assessed by means of the Pharmacy-based Cost Group (PCG) model [28], and having had a major surgery or disease associated with the specific cancer of interest, based on inpatient and outpatient diagnoses and treatments in the preceding year (specific disease)), as well as the patient's type of health insurance plan (supplementary hospital insurance, managed care, and deductible level) were included as explanatory variables. Regional (urban/rural residence and language region (German, French or Italian)) and system-related factors (existence of a cantonal screening program) were also considered. In the present data set, adults from two cantons belonging to two different language regions were enrolled; the canton of Bern (BE) incorporates German-speaking and French-speaking regions, and the canton of Grisons (GR) incorporates German-speaking and Italian-speaking regions. The Rhaeto-Romanic region of GR (hosting < 1% of inhabitants) was assigned to the German-speaking region. Furthermore, screening-specific specialist physician density information of the corresponding year was provided by the Swiss Medical Association (FMH) and included as a supply-related factor (gastroenterologist density for colonoscopy/FOBT, gynecologist density for mammography, and urologist density for PSA testing utilization). Finally, beyond the respective screening (specific testing), the following healthcare utilization measures were considered as explanatory variables: the number of physician consultations, total healthcare costs, and at least one acute hospital admission, all measured in the preceding year, as well as colonoscopy/FOBT in the same year (for the mammography and PSA testing analyses).
Most variables were originally measured on a nominal scale. All continuously measured variables were transformed into categories before their use in the regression analysis: age (five-year groups), deductible (above CHF 500 yes/no), specialist physician density (above median density yes/no), number of chronic conditions (none, one, multiple), number of physician consultations (quartiles), healthcare costs (quartiles), and acute hospital admissions (at least one yes/no).
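To make this categorization step concrete, the following is a minimal R sketch (R being the software used for the analyses, see below); the data frame, column names, and cut points are hypothetical illustrations mirroring the categories described above, not the actual Helsana variables.

```r
# Hypothetical claims data frame with one row per eligible enrollee and year.
claims <- data.frame(
  age            = c(52, 61, 68, 74),
  deductible_chf = c(300, 2500, 500, 1500),
  n_chronic      = c(0, 1, 3, 2),
  n_consult      = c(2, 0, 14, 6),
  costs_chf      = c(800, 150, 12000, 3100)
)
claims <- transform(
  claims,
  age_group       = cut(age, breaks = seq(50, 75, by = 5), right = FALSE),
  high_deductible = factor(ifelse(deductible_chf > 500, "yes", "no")),
  chronic_cat     = cut(n_chronic, breaks = c(-Inf, 0, 1, Inf),
                        labels = c("none", "one", "multiple")),
  consult_q       = cut(n_consult, breaks = quantile(n_consult, probs = 0:4 / 4),
                        include.lowest = TRUE),
  costs_q         = cut(costs_chf, breaks = quantile(costs_chf, probs = 0:4 / 4),
                        include.lowest = TRUE)
)
str(claims)
```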
Statistical analysis
The baseline characteristics of all included study subjects are presented as counts and percentages, or as mean and standard deviation for continuous variables. For each of the three screening types, we compared subjects with and without the respective testing. We calculated the testing prevalence per year (2014, 2016, 2018) for each cancer screening type, and we then tested whether the testing prevalences were equal using Chi-squared tests, pairwise between years (with Holm correction for multiple testing), as well as across all 3 years. Additionally, we calculated the age-standardized testing prevalence per canton. Small cantons with low numbers of observations were grouped with a neighboring canton where sensible (Appenzell Innerrhoden and Appenzell Ausserrhoden, Neuchâtel and Jura, Obwalden and Nidwalden for colorectal cancer screening, and Uri and Glarus for breast and prostate cancer screening). Furthermore, a simple probability-rate-probability conversion (assuming constant testing rates) was performed to estimate the longer-term testing prevalence, thereby taking the recommended screening interval into account [29].
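As an illustration of such a probability-rate-probability conversion, the following R sketch shows one common form of it under a constant-rate assumption; the exact formulation used in the study is the one described in [29], and the input value below is purely illustrative.

```r
# Convert a one-year testing probability into a constant rate and back into a
# probability over the recommended screening interval (constant-rate assumption).
longer_term_prevalence <- function(annual_prev, interval_years) {
  rate <- -log(1 - annual_prev)          # probability -> yearly rate
  1 - exp(-rate * interval_years)        # rate -> probability over the interval
}
# Illustrative only: an 8% one-year prevalence over a 10-year colonoscopy interval.
longer_term_prevalence(annual_prev = 0.08, interval_years = 10)
```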
In logistic regression models with testing in a given year as the outcome variable, we calculated the average marginal effect, i.e., the averaged difference in the predicted probability of having the outcome, for each of the included covariates [30,31]. In conjunction with the fact that all included covariates are categorical, the average marginal effects facilitate the interpretation of each association (direction and magnitude) between each covariate and the outcome on the probability scale. This exploratory analysis was performed on the pooled data of all three cross-sections, for each screening type separately. One assumption in logistic regression is the independence of all observations, which is violated in the pooled cross-sections where some subjects are observed in more than one cross-section. This violation could have led to an incorrect estimation of the variance of the effect estimates. A sensitivity analysis using clustered covariance matrix estimation with individuals as clusters showed similar interval estimates for most covariates [32][33][34]. Since we have very few (one to three) observations per cluster, these estimations may not work well, and we therefore show these results as supplementary material only (Additional file 3) [34].
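To show how an average marginal effect can be obtained for a categorical covariate, here is a minimal, self-contained R sketch on simulated data; the variable names, model formula, and effect sizes are illustrative assumptions and not the study's actual specification.

```r
# Average marginal effect (AME) of a binary covariate in a logistic regression:
# the averaged difference in predicted probabilities when the covariate is
# switched from "no" to "yes" for every subject. Simulated data, not claims data.
set.seed(1)
n   <- 5000
dat <- data.frame(
  high_deductible = factor(sample(c("no", "yes"), n, replace = TRUE)),
  age_group       = factor(sample(paste0("g", 1:5), n, replace = TRUE))
)
p <- plogis(-1.2 - 0.4 * (dat$high_deductible == "yes") + 0.1 * as.integer(dat$age_group))
dat$tested <- rbinom(n, 1, p)

fit <- glm(tested ~ high_deductible + age_group, family = binomial(), data = dat)

d_yes <- dat; d_yes$high_deductible <- factor("yes", levels = levels(dat$high_deductible))
d_no  <- dat; d_no$high_deductible  <- factor("no",  levels = levels(dat$high_deductible))
ame   <- mean(predict(fit, newdata = d_yes, type = "response") -
              predict(fit, newdata = d_no,  type = "response"))
ame   # difference in predicted testing probability, on the probability scale
```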
All analyses were performed using R version 3.6.1.
Results
Overall, the mean (sd) age was 59. Looking at the adjusted testing utilization in the multivariable regression analysis, there was a slight increase in colonoscopy/FOBT utilization of 0.6% (CI: 0.4-0.7%) in 2016 and 0.8% (CI: 0.6-1.0%) in 2018 as compared to 2014 (Fig. 1). These adjusted increases correspond to the raw increases of 0.6% between 2014 and 2016, and of 0.8% between 2014 and 2018 (Table 2). In multivariable regression analysis, several determinants were associated with testing utilization (Fig. 1 and Additional file 3). Utilization increased with increasing age for colonoscopy/FOBT and even more strongly for PSA testing, but decreased slightly with increasing age for mammography use. Being female was associated with a 1% (CI: 0.9-1.1%) lower probability of receiving colonoscopy/FOBT. Having had a major surgery or disease associated with the specific cancer of interest was strongly related to receiving colonoscopy/FOBT, mammography, or PSA testing in the observed year, although this applied to a small proportion of patients. Having one or more chronic conditions was positively associated with colonoscopy/FOBT and PSA testing, whereas multiple chronic conditions were slightly negatively associated with mammography use. Regarding the patient's type of health insurance plan, having supplementary hospital insurance was consistently associated with a 1.9 to 4.8% higher probability of testing utilization, depending on cancer type, while having a higher deductible was consistently associated with a 1.7 to 5.3% lower probability of testing utilization. The positive effect of being in a managed care model on testing utilization was minimal, but slightly higher for mammography use than for receiving colonoscopy/FOBT.
When compared to the German-speaking region, living in the Italian-speaking region was associated with a higher probability of receiving colonoscopy/FOBT, whereas living in the French-speaking region had almost no effect. In contrast, living in the French- and the Italian-speaking regions, compared to the German-speaking region, increased mammography use by 13.0% (CI: 12.6-13.4%) and 12.8% (CI: 12.3-13.3%), respectively, and PSA testing by 6.5% (CI: 6.1-6.8%) and 9.6% (CI: 9.2-10.1%), respectively. The average marginal effect of living in a rural area on testing utilization was negative, but mostly small, for all cancer types. The existence of a cantonal program had a positive impact of 7.1% (CI: 6.8-7.4%) on mammography utilization, as well as a small positive impact of 0.9% (CI: 0.5-1.2%) on colonoscopy/FOBT utilization. While the cantonal density of gastroenterologists and gynecologists seemed to have no influence on colonoscopy/FOBT and mammography utilization, the cantonal density of urologists was negatively associated with PSA testing.
High healthcare utilization, assessed by higher healthcare costs and more physician consultations in the preceding year, was associated with a higher probability of being tested for all cancer types. Furthermore, receiving colonoscopy/FOBT in the corresponding year was highly related to mammography and PSA testing utilization. In contrast, the respective testing in the previous year showed varying associations: colonoscopy/FOBT in the previous year was associated with a slightly higher probability of receiving colonoscopy/FOBT in the observed year. Previous PSA testing was strongly associated with retesting in the observed year, while mammography in the previous year was negatively associated with present mammography utilization. All the above-mentioned effects hardly changed when the different observation periods were analyzed separately (results not shown). Looking at the regional distribution of age-standardized testing utilization, we found significant differences between the three language regions on the one hand, and between cantons with and without screening programs on the other hand. Since these interactions cannot be captured by adjusted regression modelling, we illustrate this interrelation in Fig. 2.
In bilingual cantons incorporating more than one language region (BE, GR), both the existence of a program and the language region seem to influence screening participation. In the German-speaking regions, the age-standardized mammography utilization rates were approximately 7% higher in cantons with a breast cancer screening program compared to cantons without such a program. Notably, in 2018 in the German-speaking part of Bern, where screening programs were reorganized in 2017/2018, the utilization rate was significantly lower than in the French-speaking part of Bern, where a screening program was jointly established together with the cantons of Jura and Neuchâtel in 2011. As all French-speaking regions belonged to cantons with existing screening programs, the effect of a program in these regions could not be evaluated. In the Italian-speaking region (mainly represented by the canton of Ticino), the utilization rate increased significantly after the introduction of the cantonal screening program in 2015. In contrast, this increase was much smaller in the canton of Grisons, where only a small part of the population lives in an Italian-speaking region. The supplement shows corresponding bubble plots for colonoscopy/FOBT use and PSA testing (Additional files 4 and 5).
Discussion
Variations in cancer screening utilization were modest over time, but considerable between regions. Regional variation was highest for mammography use where recommendations are debated most intensively, and the implementation of programs differed considerably. The present study showed an increasing trend of 0.8% (0.6-1.0%) for colonoscopy/FOBT and of 0.5% (0.2-0.8%) for PSA testing, while mammography decreased by 1.5% (1.2-1.7%) between 2014 and 2018.
Although colorectal cancer screening by means of colonoscopy or FOBT is clearly recommended and has been promoted since 2014 in Switzerland, e.g. by pharmacies, colonoscopy/FOBT utilization in this population-based study was rather low and has hardly changed since then. Considering the recommended ten-year screening interval for colonoscopy and the two-year interval for FOBT, about 58% of the eligible population would have been tested by 2018. This proportion is slightly higher compared to previous Swiss and Italian findings, but slightly lower than screening participation in the US. In a Swiss cross-sectional study conducted in 2017 [35], 41% of patients who visited a primary care physician had a colonoscopy within 10 years and 4% had a FOBT within 2 years. According to an earlier population-based Swiss survey in 50 to 75 year old persons, colorectal cancer screening defined as endoscopy (either colonoscopy or sigmoidoscopy) in the past 10 years or FOBT in the past 2 years increased from 18.9% in 2007 to 22.2% in 2012; this increase within 5 years was more substantial compared to what we found, and was due to growing endoscopy numbers in 2012, while FOBT decreased [12]. The overall higher screening utilization in our study might be due to the addition of colorectal cancer screening to the benefit basket of the basic insurance coverage in Switzerland in 2013. Moreover, our inability to discriminate between diagnostic and screening colonoscopy/FOBT, and the differences in study designs (claims-based versus survey-based), might partially explain the different findings. A recent Italian study in women aged 50-54 years found participation rates within the last 2 years for colorectal cancer screening (FOBT) of 45.1% [36]. In the US, 64.5% of respondents aged 50 to 75 years reported having participated in colorectal cancer screening by 2010 [37]. The slight decline in mammography utilization in our study was similar to previous Swiss and European findings. For example, the proportion of Swiss women with any mammography in the last 12 months decreased from 19.1% in 2007 to 11.7% in 2012 in a survey-based study [13]. Annual participation rates for breast cancer screening varied between 23 and 84% in 17 European countries with mostly organized national or regional breast screening programs, with a decreasing trend even before 2014 [14]. Thus, mammography use within 1 year of approximately 20% (or 37% within 2 years) presented in our study is low when compared internationally. In a recent Italian study, mammography use within 2 years amounted to 85.1% [36]. Moreover, mammography use increased from 48% in 2007/08 to 54% in 2011/12 in a German city after the implementation of a mammography screening program by the end of 2005 [38]. The decline found in our analysis is likely to be influenced by the public debate about benefits and harms of breast cancer screening [39]. By the end of 2013, the Swiss Medical Board recommended that no new systematic mammography screening programs be introduced in Switzerland due to a lack of cost-effectiveness and undesirable effects outweighing desirable effects [40,41]. The relative risk reduction or life-saving effect is small, while false-positive results and overdiagnosis can cause considerable harm in screened patients [42,43]. Therefore, a more personalized approach is now recommended in the US, meaning that physicians should have a more informed discussion with patients.
Our analysis showed that the proportion of PSA testing remained above 30% between 2014 and 2018. These rates seem rather high, given the uncertainty about the usefulness of PSA screening and the potential harm caused by overdiagnosis and associated overtreatment. This is the reason why most organizations and societies in Europe and America, as well as the Swiss Medical Board, recommend against routine PSA screening without prior shared decision making [9,44]. According to our findings, the top five list by the Smarter Medicine initiative (www.smartermedicine.ch), published in 2014, does not seem to have considerably impacted PSA testing rates. Our results are in line with former Swiss findings. Between 1992 and 2012, use of PSA screening within the last 2 years increased from 32.6 to 42.4% in Swiss men aged 50 years and older [11]. In contrast, a US study demonstrated a decline in PSA testing after the publication of the 2012 USPSTF recommendation discouraging testing in asymptomatic men [44]. Since PSA testing in our study is strongly associated with further measures of healthcare utilization, like the number of consultations in the preceding year and concurrent colonoscopy/FOBT use, and is mainly related to patients with low deductibles and with multiple chronic conditions, we might speculate that this specific testing is done additionally in the course of other medical examinations, as no special equipment is needed.
In general, recommended screenings like colorectal cancer screening have not clearly increased and discouraged screenings like prostate cancer screening have not clearly decreased over time. However, the present results need to be interpreted with caution, as we were unable to discriminate between screening and diagnostic or follow-up testing (except if screening occurred within a cantonal program and was reimbursed as such). Particularly in patients with a major related surgery or disease, the colonoscopy/FOBT, mammography or PSA testing might be attributable to diagnostic or follow-up testing rather than screening purposes. This holds especially true for colonoscopy which is only recommended once in 10 years. However, the number of patients with related disease or surgery is low.
Screening utilization was associated with a variety of individual, regional, insurance-related, as well as supply- and system-related factors. The directions of the average marginal effects on testing utilization are comparable across all cancer types for most of these factors. However, age was positively associated with colonoscopy/FOBT and PSA testing, but inversely associated with mammography. The decline in the latter at older age is mostly due to a lower probability in women aged over 70 years, where screening is no longer supported by all cantonal programs. Being male was associated with a higher prevalence of colonoscopy/FOBT use, similar to a former Swiss study conducted in 2012 [12], but contrary to a Flemish study, where utilization rates in 2013 and 2014 were lower for men [45].
Having supplementary hospital insurance was consistently associated with a higher, while having a higher deductible with a lower, probability of screening utilization. Similar findings were reported for colorectal [46], as well as for breast cancer screening [13,47]. The marginal positive effect of being in a managed care model on cancer screening utilization is in line with a previous study showing positive associations, where slightly higher effects were also observed for breast than for colorectal cancer screening [24]. However, the effects observed in our study are small.
Screening utilization was generally more likely in the French- and Italian-speaking regions compared to the German-speaking region, except for colonoscopy/FOBT use, where living in the French-speaking region hardly had any effect. Regional variation was highest for mammography use, where recommendations are debated most and the implementation of programs differed considerably. In line with our findings, significant differences in breast cancer screening attendance between women in the French- and the German-speaking regions were found in the studies by Eichholzer et al. [48] and Fenner et al. [13]. Likewise, prostate cancer screening rates were higher in men living in the French- or Italian-speaking regions as compared to the German-speaking region, and in urban rather than rural areas [11]. The proportions of patients with either FOBT or colonoscopy also varied widely between language regions [35]. The increased number of screening programs as well as the higher screening utilization even in the absence of specific programs might point to a different attitude of patients and/or physicians towards preventive measures in the French- and Italian-speaking regions compared to the German-speaking region.
Cantonal programs for breast and colorectal cancer screening were associated with a small, but significant increase in testing utilization, although the association was stronger in the former, since only two cantonal colorectal cancer screening programs were in place by 2018, and because the overall proportion of persons receiving colonoscopy/FOBT was rather low. Generally, despite an increasing number of cantons offering breast cancer screening programs since 2011, the overall marginal effect showed a decreasing trend in mammography utilization. This decline is mainly based on cantons without any screening program. Similarly, the decline in mammography screening was more pronounced in cantons with no or with a long-standing screening program in the previous Swiss survey-based study [13]. In contrast, according to another Swiss study looking at data from the Swiss Health Survey in the years 1997, 2002, 2007, and 2012, only a small part of the (relatively high) mammography utilization rates could be attributed to organized programs, and non-use of mammography was not attributable to a lack of information or to financial barriers [47]. Another Swiss study compared participants of opportunistic with participants of organized mammography screening and found that mammography screening programs mainly attracted women in lower socio-economic strata [49]. Unfortunately, we were unable to differentiate between those two screening types by means of our data.
A high density of related specialist physicians had null or even a negative association with screening utilization in our study. This is in contrast to a German online survey where PSA testing was judged as useful by all urologists but only by half of the general practitioners, and where PSA testing practices varied between both clinician groups [50]. Higher PSA screening rates were also seen in regions where the primary care specialist was unlikely to be the predominant physician for ambulatory visits [22]. We can only speculate that PSA testing is done by primary care physicians to a very substantial extent. At least, PSA testing was higher in those with primary care physician visits in the preceding year [11].
High healthcare utilization, assessed by higher healthcare costs and more physician consultations in the preceding year, was associated with a higher probability of being tested for all three cancer types, although this association was less strong in the highest cost category. In line with our findings, having consulted a primary care physician or a specialist physician in the last 12 months was significantly associated with a higher prevalence of colorectal cancer screening in Switzerland in 2012 [12]. This implies that physicians assume their obligation to talk with their patients about preventive measures like cancer screening. Concurrent colonoscopy/FOBT use increased the probability of mammography or PSA testing. Likewise, US women who adhered to breast cancer screening recommendations were four times more likely to have had colorectal cancer screening [23]. PSA testing in the preceding year also increased PSA testing in the observed year. This finding is congruent with the clinical practice that individuals who are being screened are screened on a yearly basis. In contrast, mammography in the preceding year was related to lower mammography use in the observed year. This might be indicative of the biennial screening recommendation. Then again, colonoscopy/FOBT in the previous year hardly had any influence. As discussed previously, the inability to discriminate between diagnostic and screening testing on the one hand, and the difference in recommended screening intervals for colonoscopy and FOBT on the other hand, might have influenced these findings.
Strengths and limitations
Our study has several strengths and limitations worth mentioning. The major strength is the highly reliable and comprehensive, population-based data set available for analysis over three cross-sections. The major limitation is that we were unable to discriminate between screening and diagnostic testing (except if screening occurred within a cantonal program and was reimbursed as such). This misclassification issue leads to an overestimation of screening utilization, which might be more pronounced in breast and prostate cancer screening where comparably more patients had underlying diseases. In contrast, tests that have been paid out-of-pocket were not captured by means of claims data, which is more likely for PSA testing than for colonoscopy or mammography. This may have led to an underestimation of screening utilization. Furthermore, we might have missed some codes used by specific laboratories to account for cancer screening. Second, observations were not necessarily independent between the different observation periods, which may have led to an underestimation of the variance in the effect estimates. However, in a sensitivity analysis using clustered covariance matrix estimation with individuals as clusters, interval estimates altered only marginally (Additional file 3). Third, further aspects influencing cancer screening participation in individuals, like difference in life expectancy [51], screening habits or patient's preferences [23], could not be taken into account by means of our claims data. Yet, we considered concurrent colonoscopy/FOBT use in the breast and prostate cancer screening population as a proxy for screening habit. Fourth, categorization of continuous variables is sometimes discouraged, because it leads to information loss and assumes a flat relationship between the covariate and the outcome within intervals, which is less likely than e.g. a linear relation in most cases [52]. While these reservations are certainly true, we chose to categorize continuous variables because in the case of this exploratory analysis the loss in precision is outweighed by the increased interpretability of the results.
Implications
Clinical practice guidelines are an essential step forward to improve patient care and provide recommendations based on a systematic review of evidence [53]. However, although clearly recommended, colorectal cancer screening is still not taken up by almost half of the eligible population. Therefore, our findings highlight the need for enhanced awareness of the benefits of systematic colorectal cancer screening to reduce cancer-specific mortality rates. During a physician consultation or hospitalization, strategies could be employed to counsel, educate, and motivate patients towards preventive measures like cancer screening, particularly those who are at higher risk of disease. Furthermore, information campaigns and further actions like invitation letters should more specifically address the population that is less likely to be screened, e.g., individuals with a high deductible. The offering of prevention and health promotion to enrollees with supplementary health insurance seems to go in that direction. Additionally, further cantonal programs were established in 2019, which will hopefully promote colorectal cancer screening. Yet, a standardization of screening programs and their payment in Switzerland is urgently warranted [54], and might help to increase equal access and uptake.
Unnecessary screening may not only cause adverse effects but also generate high healthcare costs [55]. Regarding prostate cancer screening, annual PSA testing may result in an overdiagnosis rate of 50% [56]. Increased awareness of initiatives such as the Smarter Medicine recommendations of the Swiss Society of Internal Medicine is therefore crucial. It should be noted that screening attendance was shown to be mainly influenced by social norms and role models [57], and not solely by guidelines, even among physicians [53]. Thus, physician training regarding informed decision making as well as the development of improved information and decision aids is warranted [11].
Although breast cancer screening is recommended biennially, and various screening programs exist in Switzerland, mammography use is low. Controversies about the value of screening and further disparities, like regional and system-related differences regarding program implementation, might contribute to these findings. For example, the risk of overdiagnosis and overtreatment has been repeatedly demonstrated and debated, particularly after breast cancer screening implementation [10,43,58,59]. Interventions further promoting breast cancer screening, as mentioned in the systematic review by Agide et al. [60], may therefore be difficult to introduce in Switzerland. However, unless screening participation reaches an acceptable standard level [14], it may not achieve the expected gains, such as a reduction in cancer-specific mortality.
Conclusions
Variations in cancer screening utilization were modest over time, but considerable between regions. Regional variations were highest for mammography use, where recommendations are debated most controversially. Since recommended screening (like colorectal cancer screening) has not clearly increased and discouraged screening (like prostate cancer screening) has not clearly decreased over time, health policy adaptations are needed to optimize preventive care in Switzerland.
|
v3-fos-license
|
2023-09-22T06:42:19.563Z
|
2023-09-20T00:00:00.000
|
262084011
|
{
"extfieldsofstudy": [
"Mathematics"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://doi.org/10.1016/j.na.2024.113605",
"pdf_hash": "ed47eea547574aa2ce1b0befd20eb1ddb0fecf7e",
"pdf_src": "ArXiv",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42481",
"s2fieldsofstudy": [
"Mathematics"
],
"sha1": "53b6aa4ea9eeb131e1d64122b21a796218304cb5",
"year": 2023
}
|
pes2o/s2orc
|
The trace fractional Laplacian and the mid-range fractional Laplacian
In this paper we introduce two new fractional versions of the Laplacian. The first one is based on the classical formula that writes the usual Laplacian as the sum of the eigenvalues of the Hessian. The second one comes from looking at the classical fractional Laplacian as the mean value (in the sphere) of the 1-dimensional fractional Laplacians in lines with directions in the sphere. To obtain this second new fractional operator we just replace the mean value by the mid-range of 1-dimensional fractional Laplacians with directions in the sphere. For these two new fractional operators we prove a comparison principle for viscosity sub and supersolutions and then we obtain existence and uniqueness for the Dirichlet problem. We also show that solutions are $C^\gamma$ smooth up to the boundary when the exterior datum is also H\"older continuous. Finally, we prove that for the first operator we recover the classical Laplacian in the limit as $s\nearrow 1$.
The classical Laplacian, (1.1) ∆u(x) = div(∇u(x)), may be the best known and most famous second order differential operator. Written as in (1.1) it is an operator in divergence form. This allows one to use techniques from the calculus of variations, a framework in which solutions are understood in a weak sense, integrating against test functions (typically solutions are functions in the Sobolev space H^1). When one introduces coefficients in this context, a natural operator to look at is (1.2) Lu(x) = div(A(x)∇u(x)), with a given matrix (that is usually assumed to be symmetric) with spatial dependence, A(x), see for instance [1,29].
A different way of writing the Laplacian is as the sum of the eigenvalues of the Hessian,
(1.3) ∆u(x) = ∑_{i=1}^{N} λ_i(D²u(x)).
Here λ_1(D²u) ≤ λ_2(D²u) ≤ ... ≤ λ_N(D²u) stand for the eigenvalues of the Hessian, D²u = (∂²_{ij} u)_{ij}. This way of writing the Laplacian is not in divergence form, but as an operator for which solutions are understood in the viscosity sense [21] (here solutions are just continuous functions and the operator is applied to smooth test functions that touch the solution from above or below). Introducing coefficients thinking in this way one finds (1.4) F(D²u)(x) = tr(A(x)D²u(x)) with an x-dependent matrix A(x), see [16].
For the classical Laplacian both (1.1) and (1.3) are equivalent ways of writing the same operator. For the Dirichlet problem for ∆u = 0 the notions of weak and viscosity solutions coincide (and in fact the Dirichlet problem has a unique classical solution), see [25] and [30] (the equivalence between weak and viscosity solutions also covers quasi-linear equations, [26], and some non-local equations, [6,15]). However, when one introduces coefficients, the operators (1.2) and (1.4) are not equivalent (in fact, the notion of weak solution using Sobolev spaces is not appropriate to deal with (1.4)).
In recent years an operator that has become quite popular is the well-known fractional Laplacian, defined as
(1.5) (−∆)^s u(x) = c(s) p.v. ∫_{ℝ^N} (u(x) − u(y)) / |x − y|^{N+2s} dy.
Here s ∈ (0, 1), p.v. refers to the principal value of the integral, and c(s) is a constant that depends also on the dimension N and goes to zero as (1 − s) as s → 1 (we will make explicit and use the constant only in dimension 1). For several different ways of writing the fractional Laplacian we refer to [27]. The operator in (1.5) is also well suited for variational methods (and typically solutions are functions in the fractional Sobolev space H^s). In this context one can introduce spatial dependence in the operator using a general symmetric kernel k(x, y) (this symmetry assumption allows to integrate by parts and use variational techniques) and consider operators of the form
p.v. ∫_{ℝ^N} (u(x) − u(y)) k(x, y) dy.
Here a natural assumption is to ask that the kernel k is comparable to the one of the fractional Laplacian, in the sense that, for two positive constants c_1, c_2, we have c_1 |x − y|^{−N−2s} ≤ k(x, y) ≤ c_2 |x − y|^{−N−2s}. One possible choice of the kernel is given by k(x, y) = |A(x − y)|^{−N−2s}, obtaining an operator that is similar to (1.2). When one wants to look for non-divergence form nonlocal operators one can consider nonsymmetric kernels k(x, y). These kinds of operators have been intensively studied recently; we refer to [17,18,19,20,23,24,28,31] and references therein.
However, up to now, there is no clear analogue of the classical way of understanding the Laplacian as the sum of the eigenvalues of the Hessian, as in (1.3). Our main goal in the paper is to introduce a new nonlocal operator that is a natural analogue of this way of looking at the classical Laplacian. To this end we first recall that, from the classical Courant-Hilbert formulas for the eigenvalues of a symmetric N × N matrix, we have
(1.6) λ_i(D²u(x)) = max_{dim(S)=N−i+1} min_{z∈S, |z|=1} ⟨D²u(x) z, z⟩.
Here the maximum is taken among all possible subspaces S of ℝ^N of dimension N − i + 1 and the minimum among unitary vectors in S. For the Dirichlet problem for the equation λ_i(D²u(x)) = 0 we refer to [14]. Related operators are the truncated Laplacians studied in [8,9,10]. Notice that (1.6) can be written as a max-min of pure second derivatives in the directions z, and hence a fractional version of the eigenvalues (that we will call fractional eigenvalues) is given by
(1.7) Λ_i^s u(x) = max_{dim(S)=N−i+1} min_{z∈S, |z|=1} c(s) ∫_ℝ (u(x + tz) − u(x)) / |t|^{1+2s} dt,
that is, we are computing the same max-min procedure as before, but now we are taking the one-dimensional fractional derivative of order 2s. The Dirichlet problem for the first fractional eigenvalue is related to fractional convex envelopes, see [22]. Operators that are fractional analogues of truncated Laplacians are studied in [7,11,12,22]. Notice that, due to their definition, the fractional eigenvalues are ordered, Λ_1^s u(x) ≤ Λ_2^s u(x) ≤ ... ≤ Λ_N^s u(x). Now, let us introduce the operator that we call the trace fractional Laplacian,
(1.8) (−∆)^s_tr u(x) := − ∑_{i=1}^{N} Λ_i^s u(x).
Here Λ_i^s u is given by (1.7). Notice that (1.8) is not in divergence form and therefore we will use viscosity theory to study this operator. We remark that this fractional version of the classical Laplacian is not equivalent to the usual fractional Laplacian given by (1.5). This is a striking difference between the fractional setting and the classical local context (the variational fractional Laplacian does not coincide with the trace fractional Laplacian).
Our main goal in this paper is to show that the Dirichlet problem for the trace fractional Laplacian is well posed in the framework of viscosity solutions. Given a bounded domain Ω and an exterior datum g we will deal with
(1.9) (−∆)^s_tr u(x) = 0, x ∈ Ω; u(x) = g(x), x ∈ ℝ^N \ Ω.
Since we want a solution that is continuous up to the boundary (some of our arguments require this property) we will assume that s ∈ (1/2, 1). Now, let us introduce a second fractional operator that we will call the mid-range fractional Laplacian. To this end, notice that the classical fractional Laplacian, given by (1.5), of a smooth function that decays at infinity can be written in terms of integrals along lines with directions in the sphere, that is, in terms of the function
Θ^s φ(x, z) := c(s) ∫_ℝ (φ(x + tz) − φ(x)) / |t|^{1+2s} dt, z ∈ S^{N−1}.
Therefore, the fractional Laplacian can be written (up to a negative constant) as the mean value (on the sphere) of the function Θ^s φ. That is, to obtain the fractional Laplacian one computes the mean value in the directions of the one-dimensional fractional derivatives of order 2s. With this idea in mind let us introduce a different fractional operator. Instead of the mean value we just take the mid-range (the measure of central tendency that is given by the average of the lowest and highest values in a set of data) of the function Θ^s φ in S^{N−1} and we obtain
(−∆)^s_mid φ(x) := −(1/2) ( min_{z∈S^{N−1}} Θ^s φ(x, z) + max_{z∈S^{N−1}} Θ^s φ(x, z) ) = −(1/2) ( Λ_1^s φ(x) + Λ_N^s φ(x) ).
The mid-range and the trace fractional Laplacians are closely related. They are part of a large family of operators (those given in terms of combinations of fractional eigenvalues; we will add more comments on this in the final section of this paper). Moreover, in dimension two, that is, for N = 2 in our notation, the mid-range and the trace fractional Laplacians coincide up to a constant, (−∆)^s_mid u = (1/2) (−∆)^s_tr u. In addition, remark that these two new fractional operators that we introduced here, (−∆)^s_tr and (−∆)^s_mid, are 1-homogeneous (it holds that (−∆)^s_tr(ku) = k (−∆)^s_tr(u) and (−∆)^s_mid(ku) = k (−∆)^s_mid(u)), and are invariant under rotations, as the usual Laplacians (both local and fractional) are.
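To build some intuition for these definitions, the following is a small numerical sketch of ours (not part of the paper): it approximates the directional fractional derivative Θ^s u(x, z) for a smooth, bounded test function by one-dimensional quadrature, samples directions on the circle (N = 2), and reports the resulting approximations of Λ_1^s u(x), Λ_N^s u(x) and of the trace and mid-range operators. The normalizing constant c(s) is omitted and the sign convention is the one assumed in the reconstruction above.

```r
# Numerical sketch for N = 2: approximate the 1-dimensional fractional derivative
# of order 2s in a direction z, then take min/max over sampled directions.
u <- function(p) exp(-sum(p^2))                       # smooth, bounded test function
theta_s <- function(x, z, s) {
  # symmetrized integrand, so the principal value at t = 0 is harmless
  f <- function(t) sapply(t, function(ti)
    (u(x + ti * z) + u(x - ti * z) - 2 * u(x)) / ti^(1 + 2 * s))
  integrate(f, 1e-8, 1)$value + integrate(f, 1, Inf)$value
}
x <- c(0.3, 0.1); s <- 0.75
angles <- seq(0, pi, length.out = 181)                # z and -z give the same value
vals <- sapply(angles, function(a) theta_s(x, c(cos(a), sin(a)), s))
Lambda1 <- min(vals)                                  # smallest fractional eigenvalue
LambdaN <- max(vals)                                  # largest fractional eigenvalue
c(trace    = -(Lambda1 + LambdaN),                    # in 2D, trace operator (sign as assumed)
  midrange = -(Lambda1 + LambdaN) / 2)
```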
For the mid-range fractional Laplacian we also study the Dirichlet problem, which in this case reads as
(1.10) Λ_1^s u(x) + Λ_N^s u(x) = 0, x ∈ Ω; u(x) = g(x), x ∈ ℝ^N \ Ω.
Here we dropped the constant 1/2 in front of the fractional eigenvalues to simplify the notation.
Our first result says that when the domain is smooth and the exterior datum g is continuous and bounded, there is a unique viscosity solution for (1.9) or for (1.10).
These operators share many properties with the corresponding local Laplacian and the classical fractional Laplacian (as the validity of a comparison principle and a strong maximum principle and the continuous dependence of the solutions on the exterior data).
Theorem 1.2. Under the hypothesis of Theorem 1.1, a comparison principle holds: let u_1 and u_2 denote the solutions to (1.9) (or to (1.10)) with exterior data g_1 and g_2 respectively; then g_1 ≥ g_2 in ℝ^N \ Ω implies u_1 ≥ u_2 in Ω, and, as a consequence, the solution depends continuously on the exterior data; it holds that ‖u_1 − u_2‖_{L^∞(Ω)} ≤ ‖g_1 − g_2‖_{L^∞(ℝ^N \ Ω)}. Moreover, a strong maximum principle holds both for (1.9) and for (1.10). If there exists x_0 ∈ Ω such that u(x_0) = max_{ℝ^N} u (or u(x_0) = min_{ℝ^N} u), then the solution and the exterior datum are constant, u ≡ g ≡ cte.
A striking difference with the classical Laplacian (and also with the classical fractional Laplacian) is that these operators are nonlinear.
Theorem 1.3. The problems (1.9) and (1.10) are nonlinear problems. There exist data g_1, g_2 such that g_1 ≥ g_2 and g_1 ≢ g_2, but the corresponding solutions verify u_1 ≡ u_2 in Ω. That the operators are nonlinear is due to the fact that there are maxima and minima involved in the definition of ∆^s_tr and ∆^s_mid. In fact, what is really surprising is that the usual local Laplacian, written as the sum of the eigenvalues of the Hessian (which also involves the max-min formula (1.6)), is a linear operator.
In addition, we study the limit as s ր 1 and prove that solutions to (1.9) converge uniformly to the unique solution to the Dirichlet problem for the classical local Laplacian,
(1.11) ∆u(x) = 0, x ∈ Ω; u(x) = g(x), x ∈ ∂Ω.
Here we will use the explicit constant that appears in front of the integral in (1.7). We just remark that any c(s) such that c(s) ∼ (1 − s) will also work when taking this limit, but the explicit form of (1.12) is the one that corresponds to the 1-dimensional fractional Laplacian.
Theorem 1.4. Let Ω be a C² bounded domain and fix g ∈ C(ℝ^N \ Ω) bounded. Let u_s denote the unique solution to the problem for the trace fractional Laplacian, (1.9). Then, it holds that lim_{sր1} u_s = u in C(Ω). The limit u is given by the unique solution to (1.11).
Moreover, when u_s is the unique solution to the problem for the mid-range fractional Laplacian, (1.10), it holds that lim_{sր1} u_s = u in C(Ω), where the limit u is given by the unique solution to the local problem
(1.13) λ_1(D²u)(x) + λ_N(D²u)(x) = 0, x ∈ Ω; u(x) = g(x), x ∈ ∂Ω.
Remark that the first limit result also holds for the usual fractional Laplacian (1.5) with the appropriate N-dimensional constant c(s).
In these limits, as s ր 1, the limit is unique and is characterized as the solution to (1.11) or to (1.13). Hence, we have convergence of the whole family u_s as s ր 1 (not only along subsequences).
Concerning regularity of solutions, we quote the recent paper [13] where the authors prove a Hölder regularity result for solutions to Λ_1^s u(x) + Λ_N^s u(x) = f(x) in Ω with homogeneous Dirichlet boundary conditions, u(x) = 0 in ℝ^N \ Ω, and s close to 1. Regularity for these operators is a delicate issue since they are very degenerate. Now, to end the introduction, let us comment briefly on the ideas and methods used in the proofs.
In most of our arguments, to simplify the notation and clarify the ideas used in the proofs, we will only include the details for the mid-range fractional Laplacian, (−∆)^s_mid u(x), which involves the smallest and the largest fractional eigenvalues. Since for this operator the main difficulties arise, we will just briefly comment on how to extend the results to the trace fractional Laplacian, (−∆)^s_tr u(x), in which the intermediate eigenvalues appear. Recall again that for N = 2 the mid-range fractional Laplacian and the trace fractional Laplacian coincide. In addition, we will drop the notation p.v. in front of the integrals (which need to be understood in the principal value sense when appropriate) and, when we prove results for a fixed s, we will also drop the constant c(s).
Since our problem is fully nonlinear we use the concept of viscosity solutions in a nonlocal framework.We will use ideas from [2,3,4].First, we prove a comparison principle for sub and supersolutions to our problems (1.9) and (1.10).The proof follows ideas from [4] and [22].Notice that in [22] it is assumed that the domain is strictly convex.This condition is not needed here since the maximum and the minimum among directions are involved in our operator and we can choose any direction either in the max or in the min to obtain a super or a subsolution.Once we have a comparison principle existence and uniqueness of solutions are an easy consequence of Perron's method.
To show that the operator is nonlinear we find a domain in two dimensions, a smooth function u inside Ω and an exterior datum g such that the lines corresponding to directions associated with the maximum and the infimum of the 1−dimensional fractional derivatives of u inside Ω do not intersect a region outside Ω.Then, we show that the exterior datum g can be slightly perturbed in that region keeping the same u as the solution inside Ω.In this way we find two different exterior data (that, in addition, are ordered) with the same solution inside the domain and we conclude the nonlinearity of our problem from the strong maximum principle.
To recover the usual Laplacian in the limit as s ր 1 we just have to observe that, with the choice of the constant c(s) given in (1.12), the 1-dimensional fractional Laplacian converges to the usual second derivative as s ր 1 (for c(s) ∼ (1 − s) we just obtain a multiple of the Laplacian in the limit). Then, the proof follows just taking care of the max and min involved in the fractional eigenvalues using viscosity tricks. The uniform convergence in Theorem 1.4 will follow from the fact that we show that the upper and lower half-relaxed limits of {u_s}_s coincide.
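As a rough illustration of the computation behind this observation, here is a short formal sketch of ours (for a smooth bounded u, a fixed direction z, and a generic normalization c(s) ∼ (1 − s); the exact constant in (1.12) only fixes the multiplicative factor):

```latex
\begin{aligned}
c(s)\int_{\mathbb{R}} \frac{u(x+tz)-u(x)}{|t|^{1+2s}}\,dt
 &= c(s)\int_{0}^{\infty} \frac{u(x+tz)+u(x-tz)-2u(x)}{t^{1+2s}}\,dt\\
 &= c(s)\int_{0}^{1} \frac{\langle D^2u(x)\,z, z\rangle\,t^{2}+o(t^{2})}{t^{1+2s}}\,dt
   \;+\; c(s)\int_{1}^{\infty} \frac{u(x+tz)+u(x-tz)-2u(x)}{t^{1+2s}}\,dt\\
 &= \frac{c(s)}{2-2s}\,\langle D^2u(x)\,z, z\rangle + o(1)
 \;\longrightarrow\; \kappa\,\langle D^2u(x)\,z, z\rangle \qquad \text{as } s\nearrow 1,
\end{aligned}
```

since the tail term is bounded by a constant times c(s) ∼ (1 − s) → 0 while c(s)/(2 − 2s) stays bounded and bounded away from zero; with the constant of the 1-dimensional fractional Laplacian the factor κ is normalized so that one recovers exactly the second derivative in the direction z, and the max-min structure in (1.7) then gives the eigenvalues of D²u in the limit.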
Finally, let us point out again that to compute the 1-dimensional fractional Laplacian in directions z on some particular test functions we need to restrict ourselves to consider only the case s > 1/2. We believe that without this condition solutions may not be continuous up to the boundary of the domain.
The paper is organized as follows: In Section 2 we prove the comparison principle for viscosity sub and supersolutions to our problems and then we obtain existence and uniqueness for the Dirichlet problems.In Section 3 we show that the operators are nonlinear.In Section 4 we deal with the limit as s ր 1.Finally, in Section 5 we will comment on possible extensions of our results and describe how to introduce coefficients in fractional trace operators.
Comparison principle. Existence and uniqueness of solutions
The main result in this section is a comparison principle for the problem
(2.1) ∑_{i=1}^{N} Λ_i^s u(x) = 0, x ∈ Ω; u(x) = g(x), x ∈ ℝ^N \ Ω.
To this end, we borrow ideas from [4] (see also [22]).
Since in this section s is fixed, we drop the constant c(s), which plays no role in our arguments. Also, as mentioned in the introduction, to simplify the notation in the proofs we will only analyze in detail the problem
(2.2) Λ_1^s u(x) + Λ_N^s u(x) = 0, x ∈ Ω; u(x) = g(x), x ∈ ℝ^N \ Ω,
and at the end of the section comment on how to obtain the results for (2.1). In fact, we can consider any combination with nonnegative coefficients of fractional eigenvalues as long as Λ_1^s u and Λ_N^s u appear.
2.1.Basic notations and definition of solution.We use the notion of viscosity solution from [4], which is the nonlocal extension of the classical theory, see [21].
To state the precise notion of solution, we need the following: Given g : R N \Ω → R, for a function u : Ω → R, we define the upper g-extension of u as In the analogous way we define u g , the lower g-extension of u, replacing max by min.
Now we introduce the definition of the upper and lower semicontinuous envelope, that we will denote by ũ and u ˜respectively of u, that are given by ũ(x) := inf r>0 sup u(y) : y ∈ B(y, r) An important fact, that can be easily verified, is that for any continuous function g : R N \ Ω → R and any upper semicontinuous function u : Ω → R, it holds that Here 1 A denotes the indicator function of a set A in R N .
We now introduce a useful notation for δ > 0 (the sets E_δ and the associated integral quantities used below). With these notations at hand, we can introduce our notion of viscosity solution to (2.2), testing with N-dimensional functions as usual.
Definition 2.1. A bounded upper semicontinuous function
In an analogous way, we define viscosity supersolutions (reversing the inequalities and replacing u^g by u_g) and viscosity solutions (asking that u is both a supersolution and a subsolution) to (2.2).
Remark 2.1.For the definition of a viscosity solution to the Dirichlet problem for the trace fractional Laplacian, (2.1), we just have to take in the previous definition.
Comparison principle. Now, our goal is to prove a comparison principle between sub and supersolutions to (2.2).
In order to prove the comparison principle, we first show that sub and supersolutions behave well on the boundary of the domain.
Theorem 2.1. Let u, v : ℝ^N → ℝ be a viscosity sub and supersolution of (2.2) in Ω, in the sense of Definition 2.1, respectively. Then, (i) u ≤ g on ∂Ω; (ii) v ≥ g on ∂Ω.
Proof.We begin by proving (i).Suppose by contradiction that there is Hence, we have that u g (x 0 ) = u(x 0 ).Since g is continuous, there exists We may with no loss of generality assume that R 0 < max{ x − y : x, y ∈ Ω}.
We now introduce two auxiliary functions: and D 2 a is bounded; for instance we can just take • b : R → R, a smooth bounded and increasing function which is concave in (0, +∞), and such that b(0 Next, we use these two functions to define for any ε > 0 the penalized test function here d is a smooth extension of the signed distance to the boundary, ∂Ω.It is at this point where we use the that is upper semicontinuous for any ε small.Then, for any ε small, Ψ ε attains a global maximum at a point x ε .Therefore, we have and hence, From here, we get that In particular, x ε ∈ B(x 0 , 2R 0 ) for any ε small enough.Now, using again the properties of a and b we get Therefore x ε ∈ Ω.Notice that we used the function a to penalize that x ε is far from x 0 and the function b with the signed distance to ∂Ω to obtain that x ε is not in Since a is non-negative, by (2.3), we have Hence, due to the fact that u is upper semicontinuous, we obtain Thus, we have that a ) for any ε small enough.Now, using that u is a viscosity subsolution of (2.2) in Ω in the sense of Definition 2.1, we have that Here we need to introduce changes in the arguments used in [5].The key point is that the directions z that are associated to the max and min that appear in the equation may be different (and we will take advantage of this fact).
We have To bound the infimum we choose Now, for the supremum we argue as follows: the main idea is that what we obtained with our choice in the infimum is negative enough to absorb the supremum and reach a contradiction at the end (the contradiction arrives from the fact that the sum of the infimum and the supremum is greater or equal to zero, see (2.4), but the infimum is negative enough in order to obtain that the sum is also negative).
Given η ε > 0, we choose a direction, z ε , such that ).We can prove both integrals are of order ε −2s in a similar way as the integrals in the infimum: we estimate I 1 z,ε (ω ε , x ε ) using the Taylor expansion of ω ε and we estimate I 2 z,ε (u, x ε ) simply by using the boundedness of u g .Thus, Adding the bounds for the infimum and the supremum and using (2.5) we obtain and we reach a contradiction taking η ε small enough.
Since we are dealing with subsolutions, in the associated inequality we can choose any direction to obtain a bound for the infimum, but we have to take care of the supremum.When one deals with supersolutions the situation is exactly the opposite.Notice that in our problem (2.2) both the infimum and the supremum appear.
Hence, as we are dealing with subsolutions, for the supremum, given η ε , we have a direction z ε as before.On the other hand, for the infimum we are free to choose the direction.We choose z ε that points inwards the domain (for example the inner unit normal to ∂Ω at the point x ε will do the job).We also choose a distance δ ε such that δε ε → 0. Now, we use exactly the same computations as in the previous case.For the infimum we use δ ε and z ε and for the supremum ε and z ε .As in the previous case, we reach a contradiction since the infimum is negative enough and we have a control of the supremum in such a way that the sum is still negative.
In the case of supersolutions the argument works in an analogous way, since we can interchange the roles of the infimum and the supremum in the previous computations (notice that for supersolutions we are reversing the inequalities).
Remark 2.2. To deal with the problem involving the trace fractional Laplacian ∆^s_tr, we just observe that, for the terms that involve intermediate fractional eigenvalues, with i = 2, ..., N − 1, we can choose a direction that almost reaches this quantity.
In fact, when we deal with subsolutions we use that, since Λ s 1 u involves only the infimum, we have freedom to choose the direction.For the terms that involve the other eigenvalues we have that, given η ε > 0, we can choose a direction, z ε , such that and then, by the same computations that we made before (we just have to add a finite number of terms involving η ε ) we reach a contradiction.
Notice that, when dealing with supersolutions, we use the freedom in the choice of the direction in the term that comes from Λ_N^s u (this is the term that involves a supremum) and bound all the other terms (that involve inf/sup).
Remark 2.3. In [5] it is used that the domain is strictly convex. Here we do not need this condition, since we have both the infimum and the supremum among directions in our operator and hence we can choose the direction in one of the terms (the one with the infimum or the one with the supremum) when dealing with sub and supersolutions. Now, we are ready to state and prove the comparison principle for sub and supersolutions to our problem. In this proof we again follow ideas from [5] but we have to introduce a different function Ψ_ε (see below) and the integrals that appear are also split in a different way. Again, here we are not assuming that the domain is strictly convex (as was needed for the arguments in [5]).
Theorem 2.2.Assume that g ∈ C(R N \ Ω) is bounded and that Ω is a bounded C 2 −domain.Let u, v : R N → R be a viscosity sub and supersolution to (2.2) in Ω, in the sense of Definition 2.1, then As usual, we argue by contradiction, that is, we assume that M > 0. Since u g and v g are upper and lower semicontinuous functions, For any ε > 0, we define Observe that Moreover, M ε1 ≤ M ε2 for all ε 1 ≤ ε 2 .Then, there exists the limit lim On the other hand, since u g and −v g are upper semicontinuous functions, for any ε, Ψ ε is an upper semicontinuous function.Thus, there is ( Observe that Since B R is compact, extracting a subsequence if necessary, we can assume that Thus M = u g (x) − v g (x).This limit point x cannot be outside Ω, since M > 0, and by Theorem 2.1, it cannot be on ∂Ω.Consequently, x ∈ Ω and we may assume (without loss of generality) that provided ε is small enough.
On the other hand, by (2.6), for any w ∈ R N such that we have ε are test functions for u and v at x ε and y ε , respectively.Then, we have that for all δ ∈ (0, d ε ).
At this point we have to choose two sequences of directions, one to approximate the supremum and another one for the infimum.As before, for the subsolution we are free to choose directions in the infimum; while for the supersolution we can choose a direction close to the supremum.
By the definition of E δ , for each h > 0 there exists and thus, we get (2.9) Now, our goal is to obtain upper estimates for the differences 2 .In this way, when we substract the second expression to the first one in (2.9), we get a contradiction, provided we controlled the differences properly.
We can assume that To estimate the first difference, let us write where and We first observe that there is a positive constant C independent of δ, ε and h such that For the estimate of a.e. as ε, h → 0. Hence, Then, we divide the integrals I 2 and J 2 further Now, we observe that the terms in which we have the difference of characteristic functions go to zero, using the boundedness of u g , v g and dominated convergence theorem.The difference of the terms that have a segment in common is negative thanks to (2.8).
Collecting all these bounds we get lim sup δ,ε,h→0 Finally, we observe that we have a.e. as ε, h → 0, and hence, arguing similarly to the previous case and using that g is a bounded continuous function, we obtain lim ε,h→0 The negative term, −M , comes from the fact that Therefore, letting first δ → 0, then ε → 0, and h → 0, we get For the supremum part the limit can be bounded by exactly the same quantity (with a possible different limit direction z 0 ), The fact that we get a negative bound came from having and not from the fact that we analyze the infimum.Thus, substracting both expressions of (2.9) and taking limits we obtain we get and we end up with the desired contradiction.
Remark 2.4. This proof can be extended to deal with the problem involving the trace fractional Laplacian (−∆)^s_tr. As before we just observe that, for the terms that involve the intermediate fractional eigenvalues, with i = 2, ..., N − 1, we can choose a subspace and then a direction that almost reach the associated quantity. With the same computations one can obtain a comparison principle (and then existence and uniqueness of solutions, see below) for problems of the form
(2.10) ∑_{i=1}^{N} a_i Λ_i^s u(x) = 0, x ∈ Ω; u(x) = g(x), x ∈ ℝ^N \ Ω,
as long as a_1 > 0, a_N > 0 and a_i ≥ 0, i = 2, ..., N − 1, with g continuous and bounded and Ω a C² bounded domain.
If only one of the maximum or minimum fractional eigenvalues (Λ s 1 u or Λ s N u) is involved in the operator, then the proof still works (as in [5]) with the extra assumption that the domain is strictly convex.This assumption (strict convexity) ensures that, close to the boundary, the line in any direction reaches the boundary close to the nearest point on the boundary.We refer to Section 5 for extra comments on extensions of our results.
2.3. Existence and uniqueness of a solution. This part is standard in the viscosity theory once one has a comparison principle at hand, but we include the details here for completeness. Now our goal is to show existence and uniqueness of a solution to (2.2) (the equation involves only Λ_1^s u and Λ_N^s u) and to (2.1) (the equation is given by the trace fractional Laplacian, that is, the sum of the fractional eigenvalues). The same proof works for any of the operators that involve fractional eigenvalues as long as we have a comparison principle, see Remark 2.4.
The proof of existence is by using Perron's method and uniqueness is immediate from the comparison principle.
Theorem 2.3. Assume that g ∈ C(ℝ^N \ Ω) is bounded and Ω is a bounded C²-domain. Then, there is a unique viscosity solution u to (2.2) or to (2.1) in Ω, in the sense of Definition 2.1. This unique solution is continuous in Ω and the datum g is taken with continuity, that is, u|_{∂Ω} = g|_{∂Ω}.
Proof.Again we use ideas from [4] to obtain the existence of a viscosity solution to our Dirichlet problem.
Existence of u, v : R N → R that are a viscosity subsolution and a viscosity supersolution in Ω, in the sense of Definition 2.1, follows easily taking large constants (here we are using that g is bounded).
We take a one-parameter family of continuous functions Then for all k ∈ N we consider the obstacle problem (2.11) for x ∈ R N , which is degenerate elliptic, that is, it satisfies the general assumption (E) of [3]. It has ± g ∞ as viscosity super and subsolution. Recall that these viscosity supersolution and subsolution do not depend on the L ∞ bounds of ψ k ± . Then, in view of the general Perron's method given in [3] for problems in R N , since condition (E) holds, we conclude the existence of a continuous bounded viscosity solution u k to (2.11) for each k and, in addition, this family of solutions is equal to g in R N \ Ω for all k. Moreover, we have that The following corollary is immediate and we just state it here since it is a part of Theorem 1.2.
Corollary 2.1.The comparison principle implies that if g 1 ≥ g 2 are boundary conditions of solutions u 1 and u 2 respectively, we have that u 1 ≥ u 2 .
Proof.This follows from considering u 2 a supersolution for the boundary condition g 1 or considering u 2 a subsolution for the boundary condition g 2 .
Also as an immediate corollary of the comparison principle we obtain continuous dependence of the solution with respect to the exterior data, this gives another part of Theorem 1.2.
Corollary 2.2.The comparison principle implies that the solution depends continuously on the exterior data; it holds that Proof.Just observe that is a supersolution to the problem with exterior datum g 1 , hence, by comparison we get In a similar way, we get that since the left hand side is a subsolution to the problem with exterior datum g 1 .Now, our goal is to show that (2.2) has a strong maximum principle, a viscosity subsolution to the problem can not attain the maximum inside Ω unless it is constant.
Theorem 2.4 (Strong maximum principle).Let u be a subsolution to problem (2.2) such that u g attains a maximum at some point x ∈ Ω.Then u is constant.
Proof. We argue by contradiction. Let us assume u is not constant. That means there is some x 0 in Ω such that u(x) > u(x 0 ). Since x ∈ Ω, we can take 0 < δ < |x − x 0 |/2 and φ(x) = u(x) for x ∈ B δ (x). We extend this φ to all R N in a smooth way. Since u is a subsolution, inf For the infimum, we can choose a direction freely, so let us choose z 0 = x 0 − x. For the supremum, fix z k such that sup This way we get Both expressions lack the integral where the test function appears due to the fact that the test function is constant. What we have to estimate is For every z we have that I(z) ≤ 0 thanks to the fact that the difference inside the integral is nonpositive. But for z 0 , we get a strict inequality. The reason behind this is that upper semicontinuity grants the existence of a ball centered at x 0 of radius sufficiently small so that for every y in that ball, u(y) < u(x). This yields that for a certain interval centered around t = 1, the difference u(x + tz 0 ) − u(x) < 0, so we can conclude that I(z 0 ) < 0. Hence, Taking the limit k → +∞ we get 0 < 0, arriving at the desired contradiction.
With the same idea one can show that a supersolution to the problem (2.2) that attains a minimum at some point x ∈ Ω must be constant.
Remark 2.5.In our proof of the strong maximum principle we need to choose directions.When we deal with subsolutions we can choose the direction that is involved in the infimum and hence any non-constant solution to an equation given in terms of a sum of fractional eigenvalues can not attain an interior maximum provided Λ s 1 u appears in the operator.Analogously, when Λ s N u appears in the operator non-constant solutions can not have interior minima.
Corollary 2.3.For an exterior datum g 0 we have that the corresponding solution to (2.2) is strictly positive in Ω, u(x) > 0, x ∈ Ω.
Finally, we show a strong comparison principle, provided that the exterior data verify that g 1 ≥ g 2 and there exists a point x in every line that passes through Ω such that g 1 (x) > g 2 (x). Theorem 2.5 (Strong comparison principle). Assume g 1 , g 2 ∈ C(R N \ Ω) with g 1 ≥ g 2 and in every line that passes through Ω there exists a point x ∈ R N \ Ω such that g 1 (x) > g 2 (x). Let u 1 , u 2 be solutions of our problem with boundary conditions g 1 and g 2 respectively. Then, u 1 > u 2 in Ω.
Proof.We already know, thanks to the comparison principle, that u 1 ≥ u 2 .Arguing by contradiction, assume that there exists and define Next, we proceed to construct an auxiliary function that will yield a test function for u 1 from below and a test function for u 2 from above like the one used in the proof of Theorem 2.2.We aim to use the fact that u 1 is a viscosity supersolution and u 2 a viscosity subsolution. Let It is simple to observe, following similar steps to the ones in the proof of Theorem 2.2, that S ε2 ≤ S ε1 for ε 1 ≤ ε 2 and S ε ≤ 0. Using the continuity of ψ ε we know that there exists (x ε , y ε ) Taking a subsequence, we may assume x ε and y ε converge to x and y, and we know that x = y thanks to lim Hence, we get This implies x ε and y ε converge to a minimum point of u 1 − u 2 , and since this point belongs to Ω −η , it follows that it is inside Ω.With no loss of generality we will assume x = x 0 .
Thanks to the previous arguments, we have found test functions for u 1 and u 2 at the points x ε and y ε respectively.We now use the fact that u 1 is a viscosity supersolution and u 2 a subsolution so that Now we use the same strategy as in Theorem 2.2 to get rid of the infimum and the supremum of the expressions.For each h > 0 there exists z Then, Here the contradiction will follow from the fact that, lim By hypothesis, for some t 0 , (x 0 + t 0 z 0 ) ∈ Ω we get Now, using the continuity of the exterior data g 1 and g 2 , the strict inequality holds on some interval I. Doing exactly the same computations on the other set of directions, after taking limits, we find the desired contradiction Therefore, we conclude that u 1 > u 2 in Ω.
Remark 2.6.As a simple example where Theorem 2.5 can be applied, one can take functions g 1 , g 2 that verify g 1 ≥ g 2 with g 1 (x) > g 2 (x) for x inside an annulus r < |x| < R and a domain Ω ⊂ B r (0).
Remark 2.7. To obtain that u 1 > u 2 we need to assume that in every line that passes through Ω there exists a point x ∈ R N \ Ω such that g 1 (x) > g 2 (x). This condition is necessary; in fact, in the next section we will construct data with g 1 ≥ g 2 , g 1 ≢ g 2 , in R N \ Ω and such that the corresponding solutions coincide, u 1 ≡ u 2 in Ω.
The trace fractional Laplacian is nonlinear
Our goal in this section is to show that the trace fractional Laplacian and the mid-range fractional Laplacian are nonlinear operators.Since in R 2 both operators coincide up to a constant we perform the argument for N = 2.
Theorem 3.1.The problems (1.9) and (1.10) are nonlinear problems.There exist data g 1 , g 2 such that Proof.We will argue in R 2 and denote a point as (x, y).To begin, we consider a(x) a viscosity solution in R to . This viscosity solution is going to be a strong solution in (−1, 1), and in fact we can assume it is at least C 2 , so we can drop the principal value.
Define in R 2 the function u(x, y) = a(x) − a(y).
Observe that, for a unitary vector v = (v x , v y ), we have We can easily check that the supremum and the infimum of this expression are achieved at v = (1, 0) and v = (0, 1) respectively.Thus, this particular u(x, y) satisfies our equation in Ω = (−1, 1) × (−1, 1).We have, This problem has the unique solution that we already constructed, w(x, y) = u(x, y) (uniqueness comes from the comparison principle).Now, if we adequately add a little perturbation far from the origin supported close to a point (x, x) on the diagonal to obtain a new exterior datum, we will get that for this different exterior datum we get the same solution.
Let f (x, y) be a radially non-increasing nonnegative and nontrivial cut-off function such that f (x, y) = 1 for (x, y) ∈ B r (x, ŷ) and f (x, y) = 0 for (x, y) ∈ R 2 \ B r (x, x) for some far away point (x, x) in a diagonal and some small r. Now, let us take g(x, y) = u(x, y) + εf (x, y) as exterior datum. Then, using calculations similar to those performed before we get that R g(x + tv x , y + tv y ) − u(x, y) The ε > 0, which we can assume to be small, is the effect of the perturbation f . Both the supremum and infimum are still going to be achieved for v = (1, 0) and v = (0, 1). This is quite immediate for the infimum, since ε is positive. The supremum does not change because, if |v x | ∼ |v y |, then in a diagonal direction the above expression will be something similar to 2ε, which can be chosen smaller than 1 and thus, z = (1, 0) achieves a greater value. Therefore, even after altering the exterior datum by adding a nonnegative and nontrivial perturbation, u(x, y) is still a solution inside B 1 (0). Now, let us subtract the two functions in order to obtain a nonnegative and nontrivial exterior datum (the perturbation). If our operator were linear, from our previous results we would obtain that the solution with this exterior datum is strictly positive inside B 1 (0) (we use the strong maximum principle, Corollary 2.3). This proves the nonlinearity of the problem, since the difference of two solutions that coincide in B 1 (0) is not the solution for the difference of the exterior data.
Limit as s ր 1
In this section, for a fixed C 2 domain Ω and a fixed continuous and bounded exterior datum g we study the limit as s ր 1 of the solutions u s (we make explicit the dependence of the solution in s along this section).
Recall that the fractional eigenvalues are given by Here we will use the explicit constant given by however, as we have mentioned in the introduction, any c(s) with c(s) ∼ (1 − s) will give the same limit.
Next, our goal is to show that u s , the unique solution to converges uniformly as s ր 1 to the unique solution to x ∈ ∂Ω.
In addition, with the same arguments, we obtain that when u s is the unique solution to in C(Ω) with the limit u is given by the unique solution to the local problem (4.5) x ∈ ∂Ω.
First, we show that the half-relaxed upper limit of the u s is a subsolution to the limit problem.Then u is a viscosity subsolution to the Dirichlet problem for the classical local Laplacian (4.4).
Proof.First, we notice that u s are uniformly bounded in s.This fact can be easily obtained using the comparison principle and that w = g ∞ is a supersolution to (4.3).Therefore, the half-relaxed upper limit, u(x) := lim sup is well-defined and bounded.By definition, u is an upper semicontinuous function.
Again, to simplify the notation, we will prove the result in R 2 , that is, we take N = 2, since in this case we have only two eigenvalues (one is given by an infimum among directions and the other by a supremum).At the end of the proof we will add a few lines on how to treat the general case.
Choose x ∈ Ω and φ a test function such that u − φ attains a maximum at x.We can assume that (φ − u)(x) = 0 and that (ū − φ)(x) > 0 for x = x.Since u s is upper semicontinuous by definition, ψ s = u s − φ reaches a maximum point at x s ∈ Ω.By the definition of half-relaxed limit, we have \ {x} , ∀k ∈ N Choose some k.For this k, we can find s k , y k such that By extracting a subsequence, we can assume x s k → x 0 for some x 0 ∈ Ω.Then, Since the only maximum point of ψ was x, we deduce x 0 = x, and thus lim that is, the sequence of maximum points converge to the maximum of the halfrelaxed limit.Now, we can use φ as a test function at x s k for the subsolutions u s k .Let us choose z k , z k unitary vectors such that inf for some 0 < δ < d (x,∂Ω) 2 < d (x s , ∂Ω), a condition that we can assume with no loss of generality.Here E s z,δ (•, φ, x) is given as in Section 2 (without multiplying by c(s) the corresponding integrals).Notice that here we write E s z,δ (•, φ, x) to make explicit that the operator depends on s.Now, as before, we write From this previous estimate it is clear we must take limits first in s, and then in δ.
For the first integral, using a second order Taylor expansion of φ(x x k + tz k,ε ) around t = 0, we get Hence, using the precise expression for c(s) (that implies c(s) ∼ (1 − s) as s ր 1), taking a subsequence such that z k → ẑ for some ẑ, and using continuity of D 2 φ(x)z, z in both x and z, we obtain lim Therefore, collecting the previous results, we obtain that for any convergent sequence z k → ẑ, it holds that lim We would like to check that We can get this last inequality taking limits in In fact, if we take the limit as s ր 1 (k → ∞) and as δ ց 0 we have lim Then, from (4.6), we obtain as we wanted to show.
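Since the displayed formulas of this step are not reproduced in this extraction, the following is a minimal sketch, in generic notation, of the Taylor-expansion estimate behind the limit; the precise constant depends on the normalization c(s) fixed above (any c(s) ∼ (1 − s) behaves the same way).

```latex
% Sketch of the local (0 < t < \delta) part of the one-dimensional integral, for a
% C^2 test function \varphi and a unit direction z; notation is generic and the
% constant c_1 below is whatever normalization c(s) = c_1(1-s) was chosen earlier.
\int_{0}^{\delta} \frac{\varphi(x+tz)+\varphi(x-tz)-2\varphi(x)}{t^{1+2s}}\,dt
 \;=\; \langle D^2\varphi(x)\,z,\,z\rangle \int_{0}^{\delta} t^{1-2s}\,dt + o(1)
 \;=\; \langle D^2\varphi(x)\,z,\,z\rangle\,\frac{\delta^{2-2s}}{2(1-s)} + o(1),
\qquad
\lim_{s\nearrow 1}\, c_1(1-s)\,\frac{\delta^{2-2s}}{2(1-s)}\,\langle D^2\varphi(x)z,z\rangle
 \;=\; \frac{c_1}{2}\,\langle D^2\varphi(x)z,z\rangle ,
\quad\text{independently of } \delta .
```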
For the supremum the proof is exactly the same, and hence we conclude that As u s is a subsolution, we have 0 ≤ inf and then in the limit we get that showing that u is a viscosity subsolution.
Moreover, from Theorem 2.1 we obtain that the half-relaxed limit satisfies u g ≤ g on ∂Ω (we have a uniform barrier at every boundary point that implies that u s ≤ g on ∂Ω).
For the general case of N eigenvalues, we just observe similar arguments to the previous ones, imply that the limit lim from where the result follows as before.
Remark 4.1.Analogously, we can obtain that the half-relaxed lower limit of supersolutions u s , given by, u(x) := lim inf s→1 − y→x u s (y), is a supersolution to the limit problem (4.4).Now, we are ready to prove the convergence result.It also holds that the solutions to the mid-range fractional Laplacian converge to the solution to the limit problem (4.5).
Proof.By the previous result, we know half-relaxed limits of viscosity subsolutions and supersolutions converge respectively to subsolutions and supersolutions of the equivalent Laplacian problem.Using the definition of the half-relaxed limit and comparison, we conclude that the half-relaxed limits coincide lim inf and is the solution to (4.4) (since it is both a sub and a supersolution).Hence, we have that the solutions u s converge to the solution of the limit problem in the following sense, lim s→1 − y→x u s (y) = u(x).
We only have to check that the convergence is uniform.If the convergence was not uniform, for any sequence s n ր 1 there exists some ε 0 > 0 and a corresponding sequence {x sn } ⊂ Ω such that |u sn (x sn ) − u(x sn )| ≥ ε 0 But in a compact set Ω we can assume x sn converges to some x 0 ∈ Ω after taking a subsequence.Due to its definition, lim n u sn (x sn ) = u(x 0 ).We arrive at a contradiction and the convergence had to be uniform.
Possible extensions
In this last section we briefly comment on possible extensions of our results.
5.1. A nontrivial right-hand side. One can obtain similar existence, uniqueness and comparison results for as long as f is continuous in Ω. For Hölder regularity results for the solutions when g = 0 and s is close to 1 we refer to [13]. Here, to use our previous arguments, we need that the a i are continuous in Ω and nonnegative with a 1 and a N strictly positive.
Associated with this idea of introducing coefficients in our model problem we can define fractional Pucci operators (described in terms of the fractional eigenvalues) considering, for two real constants 0 < θ ≤ Θ, Λ s i u(x) + θ These operators are extremal operators in the class of fractional trace Laplacians with coefficients between θ and Θ.
Existence, uniqueness and a comparison principle for the Dirichlet problem for the operators P + θ,Θ (u) and P − θ,Θ (u) in C 2 domains with continuous and bounded exterior data can be proved as in Section 2. Notice that here we are computing the supremum (or the infimum) of fractional Laplacians of the function u restricted to subspaces of dimension j and hence the singularity of the kernel is of the form |y| −j−2s .
Then, with these operators at hand one can define a different version of the fractional Laplacian, Remark that the same procedure with the maximum and minimum of the local usual Laplacian acting on subspaces gives the Laplacian in the whole space.In fact, we have that is, the supremum over subspaces S of dimension j os the Laplacians of u restricted to S is given by the sum of the j largest eigenvalues of D 2 u(x) and similarly the infimum among subspaces of dimension i is the sum of the smallest eigenvalues of D 2 u(x).Therefore, for any pair (j, i) such that j + i = N we have Here we are computing the supremum among subspaces S of dimension j of the infimum of fractional Laplacians (of dimension i) of u restricted to subspaces T included in S. Notice that the fractional eigenvalues Λ j (u)(x) that we used here to define the trace fractional Laplacian are given by With these operators W + j,i (u) we can obtain a fractional version of the Laplacian adding them (taking care of the fact that the dimensions of the corresponding subspaces add up to N ), that is, just consider (− ∆) s (j1,i1)...(j k ,i k ) (u)(x) = − k l=1 W + j l ,i l (u)(x).
It should be interesting to know if there is a comparison principle for operators like this.
Then, we consider ψ k ± in such a way ψ k ± (x) → ±∞ as k → ∞ for all x ∈ Ω and denoting ū(x) = lim sup k→∞,y→xu k (y); u(x) = lim inf k→∞,y→x u k (y),which are well defined for all x ∈ R N , we clearly have that ū ≥ u in R N .Thus, we have u = ū = g in R N \ Ω and ū and u respectively are a viscosity subsolution and a viscosity supersolution to our problem.Thus, by comparison we get ū ≤ u in R N , and therefore we conclude that ū and u coincide and that u := ū = u is a continuous viscosity solution that satisfies the boundary condition in the classical sense.Uniqueness of solutions follows from the comparison principle.
Theorem 4.1. Let u s be viscosity subsolutions to our Dirichlet problem for the trace fractional Laplacian (4.3), and define u as the half-relaxed upper limit, u(x) := lim sup s→1 − y→x u s (y).
Theorem 4.2 (Harmonic convergence). Let u s be a viscosity solution to our problem for the trace fractional Laplacian, (4.3). Then u s converge uniformly to the solution u of the Dirichlet problem for the local Laplacian, (4.4).
|
v3-fos-license
|
2018-07-30T17:16:05.000Z
|
2018-07-30T00:00:00.000
|
118966969
|
{
"extfieldsofstudy": [
"Physics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1016/j.physletb.2018.12.042",
"pdf_hash": "496016ffa2cec55225ea2b2afe5564adf38ea24e",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42484",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "f24251d419015b98a7156014851242e76b4bb26b",
"year": 2019
}
|
pes2o/s2orc
|
Probing the topological charge in QCD matter via multiplicity up-down asymmetry
Relativistic heavy ion collisions provide the possibility to study the topological charge in QCD matter through the event-by-event fluctuating net axial charge or nonequal numbers of left- and right-handed quarks they generate in the produced quark-gluon plasma. Based on the chiral kinetic approach for nearly massless quarks and antiquarks in the strong vorticity field produced along the normal direction of the reaction plane of non-central heavy ion collisions, we show that a unique signal for the topological charge in QCD matter can be identified from the asymmetric distribution of particles with momenta pointing in the upper and lower hemispheres of the reaction plane as a result of the fluctuating net axial charge.
I. INTRODUCTION
The topological charge in QCD matter [1][2][3][4] can be studied in relativistic heavy ion collisions through the effect of the event-by-event fluctuating net axial charge or non-equal numbers of left-and right-handed quarks they generate in the produced quark-gluon plasma [5]. In the presence of the magnetic field created in non-central heavy ion collisions, the finite net axial charge can lead to a separation of positively and negatively charged particles in the transverse plane of a collision as a result of the vector charge current it generates along the direction of the magnetic field [5]. This so-called chiral magnetic effect (CME) [5][6][7], which has been observed in condensed matter systems such as the Weyl semimetals in external magnetic fields [8] and studied in other areas of physics [9], has been suggested as a possible explanation for the observed charge separation in experiments [10][11][12][13]. Realistic studies of this effect based on anomalous hydrodynamics [14][15][16][17] and chiral kinetic approach [18,19] require a magnetic field of lifetime of at least τ B = 0.6 fm/c to account for the experimental data. Such a long-lived magnetic field does not seem to be supported by the small electric conductivity of QGP from lattice QCD calculations [20,21].
On the other hand, the vorticity field produced in non-central heavy ion collisions also affects quarks and antiquarks of right-handedness differently from those of left-handedness, although independent of their charges, and it decays slowly with time [22]. Similar to the separation of charges due to the CME, the vorticity field can lead to a separation of baryons and antibaryons with respect to the reaction plane of a heavy ion collision [23,24]. This chiral vortical effect (CVE) depends, however, also on the net baryon number in the produced QGP, which is unfortunately very small in relativistic heavy ion collisions and thus makes the signal of CVE hard to detect. The vorticity field not only leads to a partial alignment of the spins of both positively and negatively charged quarks and antiquarks along its direction, as evidenced by the observed spin polarization of Λ hyperons in these collisions [25], but also tends to make quarks and antiquarks of opposite handedness move in opposite directions. With different numbers of right- and left-handed quarks and antiquarks in the quark matter, there will appear a difference between the numbers of quarks and antiquarks that move along and opposite to the direction of the vorticity field, resulting in a multiplicity up-down asymmetry with respect to the reaction plane of a heavy ion collision.
In the present paper, this effect is studied quantitatively in the chiral transport approach with initial quark and antiquark distributions taken from a multiphase transport model [26], which includes the essential collision dynamics of relativistic heavy ion collisions through its fluctuating initial conditions and strong partonic scatterings. Our results show that the multiplicity up-down asymmetry induced by the strong vorticity field created in the direction normal to the reaction plane of a heavy ion collision depends sensitively on the net axial charge fluctuation in the produced quark matter, thus providing a more promising probe of the topological charge in QCD matter than the CME and CVE. This paper is organized as follows. In Sec. II, we briefly review the equations of motion of nearly massless quarks and antiquarks and their scatterings in the chiral kinetic approach. The initial conditions and the magnetic and vorticity fields that are needed for carrying out the chiral kinetic calculations are described in Sec. III. Results on the multiplicity up-down asymmetry and its event-by-event distribution in Au + Au collisions at √ s N N = 62.4 GeV and centrality of 30-40% are given in Sec. IV to illustrate the effect of the vorticity field on the fluctuating net axial charge in the produced partonic matter as a result of its nonzero topological charge. Finally, a summary is given in Sec. V.
II. THE CHIRAL KINETIC APPROACH
In the chiral transport approach to massless quarks and antiquarks in both magnetic B and vorticity ω fields, their equations of motion are given by [27][28][29][30] where Q and λ = ±1 are the charge and helicity of a quark or antiquark (parton), and b = p/(2p 3 ) is the Berry curvature that results from the adiabatic approximation of taking the spin of a massless parton to be always parallel or anti-parallel to its momentum. Corrections to the above equations due to the small light u and d quark masses (m u = 3 MeV and m d = 6 MeV) [31] can be included by replacing p̂, p and b with p/E p , E p and p̂/(2E p 2 ), respectively, as in Ref. [32].
The factor √G = 1 + Qλ b · B + 6λ p (b · ω) in the denominator of Eqs. (1) and (2) modifies the phase-space distribution of partons and ensures the conservation of vector charge. The modified parton equilibrium distribution can be achieved from parton scatterings by requiring the parton momenta p 3 and p 4 after a two-body scattering, which is determined by their total scattering cross section, to be sampled with the probability G(p 3 ) G(p 4 ) [30]. For the parton scattering cross section σ tot , we choose it to reproduce the small shear viscosity to entropy density ratio η/s in the QGP extracted from experimentally measured anisotropic flows in relativistic heavy ion collisions based on viscous hydrodynamics [33,34] and transport models [35,36]. This empirically determined value is close to the conjectured lower bound for a strongly coupled system in conformal field theory [37] and the values from lattice QCD calculations [38]. For partonic matter dominated by light quarks as considered here, we can relate η/s to the total cross section σ tot by η/s = (1/15) p τ = p/(10 n σ tot ) [39] if the cross section is taken to be isotropic, where τ is the relaxation time of the partonic matter, n is the parton number density, and p is the average momentum of partons. Taking η/s = 1.5/(4π) as determined in Ref. [40] from anisotropic flows in relativistic heavy ion collisions using viscous hydrodynamics, we then calculate the parton scattering cross section as a function of parton density and temperature or energy density.
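As a rough illustration of the relation η/s = p/(10 n σ_tot) used above, the snippet below inverts it to obtain the isotropic parton cross section; the density and average momentum are illustrative placeholders, not values tabulated in the paper, and the ħc factor is an assumption about the natural-unit conversion.

```python
# Sketch: invert eta/s = <p> / (10 * n * sigma_tot) for the isotropic cross section.
# The density and average momentum below are illustrative placeholders.
import math

HBARC = 0.19733  # GeV*fm, converts GeV*fm^3 to an area in fm^2

def parton_cross_section(eta_over_s, n_density_fm3, avg_momentum_gev):
    """Return sigma_tot in fm^2 from eta/s = <p> / (10 n sigma_tot)."""
    # <p>/(10 n (eta/s)) carries units GeV*fm^3; dividing by hbar*c gives fm^2.
    return avg_momentum_gev / (10.0 * n_density_fm3 * eta_over_s) / HBARC

eta_over_s = 1.5 / (4.0 * math.pi)   # value quoted in the text
n = 5.0                              # parton density in fm^-3 (placeholder)
p_avg = 0.8                          # average parton momentum in GeV (placeholder)
print(f"sigma_tot ~ {parton_cross_section(eta_over_s, n, p_avg):.2f} fm^2")
```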
III. INITIAL CONDITIONS AND THE MAGNETIC AND VORTICITY FIELDS
For the initial phase-space distribution of partons, we take it from the string melting version of the AMPT model [26] with the values a = 0.5 and b = 0.9 GeV 2 in the Lund string fragmentation function to give a better description of the charged particle multiplicity density, momentum spectrum, and two- and three-particle correlations [41,42] in heavy ion collisions at RHIC. For the event-by-event fluctuating net axial charges in heavy ion collisions due to the topological charge fluctuation in QCD [1][2][3][4], we let each event have either more right-handed quarks and antiquarks or more left-handed quarks and antiquarks with the probability (1 + p)/2, where p = N 2 5 /N with N 2 5 being the initial axial charge fluctuation and N is the total number of partons in an event.
For the magnetic field, we obtain it from the Lienard-Wiechert potential produced by the spectator protons in the colliding nuclei. As shown in Refs. [43,44], the resulting magnetic field in the overlap region of the two nuclei is in the direction perpendicular to the reaction plane and has a very large strength but a very short lifetime. Since the partonic effect on the magnetic field is small [45], due to the small electric conductivity in QGP from lattice QCD calculations [20,21], we neglect it in the present study and also assume that the magnetic field is uniform in space. As to the vorticity field, it is calculated from the velocity field v(r, t) of partons, which is determined from the average velocity of partons in a local cell of the partonic matter via ω = (1/2) ∇ × u with u = γv and γ = 1/√(1 − v 2 ) as described in Ref. [30]. We note that contrary to the short lifetime of the magnetic field, the vorticity field produced in non-central heavy ion collisions decays slowly with time [22].
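The definition ω = (1/2) ∇ × (γv) can be evaluated on a coarse-grained grid of cell-averaged velocities; the sketch below uses central finite differences on a Cartesian grid and is only an illustration of the definition, not the cell structure of the AMPT-based calculation.

```python
# Sketch: vorticity omega = 0.5 * curl(gamma * v) on a Cartesian grid of
# cell-averaged velocities (units of c); dx is the cell size (illustrative setup).
import numpy as np

def vorticity(vx, vy, vz, dx):
    """vx, vy, vz: 3D arrays indexed as (x, y, z); returns the three omega components."""
    v2 = np.clip(vx**2 + vy**2 + vz**2, 0.0, 0.999999)
    gamma = 1.0 / np.sqrt(1.0 - v2)
    ux, uy, uz = gamma * vx, gamma * vy, gamma * vz
    dux = np.gradient(ux, dx)   # list of derivatives along axes (x, y, z)
    duy = np.gradient(uy, dx)
    duz = np.gradient(uz, dx)
    wx = 0.5 * (duz[1] - duy[2])   # 0.5 * (du_z/dy - du_y/dz)
    wy = 0.5 * (dux[2] - duz[0])   # 0.5 * (du_x/dz - du_z/dx)
    wz = 0.5 * (duy[0] - dux[1])   # 0.5 * (du_y/dx - du_x/dy)
    return wx, wy, wz
```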
The partonic matter from the AMPT model after including event-by-event fluctuations in the net axial charge is then evolved according to the chiral kinetic equations of motion and parton scatterings described in the above until its energy density decreases to ǫ 0 = 0.56 GeV/fm 3 , similar to the critical energy density from LQCD for the partonic to hadronic transition [46] and also that corresponding to the switching temperature T SW = 165 MeV from the partonic to the hadronic phase used in viscous hydrodynamics [47].
IV. RESULTS
To illustrate the effect of vorticity field, we consider Au+Au collisions at √ s N N = 62.4 GeV and centrality of 30-40% for the two cases of without (p = 0) and with (p = 0.4) initial axial charge fluctuation in the partonic matter. For the latter, only events of more right-than left-handed partons are considered.
In the presence of both magnetic and vorticity fields and with a net axial charge density in a partonic matter, the distributions of the azimuthal angles (φ) of positively and negatively charged partons can be expressed by where Ψ RP is the azimuthal angle of the reaction plane in a collision, v 2 is the elliptic flow, and a CVE and a CME are the multiplicity up-down asymmetry of charged partons induced by the vorticity and magnetic fields, respectively. Both a CVE and a CME can be positive or negative depending on the sign of the net axial charge in the partonic matter. Because of the opposite effects of magnetic field on the multiplicity up-down asymmetry of positively and negatively charged partons, we can study the effect of vorticity field by considering the azimuthal angle distribution of both positively and negatively charged quarks and antiquarks to remove the contribution of a CME in Eq.(3).
A. The multiplicity up-down asymmetry Although the multiplicity up-down asymmetry a mo 1 is nonzero and positive for events with more right-than left-handed quarks and antiquarks obtained from a finite initial axial charge fluctuation p, it is negative with same magnitude if events with more left-than righthanded quarks and antiquarks are obtained with the same value of p. As a result, its average value over all events a mo 1 is zero as the average of net axial charge N 5 is zero. On the other hand, the event-by-event distribution N (A) of the normalized multiplicity up-down asymmetry A = NU −ND NU +ND , where N U and N D are the numbers of partons with momenta pointing in the upper and lower hemispheres of the reaction plane, respectively, is wider as a 2 CVE becomes larger. We therefore introduce the multiplicity up-down asymmetry event distribution N (A) and consider the ratio N with (A)/N without (A) of those with [N with (A)] and without [N without (A)] a net axial charge fluctuation. In upper panels of Fig. 3, we show by the blue band this ratio for mid-pseudorapidity light quarks of transverse momenta in the range of 0.05 < p T < 2 GeV/c. The upper left panel is for the case using initial reaction plane of a collision, and it shows a distinct concave shape. Using the event plane determined by the azimuthal angles of emitted particles leads to the same conclusion as shown by the blue band in the upper right panel of Fig. 3. For particles in the smaller transverse momentum range of 0.15 < p T < 2 GeV/c, their multiplicity up-down asymmetry event distribution ratio N with (A)/N without (A), shown in lower panels of Fig. 3, also has a concave shape whether the initial reaction plane or the event plane is used, although its curvature is smaller than the for partons in the larger transverse momentum range.
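The concave shape of N_with(A)/N_without(A) can be reproduced with a toy Monte Carlo in which each particle enters the upper hemisphere with probability (1 ± a)/2 and half of the events flip the sign of a; the numbers below are the 1/N and a_CVE values quoted in the text, but the event generation itself is only a sketch of the statistical argument, not the chiral kinetic simulation.

```python
# Toy Monte Carlo for the up-down asymmetry A = (N_U - N_D)/(N_U + N_D).
# Each particle is "up" with probability (1 + sign*a)/2; the event sign models the
# fluctuating net axial charge.  Values of N and a follow the numbers in the text.
import numpy as np

rng = np.random.default_rng(0)
n_events = 200_000
N = int(round(1.0 / 2.133e-3))      # average multiplicity quoted in the text
a = 4.0 * 9.82e-3 / np.pi           # a = 4 a_CVE / pi

def asymmetry_sample(a_value):
    sign = rng.choice([-1.0, 1.0], size=n_events)   # event-by-event sign of N_5
    p_up = 0.5 * (1.0 + sign * a_value)
    n_up = rng.binomial(N, p_up)
    return (2.0 * n_up - N) / N

A_with = asymmetry_sample(a)
A_without = asymmetry_sample(0.0)

bins = np.linspace(-0.2, 0.2, 41)
h_with, _ = np.histogram(A_with, bins=bins)
h_without, _ = np.histogram(A_without, bins=bins)
ratio = np.divide(h_with, h_without, out=np.zeros_like(h_with, float),
                  where=h_without > 0)
# A concave (upward-opening) ratio versus A signals a nonzero net axial charge.
```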
The above results can be understood as follows. According to the particle azimuthal angle distribution given in Eq. (3), a particle has the probabilities 1+a 2 and 1−a 2 to have a positive and negative value for sin(φ − Ψ RP ), repectively, where a = 4a CVE /π. The event-by-event distribution of the up-down asymmetry A then has zero average value and a fluctuation given by where N = N U +N D . Taking into account the fluctuation of particle number N in each event for a given momentum range, the fluctuation of A is thus If the number of particles N is sufficient large and the particles in each event have no correlations, the final distribution of A then has the normal Gaussian distribution N (0, (∆A) 2 ) according to the central limit theorem. From the value 1/N = 2.133 × 10 −3 and a CVE = ±9.82 × 10 −3 for mid-pseudorapidity light quarks in the transverse momentum range of 0.05 < p T < 2 GeV/c for the collisions considered in present study, where the plus and minus signs are for events with more right-handed particles and more left-handed quarks, respectively, and the average is taken over particles in corresponding events, the resulting ratio N with (A)/N without (A) is shown by red solid circles in upper left panel of Fig. 3. They are seen to be similar to those calculated with p = 0.4 and p = 0 for mid-pseudorapidity light quarks in the same transverse momentum range within the uncertainty of using the normal distribution for the multiplicity up-down asymmetry A. This similarity is also seen in the case of using the event plane from final particles, as shown in the upper right panel of Fig. 3, as well as for particles with momenta in the range of 0.15 < p T < 2 GeV/c, where 1/N = 2.914 × 10 −3 and a CVE = ±8.29 × 10 −3 , as shown in lower panels of Fig. 3. To help identify this effect in experiments, we follow the consideration of Ref. [48] on charge separation due to the CME by introducing the ratio N real (A)/N rand (A), where N real (A) is the real distribution of A and N rand (A) is the distribution of A with the momentum of each parton in an event having the same probability to be in the upper and lower hemispheres of the reaction plane. In upper panels of Fig. 4, we show this ratio for midpseudorapidity light quarks of transverse momenta in the range of 0.05 < p T < 2 GeV/c. The upper left panel is for the case using initial reaction plane of a collision, and it shows a more distinct concave shape for this ratio for the case with a nonzero axial charge fluctuation than the case with zero axial charge fluctuation except the statistical fluctuation. Using the event plane determined by the azimuthal angles of emitted particles leads to the same conclusion as shown in the upper right panel of Fig. 4. For particles in the smaller transverse momentum range of 0.15 < p T < 2 GeV/c, their N real (A)/N rand (A) ratio, shown in lower panels of Fig. 4, for the case without net axial charge fluctuation shows a convex shape whether the initial reaction plane or final event plane is used. Including a nonzero axial charge fluctuation in the partonic matter, this ratio changes to the concave shape. We note that the convex shape for the N real (A)/N rand (A) ratio in the case without net axial charge fluctuation is due to the fact that partons in the string melting version of the AMPT model are from decays of hadrons produced in the HIJING model [49], which is used as its initial conditions, and are thus partially correlated in momentum space even after undergoing multiple scatterings. 
Without any momentum correlations, the N real (A)/N rand (A) ratio should have a constant value of one.
V. SUMMARY
To summarize, we have proposed to study the effect of the vorticity field in non-central relativistic heavy ion collisions on the particle multiplicity up-down asymmetry relative to the reaction plane in order to probe the net axial charge fluctuation in the produced partonic matter, which is related to the topological charge in QCD. Solving the chiral transport equation in the presence of a self-consistent vorticity field and including the vector charge conserving scatterings among quarks and antiquarks from the AMPT model, we have found that the multiplicity up-down asymmetry, quantified by the event distribution N (A) of the difference between the numbers of quarks and antiquarks with momenta pointing in the upper and lower hemispheres of the reaction plane, is sensitive to the net axial charge fluctuation in the partonic matter. In particular, the ratio between the multiplicity up-down asymmetry event distributions for the cases with finite and zero net axial charge fluctuation is directly related to the inverse of the multiplicity of partons and the net axial charge fluctuation, besides depending on the strength of the vorticity field. Because of its local structure [50,51], the vorticity field has been shown to result in a local spin polarization of Λ hyperons in the direction of total orbital angular momentum that can be as large as 10% [51] and also depends less on the collision energy. This is in contrast to the global spin polarization, which has a magnitude of about 3% at √ s N N =7.7 GeV and decreases with the collision energy. Since the multiplicity up-down asymmetry distribution depends on the square of the strength of the vorticity field, its sensitivity to the net axial charge can be significantly enhanced if, for example, particles with momentum components in the event plane satisfying p x p z > 0 or p x p z < 0 are used in the analysis. Measuring the multiplicity up-down asymmetry event distribution in non-central relativistic heavy ion collisions thus provides a promising method to probe the net axial charge fluctuation in partonic matter and thus the topological charge in QCD matter.
|
v3-fos-license
|
2021-09-27T18:42:15.875Z
|
2021-08-17T00:00:00.000
|
243073636
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://doi.org/10.21203/rs.3.rs-742472/v1",
"pdf_hash": "5ddaa6d75117ecfc6d9028504fc3350e1b1697f1",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42486",
"s2fieldsofstudy": [
"Computer Science",
"Engineering",
"Environmental Science"
],
"sha1": "d9d6c81e478ace73ae8a61b095faebc4410b2aa9",
"year": 2021
}
|
pes2o/s2orc
|
Modeling of Trusted Public Emergency Services for Smart Cities Using Blockchain and IoT-based Cognitive Networks
The Internet of Things (IoT) has gained attention over the last few years due to the deployment of various smart city applications. The existing literature discusses different public emergency service (PES) aspects, from smart-healthcare to smart-home automation. However, less work explores the smart fire-brigade system. PESs require high computation, timely service fulfillment, service transparency, and trust, which are difficult to achieve through a centralized system. In recent years, blockchain technology has gained enormous popularity for immutable data management that ensures transparency, reliability, and data integrity using distributed storage. This paper presents a blockchain based model for secure and trusted public emergency service in IoT-enabled smart cities (BMSTP) to handle PES requests fairly in real-time. An edge compute server (ECS) is introduced to enhance data processing speed and local data storage. Simultaneously, a queuing theory model is used to process PES requests quickly. The ECS manages an access control list (ACL) for smart-home IoT devices to protect against the illegal placement of any new IoT devices near a smart-home aimed at misleading public emergency service departments (PESDs). Further, a reputation model is designed for PESDs to rate their service quality. We explored the BMSTP for smart-homes placed under different sub-areas of a smart-city. The experiment results show the proposed system model is efficient in scheduling smart-home PES requests to an appropriate PESD and minimizing the delay in reaching the smart-home location.
Introduction
The smart city covers urban areas equipped with Internet of Things (IoT) devices. These IoT devices provide convenience to individuals through different smart applications. The IoT devices also help in data collection for various smart-city applications [1], including smart-home automation, smart-healthcare, smart-transportation, robotics, smart-agriculture, and many more, as shown in Fig. 1. The demand for these IoT devices increases day by day and is expected to reach 50 billion worldwide by 2030 [2]. With the increasing number of such IoT devices, the data generated from them grows exponentially, and handling this massive data will soon become more challenging. The IoT devices generally have low computation power, limited storage, and restricted network capabilities, and they are vulnerable to attacks. Hence, few options are currently available to systematically process and manage these enormous numbers of connected IoT devices and their massive data. Therefore, to provide fast access and extra storage for these IoT devices, we need to exploit the edge compute server (ECS) functionality. The ECS removes this hurdle by residing near IoT devices and protects against malicious activity by supporting an access control list (ACL) based on access control policies [3]. Hence it ensures better performance, fast computation, device authenticity, and scalability. Existing public emergency services (PESs) depend on a centralized system that involves social agencies and government regulation to break data sharing barriers [4]. Due to the centralized system, PESs suffer from high congestion and a single point of failure, whereas citizens face misuse of their personal information without any prior knowledge. In this situation, a distributed and reliable system is essential while accessing PES resources. Hence, the importance of blockchain technology is highlighted in this paper for the smart-city implementation to overcome these contradictions. Blockchain technology supports various features such as immutability, trust, transparency, and synchronization, using multiple distributed ledgers maintained by blockchain nodes. Therefore, it eliminates a single point of failure. The blockchain smart contract adds a trust layer between citizens and PES providers and vice-versa. The public blockchain takes a long time to maintain consistency, so it is not worth adding every single piece of information to it. Hence, an ECS [5] acts as an intermediary between IoT devices and the blockchain. The ECSs are deployed near IoT devices, which require fast connections and low latency for PESs in the smart-city. The ECS calls a blockchain application programming interface (API) to transfer PES requests gathered from IoT devices under a high traffic load in a desirable time. Allocating a PES provider in real-time with minimum waiting is a significant challenge, and executing these operations with a conventional approach adds additional delay. Introducing a queuing model is suitable for handling the several PES requests generated by citizens and providing on-time emergency services with low latency to protect against catastrophe.
Our contributions
To overcome the challenges mentioned above, we propose a blockchain based model for secure and trusted public emergency service in IoT-enabled smart cities (BMSTP). The proposed model detects fire in a smart-home and provides PES by suggesting the nearest fire brigade department to protect smart-homes from catastrophe. The functionality of the proposed system architecture is enhanced by implementing the smart contract logic. In brief, the main contributions of this paper are as follows.
1.) A three-layered architecture is proposed to define the working of each entity involved in the BMSTP. With this, a blockchain network is designed using a private blockchain platform. 2.) The proposed model has two types of entities at the second layer, i.e., the IoT controller and the service controller. These two behave as ECSs. The IoT controller manages smart-home data and forwards PES requests on behalf of smart-homes, whereas the service controller keeps PESD data and performs different tasks such as evaluating the distance between smart-homes and PESDs and selecting the PESD with the minimum request queue. To fairly balance the PES requests in real-time, a queuing theory model is utilized at the PESDs and an ACL at the IoT controller to maintain smart-home IoT device information. 3.) A smart contract is used to implement various functions. These functions include registration of IoT controllers, service controllers, smart-homes and PESDs, confirmation of the PESD service provider, and reputation generation and updating for PESDs. 4.) A reputation model is implemented for PESDs to bring transparency and trust to the proposed system architecture while serving PES requests.
Organization
The rest of the paper is organized as follows. Section 2 discusses the background knowledge of blockchain technology. In section 3, previous work on blockchain and IoT based smart-city applications and use-cases is presented. A detailed view and implementation of the proposed BMSTP is described in section 4. Section 5 presents the simulation and results, and finally, section 6 presents the conclusion and future scope to extend the current work.
Background
This section provides blockchain technology fundamentals and private blockchain components to understand the flow of PES requests and transaction commitment, as described below.
Blockchain
The term blockchain came up with bitcoin in 2008 [6], introduced under the name Satoshi Nakamoto. Since then, a satoshi has been considered the smallest denomination of the bitcoin blockchain. The first transaction on the bitcoin blockchain was recorded in May 2010. The success of the bitcoin blockchain rests on its hashing algorithm, consensus mechanism, mining technique, and longest-chain rule, which together remove the double-spending problem. The bitcoin blockchain aims to overcome financial crisis issues by transferring digital assets among individuals at different locations. The bitcoin blockchain participants are connected through a peer-to-peer network to publish financial transactions based on encryption using a public-private key pair.
In later years the term blockchain came to be widely used to indicate different public and private blockchain platforms such as Ethereum, Rootstock, R3 Corda, Quorum, and Hyperledger for developing real-world applications. The blockchain is an append-only data structure that maintains immutable transactions in distributed ledgers between unknown and untrusted individuals. This leads to the removal of a centralized system that would otherwise be used to hold useful information. The block is an essential component of the blockchain that contains a block header and a block body, as represented in Fig. 2. These two hold several pieces of information such as the header number, nonce, current hash, previous hash, signed transaction data, etc. [7]. The first block on the blockchain is called the genesis block, which holds blockchain information. The upcoming blocks build on top of the genesis block, connected through a hash function to form a chain of blocks.
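As a minimal illustration of how blocks are linked through hashes, the sketch below builds a tiny hash-linked chain; it is a conceptual example only, not the data layout of any particular blockchain platform, and the field names are illustrative.

```python
# Conceptual sketch of a hash-linked chain of blocks (not a production format).
import hashlib
import json
import time

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def new_block(previous_hash, transactions, nonce=0):
    return {
        "header": {
            "timestamp": time.time(),
            "previous_hash": previous_hash,
            "nonce": nonce,
        },
        "body": {"transactions": transactions},
    }

genesis = new_block(previous_hash="0" * 64, transactions=["network-setup"])
block_1 = new_block(previous_hash=block_hash(genesis),
                    transactions=["PES request from smart-home H1"])
chain = [genesis, block_1]

# Any change to the genesis block breaks the link stored in block_1.
assert block_1["header"]["previous_hash"] == block_hash(genesis)
```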
Smart Contract
The smart contract was considered an impractical concept in the early years due to limited advancement in information and communication technologies. Later, Nick Szabo, in 1994 [8], provided a conceptual view of the smart contract. In short, a smart contract is used to write the business rules or business agreements between two or more transacting parties in the form of self-executable code. The smart contract definition changed after the invention of blockchain technology and became more famous. The smart contract [9] is Turing-complete logic for writing application code that is stored at a permanent blockchain address to rule out third-party involvement. These smart contracts cannot be modified at later stages, and they execute automatically in a distributed environment when certain conditions are met. An example of a Turing-complete language supported by a blockchain platform is Solidity, powered by the Ethereum blockchain. The Hyperledger blockchain encourages multiple languages such as Python, Rust, JavaScript, Java, and Go, as per developer comfort, to write business logic. Similarly, some other blockchain platforms support modeling languages that are platform dependent.
Private Blockchain
Blockchain platforms are subdivided into two categories: public blockchain and private blockchain. The public blockchain is sometimes called a permissionless blockchain, whereas the private blockchain is known as a permissioned blockchain. The Hyperledger blockchain platform comes under the private blockchain. The Linux Foundation [10] hosts the Hyperledger blockchain platform as a cross-industry collaboration project. The platform was designed to enable customized networking rules, operate different consensus protocols, and process thousands of requests per second with low latency. Compared to the public blockchain, the private blockchain allows only known entities to participate in blockchain-related operations. The consensus protocol [11] in the private blockchain requires mutual agreement among the involved entities to commit transactions on the blockchain. According to application requirements, the private blockchain supports various consensus protocols such as Byzantine Fault Tolerance, Kafka, RAFT, Practical Byzantine Fault Tolerance, Plenum, etc.
Private Blockchain Components
The private blockchain [12] is a collection of fabric organizations that contain multiple entities to build a blockchain network. These entities are the membership service provider (MSP), fabric certificate authority (fabric-CA), orderer, peer, client, and channel. The functionality of each entity is described below.
1.) MSP: In a private blockchain, the MSP is responsible for authorizing an organization's fabric-CA. The MSP generates the digital certificates for each fabric-CA and maintains their information in the certification list.
2.) Fabric-CA: The fabric-CA resides under an organization and is used to generate digital certificates for peers.
3.) Peers:
The peers are sub-divided into two types: endorsing peers and committing peers. The endorsing peer performs various functions, including consensus achievement and endorsement of transactions, whereas the committing peer checks transaction validity and manages transaction blocks in its ledger.
4.) Client:
The client is an entity that interacts with the blockchain network to execute smart contract functions. The client creates a transaction (i.e., a read-write set of information) and sends it to an endorsing peer. The endorsing peer validates the transaction by verifying the client's public key and a time-stamp. The endorsing peer signs the received transaction, sends it back to the client, and maintains a local copy in its ledger. The client waits for a sufficient number of endorsements and sends the endorsed transaction to the orderer. The orderer generates a valid block of transactions and sends it to the committing peers to update their ledgers with the new block. The committing peer informs the client of the successful transaction. Finally, the endorsing peer discards the local copy available in its ledger (a conceptual sketch of this endorse-order-commit flow is given after this component list).
5.) Orderer:
The orderer bundles the time-stamped transactions in a specific order to create a valid block. The transaction includes a header and a payload. The header comprises the transaction and channel identification, whereas the payload contains the time-stamped transactions and the endorsing peer's signature. A block is created according to the maximum transaction limit and the timeout period.
6.) Channel: The channel is a medium to connect multiple organizations to receive the same information on the blockchain network. The channel also provides data privacy and confidentiality among organizations.
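The endorse-order-commit flow described for the client above can be summarized in a small simulation; this is plain Python pseudocode of the concept, not the actual Hyperledger Fabric SDK, and the function names and signatures are illustrative assumptions.

```python
# Conceptual simulation of the endorse -> order -> commit transaction flow.
# Not the Hyperledger Fabric SDK; names and signatures are illustrative.
import hashlib

def endorse(peer_id, payload):
    # An "endorsement" here is just a signature stand-in: a hash over peer id + payload.
    return hashlib.sha256(f"{peer_id}:{payload}".encode()).hexdigest()

def order(tx_payload, endorsements):
    # The orderer bundles the endorsed transaction into a block.
    return {"transactions": [tx_payload], "endorsements": endorsements}

def client_submit(tx_payload, endorsing_peers, required_endorsements, committing_ledgers):
    # 1. Client sends the read-write set to endorsing peers and collects signatures.
    endorsements = [endorse(p, tx_payload) for p in endorsing_peers]
    if len(endorsements) < required_endorsements:
        raise RuntimeError("not enough endorsements")
    # 2. The orderer creates the block from the endorsed transaction.
    block = order(tx_payload, endorsements)
    # 3. Committing peers validate the block and append it to their ledgers.
    for ledger in committing_ledgers:
        ledger.append(block)
    return block

ledgers = [[], []]   # two committing peers
client_submit("register PESD fire-brigade-7", ["peerA", "peerB"], 2, ledgers)
assert ledgers[0] == ledgers[1]   # all committing peers hold the same block
```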
Blockchain Smart City
In [13] author proposed an IoT based smart manufacturing system for quality assurance applications. The blockchain is utilized for building a trust relationship and for improving security concerns for manufacturing life-cycle processes. Various trust factors are mentioned, with limited security and without practical guidelines. In [14] author proposed a local lightweight expandable blockchain model for a smart factory. Two defensive techniques are presented, namely whitelisting and dynamic authentication, to provide security and privacy. Also, an ACL is designed using the Bell-La-Padula (BLP) and Biba models to prevent malicious activities. The use of bitcoin based local blockchain development limits the transactions per second and wastes resources. In [15] author proposed a useful resource utilization model for IoT devices in a smart-city. The edge and miner nodes are placed together in a single blockchain network for the proper functioning of IoT devices which are connected with an edge network. The miner nodes are responsible for performing high computational tasks. The Proof-of-Work consensus is used to bring transparency and security. This consensus wastes more energy and requires heavy computational resources. In [16] author proposed the smart-city and blockchain notion by linking it with sharing economy services. The distributed technology brings trust, transparency, and privacy to the service relationship and eliminates intermediaries. Hence, it results in low operational costs and increases the efficiency of sharing services. A theoretical viewpoint is provided with no practical implementation. In [17] author proposed a three-tier architecture for supporting scalable sharing economy services in the mega smart-city. The blockchain nodes perform data synchronization with the backend cloud-tier. The architecture is extended by adopting artificial intelligence models to capture the information and feed it into an artificial intelligence engine to identify patterns through deep and convolutional neural networks. These patterns are used to share various economy services, depending on the need. In [18] author mentioned decentralized authentication and trust management for sensor-based IoT networks and designed a human-like knowledge-based trust model. This model determines the reputation of nodes and uses the pretty good privacy (OpenPGP) model for the authentication process.
Blockchain Emergency Services
In [19] author designed a decentralized authentication system using an identity based signature scheme with multiple authorities (MA-IBM) approach and proposed a blockchain-based electronic health record (EHR) system. To verify MA-IBM security, a random oracle model is designed using the Diffie-Hellman assumption. The model may cause excessive communication overheads due to multiple authority signatures. In [20] author proposed a patient centric access control for securing protected health information (PHI) using blockchain. For medical healthcare record security, a lightweight double encryption algorithm (i.e., ARX ciphers) is used, and a Diffie-Hellman key exchange is utilized to transfer public keys. To bring anonymity and authenticity, a lightweight privacy preserving ring signature approach is proposed. In [21] author proposed a blockchain based secure and privacy preserving EHR sharing protocol. The EHR data sharing and privacy preservation are achieved through keyword search encryption and a proxy re-encryption technique while sharing EHR information between different medical institutions' data requestors. Proof-of-authentication is proposed as a consensus mechanism to build consortium blockchain regulation for efficient operation. The keyword search in the system may lead to an endless search.
In [22] author proposed a lightweight access control system for an IoT network based on blockchain. The management hub node holds the access control policies to permit access for registered IoT devices. For security analysis, the STRIDE (i.e., spoofing, tampering, repudiation, information disclosure, denial of service (DoS) attack, and elevation of privileges) model is used to check against the presence of threats. The proposed architecture may suffer from overhead due to waiting for the blockchain to provide access control information. Also, the presence of a malicious management hub node could tamper with, repudiate, and disclose information of IoT devices. In [23] author proposed a blockchain based emergency service architecture for the smart home. To ensure security and privacy, different authentication mechanisms such as asymmetric keys, digital signatures with the interplanetary file system, and a QR code with a one-time password are used. In the proposed system, the public blockchain limits the smart contract security, and a secure file system is required. In [24] author mentioned a private blockchain-based access control (PBAC) model for the smart home to protect against illegal access from service providers. The administrator uses two-way secure authentication and token based access control policies to grant service providers access to smart devices in a smart home. A fixed time-interval is created for the service provider during session creation to get home access, which may cause the service to remain incomplete. In [25] author introduced a blockchain based remote user authentication system for the smart home. For authentication between the user and the smart home gateway, a group signature and message authentication code technique is proposed. An elliptic curve integrated encryption scheme (ECIES) is used for data transmission. The author suggests improving the access control policy by using ABAC in the future. In [26] author proposed an intelligent agriculture system based on blockchain. Bilinear pairing and dark web technology are used to create an agriculture network and a private blockchain, respectively. An identity authentication mechanism is added to verify the legitimacy of any identity, and a hash-based message authentication code is used to determine message authenticity.
In [27], the authors proposed a blockchain-based secure firmware management framework for heterogeneous device management. Unidirectional, bidirectional, firmware update, and update propagation protocols are proposed for secure device management. A private blockchain is used to record the firmware transmission and update history. In [28], the authors designed a microgrid architecture for a smart energy grid (SEG) using blockchain in a smart-city. A blockchain_SEG application is developed based on the proposed model for information exchange and for buying or selling energy between energy distributors. The blockchain records the quantity of energy stored, the selection of the available energy supplier, and the visualization of final sales.
Queuing Theory Model
In [29], the authors proposed an M/M/n/L queuing theory model for simulating the mining process in a blockchain. In [23], the authors presented a closed-loop control system that uses an M/M/1 queuing model to evaluate the optimal number of blocks in the queue of a miner network for an IoT system. In [31], the authors proposed a queuing theory model that hospital managers can use to guide nursing staff decisions; the model identifies the nurse-to-patient ratio needed to deliver patient services. Its limitations are the assumption of a homogeneous workforce and the absence of any distributed technology to bring more reliability into the proposed system model. In [32], the authors described an emergency service system that provides medical service at different geographical locations.
A multi-server hypercube queuing model is implemented for server-to-consumer services. A considerable amount of work has thus been reported for smart-city implementation, covering applications such as smart healthcare, smart-home automation, firmware management, and authentication systems, as shown in Table 1. After reviewing the approaches and solutions proposed by these authors, we propose a new model for calling PES under unusual environmental conditions. A private blockchain and reputation management are utilized to bring transparency and trust between the user and the PES provider. To handle the massive data generated by IoT devices, the ECS is used to store and process these data.
Table 1: Comparison of different properties between the proposed system model and others. *Note: a check mark (✓) denotes that a feature is involved, and a cross (✗) denotes that it is not.
Blockchain-Based Model for Secure and Trusted Public Emergency Service
In this section, we present the design of BMSTP. First, a three-layered architecture is presented, and the components used in each layer and their roles are specified. Next, the working of BMSTP with respect to the private blockchain and the calling of smart contract functions is explained. In this paper, we focus on fire detection in a smart-home and on distributing each PES request to the PESD with the minimum request queue length so that it can reach the smart-home location in minimum time.
System Architecture
The BMSTP architecture comprises three layers: the infrastructure layer, the edge layer, and the blockchain layer, as shown in Fig. 3. The infrastructure layer is the collection of smart-homes and PESDs, whereas the edge layer comprises the IoT controller and the service controller. The role of the blockchain layer is to receive and transfer PES requests and store their information.
1.) Infrastructure Layer: The infrastructure layer consists of smart-homes and PESDs. Each smart-home holds smart IoT devices, including a fire detector, a smoke detector, a fire alarm, and an IoT gateway. On the other side, each PESD manages multiple PESD service providers (i.e., fire brigades). Each PESD maintains its own request queue to provide instant service to smart-homes in the smart-city.
2.) Edge Layer:
The edge layer contains the IoT controller and the service controller. The IoT controller performs multiple operations: gathering data from and managing the IoT devices and IoT gateways, continuously checking the IoT devices' values against their thresholds, and maintaining an ACL for the IoT devices and IoT gateways. The benefit of the ACL is that it detects when an adversary places a new IoT device near a smart-home to misguide the IoT controller. The service controller stores the information of multiple PESDs together with their request queues. The service controller performs some local computation while sending a PES request to a particular PESD; this local calculation identifies a suitable PESD when forwarding PES requests, which minimizes the waiting time for smart-homes.
3.) Blockchain Layer:
The blockchain layer is a combination of fabric organizations. Each fabric organization is associated with either an IoT controller or the service controller, and all are connected through a common channel. The fabric organizations store the smart contract information. The smart contract triggers itself once its condition is met and generates a transaction on the blockchain.
Working Architecture
This section describes the functionality of each entity involved in the BMSTP, from data generation to data processing. The following subsections provide an overview of network initialization, the queuing model implementation, and reputation management for PESDs.
Network Initialization
The blockchain network initialization and smart contract installation are the basic operations required for the proper functioning of BMSTP, as shown in Fig. 4. It is assumed that a smart-city is sub-divided into multiple sub-areas and that each sub-area contains one PESD. According to the total number of sub-areas, an IoT controller is created for each sub-area to handle its PES requests, and a single service controller is generated to manage all PESDs and their information. In the following, the fabric organization is denoted FO, and the IoT controller and service controller are denoted IC and SC, respectively.
Step1: The blockchain consists of multiple fabric organizations FO_i, where i ∈ {1, 2, …, n}, and each IC or the SC belongs to exactly one FO_i. It is assumed that no two IoT controllers or service controllers belong to the same fabric organization. The IoT controller and the service controller connect with the client in their fabric organization to send and receive blockchain-related information. The MSP creates a certificate for each fabric-CA to make the fabric organization valid on the blockchain, as given by Eq. (1).
After receiving its certificate from the MSP, each fabric-CA generates certificates for the peers of its fabric organization, as given by Eq. (2).
Here, the certificate of the i-th peer in the fabric organization is generated; it is assumed that, as required, a fabric organization may contain multiple peers. Once all fabric organization entities have received their certificates, they connect with a common channel, as given by Eq. (3).
At this point, the basic blockchain network is established. A smart contract is then installed on all fabric organization peers and on the channel through the software development kit (SDK), as given by Eq. (4).
Step2: The IoT controller and the service controller invoke the register IoT controller (register_IC) and register service controller (register_SC) smart contract functions, respectively, described in Section 4.3. Both provide registration information to their respective fabric organizations and in return receive a pair of public-private keys, which uniquely identifies them on the blockchain.
Step3: After successful registration, the IoT controller and the service controller start registering smart-homes and PESDs, respectively. A smart-home calls the register smart home (register_SH) function, and a PESD invokes the register public emergency service department (register_PESD) function.
Selection of Public Emergency Service Department using Queuing Model
In this section, a queuing theory model for selecting the PESD for an igniting smart-home is defined, as shown in Fig. 5. The queuing model helps the service controller identify an appropriate PESD with a minimum request queue [33]. The selected PESD receives the PES request for the igniting smart-home and travels to the smart-home location to protect against catastrophe. The classical M/M/n queuing theory model is used to represent the above scenario, where smart-home PES requests follow the first-come-first-serve (FCFS) queuing discipline. Recall from Kendall's notation that the first and second symbols represent the inter-arrival time and service time distributions, respectively: the inter-arrival time follows a Poisson process, whereas the service time follows a Markovian (exponential) distribution. In the proposed queuing scheme, n denotes the number of PESDs, one per sub-area in the smart-city. It is assumed that each PESD maintains its own request queue to handle PES requests and that this information is centrally maintained at the service controller. The waiting time for an igniting smart-home to receive a PES request confirmation depends on two parameters: the local computation of the service controller to identify a PESD with the minimum request queue, and the service rate of the PESD. In the following, the request queue length of a PESD and the waiting time of a smart-home are evaluated using standard queuing relations [34,35]. The utilization ρ(i) of the i-th PESD in serving igniting smart-homes' PES requests is given by Eq. (5).
where λ(i) and μ(i) are the inter-arrival rate and service rate of the i-th PESD, respectively. The queue length of the i-th PESD is given by Eqs. (6) and (6a).
where the two quantities are the request queue length and the probability of idleness of the i-th PESD, respectively, and n is the total number of PESDs. The waiting time of an igniting smart-home's PES request before it receives any confirmation from a PESD is given by Eq. (7).
where the waiting time refers to the j-th igniting smart-home's PES request residing in the i-th PESD's request queue while the preceding PES requests are served. To specify the functionality of the queuing theory model in more detail, two cases are considered, as discussed below.
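Before walking through the cases, a minimal Python sketch of the queue metrics above may help. It assumes each PESD is approximated by an independent single-server M/M/1 queue with standard textbook formulas, so the exact multi-server expressions behind Eqs. (5)-(7) may differ, and the arrival and service rates used are illustrative rather than measured values.

```python
# Minimal sketch: per-PESD queue metrics under an assumed M/M/1 approximation.
# The paper's exact forms of Eqs. (5)-(7) may differ; rates below are illustrative.

def pesd_metrics(arrival_rate: float, service_rate: float) -> dict:
    """Return utilization, idle probability, queue length, and waiting time."""
    rho = arrival_rate / service_rate            # utilization (cf. Eq. (5))
    if rho >= 1.0:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    p_idle = 1.0 - rho                           # probability of idleness (cf. Eq. (6a))
    queue_len = rho ** 2 / (1.0 - rho)           # expected request queue length (cf. Eq. (6))
    wait = queue_len / arrival_rate              # waiting time via Little's law (cf. Eq. (7))
    return {"rho": rho, "p_idle": p_idle, "queue_len": queue_len, "wait": wait}


if __name__ == "__main__":
    # Illustrative inter-arrival/service rates (requests per hour) for three PESDs.
    for i, (lam, mu) in enumerate([(4.0, 6.0), (5.0, 7.0), (3.0, 5.0)], start=1):
        print(f"PESD {i}: {pesd_metrics(lam, mu)}")
```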
Step1: The IoT controller continuously fetches the data of the IoT devices (i.e., fire detectors and smoke detectors) in the smart-homes of its sub-area. These IoT devices are connected to the IoT gateway, which sends their data to the IoT controller. Predefined threshold values are set for the fire detector and the smoke detector. If the fire detector and smoke detector readings reach their threshold values, a PES request for the igniting smart-home is forwarded from the IoT gateway to the IoT controller and then to the blockchain.
Step2: The service controller receives the PES request via the blockchain. It first compares the smart-home's sub-area with the sub-areas of all PESDs. After this comparison, two cases are considered as follows.
1.) Case1: If the request queue of the PESD in the matching sub-area is shorter than those of all other PESDs, the service controller selects that PESD and sends the blockchain request to it. The PESD receives the blockchain request and sends its service provider to the igniting smart-home location.
2.) Case2: If the request queue of the PESD in the matching sub-area is longer than the others, the service controller compares the request queues of all PESDs and selects the one with the minimum request queue. The selected PESD receives the blockchain request from the service controller and sends its service provider to the igniting smart-home location.
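The two cases can be condensed into a short selection routine. The sketch below is only an illustration of that logic: the record layout, field names, and queue values are hypothetical, and in BMSTP the equivalent computation is performed locally by the service controller.

```python
# Hypothetical PESD records held by the service controller: sub-area and queue length.
pesds = [
    {"id": "PESD-1", "sub_area": "A", "queue_len": 5},
    {"id": "PESD-2", "sub_area": "B", "queue_len": 2},
    {"id": "PESD-3", "sub_area": "C", "queue_len": 4},
]

def select_pesd(home_sub_area: str, pesds: list) -> dict:
    """Case 1: keep the matching sub-area PESD if its queue is the shortest;
    Case 2: otherwise pick the PESD with the minimum request queue."""
    shortest = min(pesds, key=lambda p: p["queue_len"])
    matched = next((p for p in pesds if p["sub_area"] == home_sub_area), None)
    if matched is not None and matched["queue_len"] <= shortest["queue_len"]:
        return matched                 # Case 1
    return shortest                    # Case 2

print(select_pesd("A", pesds)["id"])   # falls to Case 2 here: PESD-2 has the shortest queue
```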
Step3: A transaction is emitted by the service controller to confirm the arrival of the selected PESD's service provider at the igniting smart-home location.
Step4: After the PESD service provider completes the PES request, the IoT controller sends a transaction on the blockchain on behalf of the igniting smart-home. This transaction contains a rating for the PESD service provider that indicates the satisfaction level of the igniting smart-home, which is later used to generate the reputation value of the PESD.
Reputation Management for Public Emergency Service Department
To evaluate the reputation value of a PESD, a simple reputation management model is used [36]. The reputation value depends strongly on the smart-homes: after being served by a selected PESD service provider, a smart-home generates a rating for it, and these ratings are used to evaluate the final reputation value of the PESD. The reputation management model lets smart-homes see the reputation values of different PESDs. It is also beneficial for a PESD to see its own performance and, if required, take the necessary actions to improve its PES in the future. To assign a reputation value to a PESD through an igniting smart-home, two scenarios are assumed: in-time-PES and delayed-PES, as described below.
Step1: The distance between the selected i-th PESD and the j-th igniting smart-home is given by Eq. (8).
where the location coordinates of the i-th PESD and the j-th igniting smart-home are denoted as (x_i, y_i) and (x_j, y_j), respectively. The reaching time of the selected i-th PESD's service provider to the j-th igniting smart-home is calculated from Eq. (8) and is given by Eq. (9).
where an average speed of the i-th PESD is assumed. Step2: The rating for the i-th PESD generated by the j-th igniting smart-home is given by Eq. (10).
where two parameters control the lower bound and the rate of change of the rating value, respectively. The expected reaching time of the i-th PESD at the j-th igniting smart-home is given by Eq. (11).
where the first term is the waiting time of the j-th igniting smart-home's PES request in the i-th PESD's queue before any confirmation from a PESD service provider is received, and the second term is the additional time taken by the i-th PESD while traveling to the j-th igniting smart-home location due to heavy traffic. The expected reaching time is calculated and attached by the service controller while sending the confirmation of the PESD service provider through the blockchain. Once the i-th PESD service provider reaches the j-th igniting smart-home location, the actual reaching time is recorded; it is calculated in the same way as Eq. (11) by substituting the actual travel time.
1.) Case1: In-time-PES: In this case, the actual reaching time is compared with the expected reaching time; if the actual reaching time is shorter, a positive rating for the i-th PESD is generated by the j-th igniting smart-home, as given by Eq. (12).
where the rating generated by the j-th igniting smart-home for the i-th PESD is multiplied by +1 to obtain a positive rating.
2.) Case2: Delayed-PES: If the actual reaching time exceeds the expected reaching time, the rating is multiplied by −1 to generate a negative rating, as given by Eq. (13). Step3: Using Eqs. (12) and (13), the service controller obtains the final reputation value of the i-th PESD, as given by Eq. (14), where the averaging is performed over a time interval of twenty-four hours and over the total number of served smart-homes.
Step4: The service controller uploads this reputation value to the blockchain for further use.
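To make the rating and reputation steps concrete, the following sketch strings Eqs. (8)-(14) together in Python. Because the exact rating function of Eq. (10) and the exact averaging of Eq. (14) are not reproduced here, simple assumed forms are used (a fixed base rating with a ±1 sign, and a plain daily mean); the coordinates, speed, and times are illustrative.

```python
import math

def distance(pesd_xy, home_xy):
    """Euclidean distance between PESD and igniting smart-home (cf. Eq. (8), assumed form)."""
    return math.hypot(pesd_xy[0] - home_xy[0], pesd_xy[1] - home_xy[1])

def expected_reaching_time(queue_wait, travel_time, traffic_delay=0.0):
    """Expected reaching time (cf. Eq. (11), assumed form): wait + travel (+ traffic)."""
    return queue_wait + travel_time + traffic_delay

def signed_rating(base_rating, actual_time, expected_time):
    """In-time-PES gives +rating, delayed-PES gives -rating (cf. Eqs. (12)-(13))."""
    return (+1 if actual_time <= expected_time else -1) * base_rating

def final_reputation(day_ratings):
    """Final daily reputation (cf. Eq. (14), assumed form): mean of the day's ratings."""
    return sum(day_ratings) / len(day_ratings) if day_ratings else 0.0

# Illustrative values only (distances and times are in arbitrary units).
d = distance((2.0, 3.0), (5.0, 7.0))
travel = d / 40.0                              # Eq. (9): distance over an assumed average speed
t_expected = expected_reaching_time(0.10, travel, traffic_delay=0.05)
t_actual = 0.20
print(final_reputation([signed_rating(0.8, t_actual, t_expected)]))
```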
Smart Contract
In this section, the functionality of the different smart contract functions is defined. These functions are register_IC, register_SC, register_SH, register_PESD, call_PESD_serviceProvider, and two reputation-related functions for rating generation and final reputation updation.
Registration of IoT Controller (register_IC)
Step1: The IoT controller calls the register IoT controller (register_IC) smart contract function through the client to become a legitimate entity. To complete the registration process, it passes the required information, including the IoT controller's valid identity and its sub-area, as given by Eq. (15).
Step2: The endorsing peer receives and processes the registration request. The endorsing peer checks the provided information, uses its digital certificate to sign the registration request, and sends it back to the client in a blockchain transaction, as given by Eq. (16).
The client collects the signed transaction and forwards it to the orderer. The orderer verifies the collected information and broadcasts a new block of valid transactions to the committing peers of every fabric organization, as given by Eq. (17).
where the transaction identity appears in Eq. (17). Step3: The committing peer informs the client of the successful registration and generates a pair of public-private keys for the IoT controller. The public key is used to uniquely identify the IoT controller on the blockchain.
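For readers who want to see the registration flow end to end, the plain-Python mock below walks through the register_IC steps (proposal, endorsement, ordering, commitment, key return). It is only a sketch of the logic: Hyperledger Fabric chaincode is normally written in Go, Node.js, or Java, and none of the names below belong to Fabric's actual API. The other registration functions (register_SC, register_SH, register_PESD) follow the same pattern with different payloads.

```python
import uuid

# Plain-Python mock of the register_IC flow (cf. Eqs. (15)-(17)); not Fabric's real API.

def register_ic(ic_identity: str, sub_area: str, ledger: list) -> dict:
    """Propose, endorse, order, and commit an IoT-controller registration transaction."""
    tx = {"tx_id": str(uuid.uuid4()), "type": "register_IC",
          "identity": ic_identity, "sub_area": sub_area}
    tx["endorsement"] = f"signed({tx['tx_id']})"   # endorsing peer signs the proposal
    ledger.append(tx)                              # orderer batches it; committing peers append it
    # The committing peer's notification returns a key pair identifying the IoT controller.
    return {"public_key": f"pk-{ic_identity}", "private_key": f"sk-{ic_identity}"}

ledger = []
keys = register_ic("IC-01", "sub-area-3", ledger)
print(keys["public_key"], len(ledger))
```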
Registration of Service Controller (register_SC)
Step1: The service controller invokes the register service controller (register_SC) smart contract function via the client. The service controller provides the information necessary to complete registration, including its valid identity, its category (i.e., fire brigade as the PES), and the predetermined threshold values, as given in Eq. (18).
Step2: The endorsing peer collects the registration request and signs it using its digital signature. The signed registration request is returned to the client in a blockchain transaction, as given by Eq. (19).
The client receives the signed transaction and addresses it to the orderer. The orderer checks the received information and broadcasts a new block to the committing peers so that they update their ledgers with the new information, as given by Eq. (20).
Step3: The committing peer updates the client, and a pair of public-private keys is obtained for the service controller.
Registration of Smart Home (register_SH)
Step1: The registration of a smart-home is performed indirectly through the IoT controller. The smart-home calls the API of the register smart home (register_SH) smart contract function via the IoT controller. The smart-home provides the necessary information, including the IoT controller's public key, the smart-home location, the smart-home sub-area, the category, the smart-home owner's phone number, the fire detector identity, the smoke detector identity, the fire alarm identity, and the IoT gateway identity, as given by Eq. (21).
Step2: The IoT controller receives this information and signs the registration request using its private key. The signed information is forwarded to the endorsing peer, as given by Eq. (22).
The endorsing peer verifies the IoT controller and signs the registration request using its digital signature. The signed transaction is sent back to the client, as given by Eq. (23).
The client forwards this signed transaction to the orderer. The orderer validates the information and generates a new block, which is broadcast to the committing peers, as given by Eq. (24).
Step3: The committing peer informs the client and returns a pair of public-private keys for the smart-home IoT gateway. The IoT controller informs the smart-home of the successful registration and provides the same key pair. The IoT controller stores the public key of the smart-home IoT gateway and the identities of its IoT devices in its ACL.
Registration of Public Emergency Service Department (register_PESD)
Step1: The PESD invokes the register public emergency service department (register_PESD) smart contract function via the service controller, providing the information required for its registration, as given by Eq. (25).
Step2: The service controller receives the PESD registration request information and signs it using its private key. The service controller transfers this signed registration request to the endorsing peer, as given by Eq. (26).
The endorsing peer checks the received information, signs the transaction using its digital signature, and returns it to the client, as given by Eq. (27).
The client forwards this signed request transaction to the orderer. The orderer generates a new block and passes it to the committing peers so that they update their ledger information, as given by Eq. (28).
Step3: The committing peer notifies the client of the successful registration of the PESD and returns a pair of public-private keys. The service controller informs the PESD, forwards the same key pair to it, and stores the PESD's registration details in its local database.
Call Public Emergency Service Department Service Provider (call_PESD_serviceProvider)
Step1: The IoT gateway uses the key pair obtained at registration and sends the IoT device data to the IoT controller. The IoT controller continuously monitors these smart IoT device data. When a threshold is reached, the IoT controller invokes the call public emergency service department service provider (call_PESD_serviceProvider) smart contract function on behalf of the smart-home. The function call contains the necessary information, including the smart-home's identifiers, location, sub-area, and the detector threshold values, and this information is encrypted using the IoT controller's key, as given by Eq. (29).
The PES request of the smart-home is broadcast to the blockchain through the fabric organization, as given by Eq. (30). Step2: The service controller retrieves the required information from the request and selects the PESD service provider with the minimum waiting time, as described in Section 4.2.2. Step3: After selecting the PESD, the service controller proposes a transaction that includes the selected PESD and the expected reaching time, as given by Eq. (31).
The orderer receives the transaction information, generates a new block, and broadcasts it. The other fabric organizations receive this information and use it later for rating generation, as given by Eq. (32). Step2: The rating information is forwarded on the blockchain through the IoT controller's fabric organization for further action, as given by Eq. (34).
Step3: The orderer receives the reputation updation information and generates a new block to broadcast the information to the other fabric organizations, as given by Eq. (35).
Final Reputation Updation for Public Emergency Service Department
Step1: At the end of the day, the service controller evaluates the final reputation using the ratings generated by the igniting smart-homes whose PES requests were fulfilled, by calling the final reputation updation for public emergency service department smart contract function. The function parameters include the PESD and its final reputation value, and they are encrypted using the service controller's key, as given by Eq. (36).
Step2: The service controller's fabric organization receives this information and forwards the signed transaction to the blockchain, as given by Eq. (37).
Step3: The orderer processes this information to create a new block and broadcasts it to the other fabric organizations, as given by Eq. (38).
Simulation Results and Discussion
In this section, various simulations are performed to evaluate the performance of the proposed BMSTP and to demonstrate PESD functionality. Hyperledger Fabric v1.2 is used to implement the smart contract functions, and Python is utilized to call the blockchain API. We assumed an eight-digit identification code, produced by a random number generator, for the IoT devices and the IoT gateway connected to each smart-home; in this way, the IoT controller identifies the IoT devices and IoT gateway connected to the smart-home and stores the information in the ACL. The proposed system model consists of eight fabric organization docker nodes: seven fabric organizations are associated with IoT controllers, while the eighth is connected with the service controller. In Hyperledger Fabric, the cryptogen tool is used to generate the certificates and the pairs of public-private keys for the various entities, including the endorsing peers, committing peers, orderer, and clients, whereas the configtxgen tool is used to generate the genesis block, which contains the blockchain configuration with a channel. After the successful setup of Hyperledger Fabric, the smart contract with its various functions is installed. A comparison between the expected reaching time and the actual reaching time of the PESDs is represented by the blue and red lines in Fig. 5; the values of these two parameters are evaluated using Eq. (11). The information is generated by igniting smart-homes located in different sub-areas for seven PESDs. As indicated in the graph, the second and third PESDs are unable to arrive before the expected reaching time; hence, they receive a negative rating from the allocated igniting smart-homes, whereas the other PESDs receive a positive rating from their respective smart-homes after fulfilling the PES request. In Fig. 6, a one-to-one relationship between the ratings of the igniting smart-homes and the PESDs is represented. The seven PESDs receive either a positive or a negative rating according to their service fulfillment time. For this graph, the data are generated using Eqs. (12) and (13) with the parameter values of Table 2. The ratings for the PESDs lie in a range spanning positive and negative values; a positive rating indicates that the PESD fulfilled the PES request of the igniting smart-home in time, and vice versa. After successful completion of the PES request of an igniting smart-home, the IoT controller generates a rating for the allocated PESD on behalf of the smart-home on the blockchain. Later, this range of ratings is used to calculate the final reputation of the PESDs, shown in Fig. 7. To evaluate the final reputation of each PESD, we used the data of Fig. 6 and inserted them into Eq. (14). According to the figure, the first PESD has the highest reputation among all PESDs because it fulfilled more in-time-PES requests, whereas the seventh PESD has the lowest rank because it served more delayed-PES requests to igniting smart-homes; due to the delayed-PES, the igniting smart-homes generate negative ratings for that PESD. This reputation management is useful for analyzing the reason for the low rank of a PESD so that the necessary actions can be taken to improve its performance in real-time. Fig. 8 shows the relation between request queue length and the PESDs. The request queue lengths at the PESD locations are generated for six days, indicated by lines of different colors; the request queue length is obtained by inserting the inter-arrival rate and service rate into Eqs. (6) and (6a). For the upcoming PES requests of multiple igniting smart-homes, this request queue is used to decide which PESD accepts the next PES request.
For example, the request queue lengths for day five are {50, 46, 50, 48, 50, 50, 49}. When a new PES request reaches the IoT controller, the sub-area is first matched: if the matched PESD has the minimum request queue length, that PESD is selected; otherwise, the request is forwarded to the PESD with the shortest request queue among all (i.e., the second PESD in this example). The comparison between the BMSTP architecture with and without the queuing model is presented in Fig. 9. It is evident from the results that the PES request load is adequately distributed among the available PESDs according to their request queue lengths; hence, the queuing model helps minimize the waiting time of igniting smart-homes as the number of PES requests increases. Without the queuing model, every PESD handles only the PES requests of its own sub-area, which can lead to long waiting times for igniting smart-homes and congest a PESD under a high PES request load. As shown in the figure, there is a drastic change in the request queue of each PESD when the queuing model is not used; for example, the second and seventh PESDs carry less load than the others, capacity that could be better utilized to handle massive PES requests and protect against disaster in real-time. Fig. 10 represents the relation between queue length and processing time (utilization) for the PESDs. We assumed a service rate randomly distributed between 5 and 7 requests per hour for the PESDs; the processing time and request queue length are evaluated using Eqs. (5) and (6). As shown in the figure, the blue bars indicate the request queues, whereas the orange bars indicate the processing time of each PESD. Because it has more service providers (i.e., fire brigades), the first PESD has a shorter request queue and can maximize its resource utilization for future PES requests, whereas the second PESD has a longer request queue because of fewer service providers or poorer utilization of its available resources. The processing speed of each PESD varies according to the number of available PESD service providers, and the remaining PESDs show a balance between request queue and processing speed in fulfilling PES requests. To analyze the behavior of the proposed system model, a comparison of the end-to-end (E2E) delay with and without blockchain is indicated in Fig. 11. We consider the sum of the request and response times as the E2E delay. To generate the with-blockchain curve, the request time is calculated for an igniting smart-home PES request sent from the IoT gateway to the IoT controller and then to the blockchain; similarly, for the response-time estimation, the confirmation of the PESD service provider's arrival sent from the blockchain to the IoT controller is observed. In the without-blockchain comparison, the blockchain time is eliminated from the request and response times, and direct communication takes place between the IoT controller and the service controller for the confirmation and arrival of the PESD service provider. For this graph, different distribution functions are considered to characterize the proposed system.
Conclusion and Future Work
Blockchain holds promise for transparency, trust, and privacy in IoT-based smart-cities. However, applying blockchain directly to IoT networks is not a good option because of numerous challenges, including resource consumption, processing time, storage, and scalability. In this paper, we proposed a three-layered BMSTP architecture that helps provide reliable PES. In the proposed system model, the ECS and a queuing theory model are used to gain fast access to PES resources. The benefits of the ECS are off-chain storage, proper management of IoT devices through the ACL, and scalability, whereas the queuing model helps in selecting an appropriate PESD. The overall system model is designed using a private blockchain, which maintains records of the IoT controllers, the service controller, smart-homes, and PESDs in a distributed ledger. The transfer of PES requests and the arrival of the PESD service provider are ensured through the smart contract functions. We extended this work by maintaining reputation management for the PESDs: each smart-home rates a PESD according to its service fulfillment and generates either a positive or a negative rating accordingly. The results indicate that our system model is sufficient to handle PES requests in real-time and ensures minimum waiting for igniting smart-homes. In the future, instead of a single PES, multiple PESs may be added together on a single blockchain platform so that users can take advantage of PES with better convenience.
Declarations
1. Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
2. The authors declare that there is no conflict of interests regarding the publication of this paper.
ACKNOWLEDGMENT
This work is supported by the SC&SS, Jawaharlal Nehru University, New Delhi.
|
v3-fos-license
|
2021-09-09T20:47:33.113Z
|
2021-07-28T00:00:00.000
|
237550969
|
{
"extfieldsofstudy": [
"Geology",
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2073-8994/13/8/1375/pdf",
"pdf_hash": "999b00aa68a966d9f3409ec59115645e74443e2e",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42489",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"sha1": "0bf3c99ed339697862e96157b30a03036befc04c",
"year": 2021
}
|
pes2o/s2orc
|
Shaking Table Test on the Tunnel Dynamic Response under Different Fault Dip Angles
Fault-crossing tunnels are often severely damaged under seismic dynamics. Study of the dynamic response characteristics of tunnels crossing faults is thus of great engineering significance. Here, the Xianglushan Tunnel of the Central Yunnan Water Diversion Project was studied. A shaking table experimental device was used, and four sets of dynamic model tests of deep-buried tunnels with different fault inclination angles were conducted. Test schemes of model similarity ratio, similar material selection, model box design, and sine wave loading were introduced. The acceleration and strain data of the tunnel lining were monitored. Analysis of the acceleration data showed that when the input PGA was 0.6 g, compared with the ordinary tunnel, the acceleration increased by 117% when the inclination angle was 75°, 127% when the inclination angle was 45°, and 144% when the inclination angle was 30°. This indicates that the dynamic response of the cross-fault tunnel structure was stronger than that of the ordinary tunnel, and the effect was more obvious as the fault dip angle decreased. Analysis of the strain data showed that the strain response of the fault-crossing tunnels was more sensitive to the fault dip. The peak strain and its increase in fault-crossing tunnels were much larger than those of ordinary tunnels, and smaller fault dips led to larger increases in the strain peak; consequently, the tunnel would reach the ultimate strain and break down at a smaller input PGA. Generally, the influence of fault inclination on the dynamic response of the tunnel lining should receive increased consideration in the seismic design of tunnels.
Introduction
Tunnel structures are restrained by surrounding rocks. It has long been thought that tunnel structures have better seismic resistance and sustain less damage compared with above-ground structures [1,2]. However, in many earthquake disasters, such as the 1999 Chi-Chi earthquake in Taiwan, China, and the 2008 Wenchuan earthquake in China, tunnels displayed varying degrees of damage. Fault-crossing tunnel structures were easily affected by the earthquake fault zone, the shear deformation of the lining structure was large, and the surrounding rock quality was poor [3][4][5][6]. With the continuous advancement of China's western development strategy, tunnels often inevitably cross fault fracture zones and weak interlayers, and the western region features frequent seismic activities. Therefore, study of the dynamic response law and failure mechanism of cross-fault tunnels is critically important for ensuring tunnel safety.
Tunnels crossing faults and fracture zones are the weak links of tunnel seismic resistance. Even if effort is made to avoid potentially problematic areas, high-intensity seismic zones and fracture zones cannot be completely avoided [7][8][9][10]. Consequently, much research has focused on the seismic dynamic response and anti-seismic problems of tunnels crossing faults, and several insightful conclusions have been made. Kun et al. [11] investigated the dangerous area around a shallow subway tunnel in weak fault rock and found that the deformation of the tunnel causes surface settlement and greater damage to the surface structure. Corigliano et al. [12] used a pseudo-static method combined with a kinematics method to analyze the seismic dynamic response of deep-buried tunnels across faults. Yang et al. [13] used the three-dimensional discrete element model to describe the nonlinear dynamic failure process of the tunnel fault system under the action of strong earthquakes. Based on the viscous-spring artificial boundary finite element method, Huang et al. [14] revealed that the most serious plastic damage occurred at the point where the tunnel crosses the fault under seismic dynamics, followed by the fault hanging wall. Guan et al. [15] conducted a shaking table model test of a tunnel crossing a large section, studied the seismic response and seismic performance of a large section mountain tunnel, and concluded that the existence of a large section significantly changed the seismic performance of the tunnel. Baziar et al. [16] conducted centrifugal model tests of tunnels with different buried depths through faults and characterized the interaction effects of tunnels with different depths. Shen and Yan et al. [17,18] found that the tunnel structure at the fault sustains more serious damage and that the lining of the hanging wall of the fault is more likely to be damaged than the lining of the footwall, respectively.
The research described above has mainly focused on the influence of the fault zone on the dynamic response characteristics of the tunnel. However, the location, thickness, and dip of the fault zone also have a major effect on the tunnel. Jeon et al. [19,20] studied the influence of fault location on tunnel stability and concluded that when the fault was located on the upper part of the tunnel, the displacement of the top of the tunnel was greater than that when the fault was located on the left side of the tunnel. Jeon [21] conducted a comparative study including a scale model test and numerical simulation and found that the deformation amount and the size of the plastic zone increase as the distance between the fault and tunnel decreases. Based on the finite element method, Zhang et al. [22] studied the influence of fault location and fault thickness on the deformation, stress, and plastic zone of the tunnel surrounding rock. Ardeshiri-Lajimi et al. [23] established a numerical model of the fault-underground cavern system under seismic excitation and obtained the influence law of the fault dip and location on the underground cavern. Liu et al. [24] considered the influence of the normal fault inclination on the dynamic response of the tunnel lining and conducted multiple sets of shaking table model tests. They found that the range of the fracture shear zone increases as the fault dip decreases and proposed corresponding reinforcement measures.
Most previous research has focused on the influence of fault location and thickness on tunnels; by contrast, the fault dip angle has been seldom studied. Similarly, the fault dip is a significant factor affecting the stability of the tunnel. The dip of the fault affects the propagation path of the dynamic load, resulting in different dynamic responses at different positions of the tunnel lining, and at the same position of the tunnel lining, the dynamic response will be different due to the different fault dip. Therefore, it is worth paying attention to exploring the dynamic response law of tunnels with different fault inclination angles.
Here, the Xianglushan Tunnel was studied, and four sets of tunnel model shaking table tests (three sets of fault-crossing tunnels and one set of ordinary tunnels) were carried out. In Section 2, the test device of the tunnel shaking table is introduced. In Section 3, the model design of the fault-crossing tunnels and the ordinary tunnel is introduced. According to the acceleration and strain data, the differences in the dynamic response of the tunnel lining under different fault inclination angles are compared and analyzed in Section 4. Finally, the dynamic response and failure mechanism of tunnels crossing different fault dips are briefly summarized in Section 5 to provide a reference for the seismic fortification of tunnels crossing faults.
Shaking Table
The shaking table device of the experimental base of the Nanjing Institute of Hydraulic Research was used in this experiment. The test system was an electric vibration test system with a horizontal sliding table (DLS), composed of a DL-3000 electric vibration table body, SA-40
Model Box
The rigid model box was used in the model test. As shown in Figure 1a, the size of the test model box was 1000 mm × 950 mm × 850 mm (length × width × height) (inner dimensions). It was welded by 20 mm plexiglass plates and ribbed around, which ensured that the entire model test system was stiff and strong. A drill was used to make holes in the bottom glass plate, and 34 high-strength bolts were used to fix the box on the shaking table. The reliability of the box moving with the shaking table during the vibration process was guaranteed. For the boundary processing of the model box, 10 cm-thick polystyrene foam was used as the boundary condition around the model box, and the outer layer of the foam board was covered with a layer of polyethylene plastic. This way, the contact friction between the surrounding rock and the foam board was reduced [25,26]. At the bottom of the model box, a layer of gravel was bonded by solid glue. The relative slip between the surrounding rock and the bottom plexiglass plate could be effectively prevented, and the boundary conditions of the tunnel prototype could be better reproduced ( Figure 1b).
Tunnel Prototype
Xianglushan Tunnel is a controlling project in the Central Yunnan Water Diversion Project in Yunnan Province, China. The regional geological conditions are complex: many fault fracture zones with different dip angles are crossed by tunnels, and the fault fracture zone is characterized by poor surrounding rock geological conditions as well as stratum transition from soft rock to hard rock or from hard rock to soft rock. These areas were the places where the tunnel seismic damage was concentrated.
According to the geological conditions of the prototype tunnel, the model test tunnel had a circular cross-section with a diameter of 8.4 m, grade V surrounding rock, and fault angles of 30°, 45°, and 75°. Three sets of cross-fault dip angle tunnel model tests were carried out, in addition to a set of ordinary tunnel model tests (not crossing the fault). The length of the model tunnel is 80 cm, the outer diameter of the lining is 9.4 cm, and the thickness of the lining is 1 cm. The surrounding rock conditions for ordinary tunnels were the same as those for fault-crossing tunnels. The ordinary tunnel model test was used as the control group, and the influence of the cross-fault dip on the tunnel response characteristics was revealed.
Design of the Similitude Relation
The shaking table model test was a dynamic test. The physical and mechanical similarities of the static model and the dynamic model should both be satisfied. According to the law of similarity, the equation analysis method and dimensional analysis method were employed [27,28], and related physical similarity parameters could be obtained by derivation. The influence of the inclination of the fault on the tunnel under the action of an earthquake was studied in the test. Therefore, the model test and the prototype needed to be in the same gravity field. Combined with the relevant parameters of the shaking table test instrument, the length, density, and acceleration were used as the control variables. According to the prototype size of the tunnel and the size of the shaker device, the geometric similarity ratio, density similarity ratio, and acceleration similarity ratio were determined. According to the Buckingham theorem, the similarity relations and similarity ratios of other physical quantities were derived ( Table 2).
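As an illustration of how the secondary similarity ratios follow from the three control variables, the sketch below applies the standard dimensional-analysis relations for length, density, and acceleration. The control ratios passed in are placeholders only; the actual values used in the test are those listed in Table 2.

```python
import math

def derived_similarity_ratios(c_length: float, c_density: float, c_accel: float) -> dict:
    """Derive secondary similarity ratios from the length, density, and acceleration
    control ratios using dimensional analysis (Buckingham pi theorem)."""
    c_time = math.sqrt(c_length / c_accel)
    return {
        "time": c_time,
        "frequency": 1.0 / c_time,
        "velocity": math.sqrt(c_length * c_accel),
        "strain": 1.0,                                   # dimensionless
        "stress / elastic modulus": c_density * c_accel * c_length,
        "mass": c_density * c_length ** 3,
        "force": c_density * c_accel * c_length ** 3,
    }

# Placeholder control ratios (prototype : model); see Table 2 for the values actually used.
print(derived_similarity_ratios(c_length=60.0, c_density=1.0, c_accel=1.0))
```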
Model Materials
According to the results of the on-site rock core drilling test, the tunnel crosses metamorphic rocks (schist sandwiched mainly by shallow metamorphic limestone), magmatic rocks (mainly basalt and andesite), and sedimentary rocks (carbonate and sand-mudstone). The surrounding rock was mainly grade V. According to the engineering rock mass test method, a batch of 50 mm × 100 mm surrounding rock test specimens was made, and the RMT150 testing machine was used to perform uniaxial compression tests on the drilled cores. The mechanical parameters are shown in Table 3. With the similarity relationship and similarity ratio of the shaking table model test given in the previous section, the similar material of the tunnel surrounding rock was determined. Quartz sand, cement, and detergent were used as similar materials for the surrounding rock in the test [15]. Through several indoor uniaxial compression tests, the mass ratio of the similar materials for grade V surrounding rock was determined as m(quartz sand):m(cement):m(detergent) = 5.6:2.8:1 (Table 3). The lining material was a gypsum-based material with properties similar to those of the prototype C25 concrete lining. Gypsum, quartz sand, barite powder, and water prepared in a certain proportion were used as the lining model materials. Standard cylindrical specimens of 50 mm × 100 mm were used, and the above-mentioned materials in different proportions were poured into prefabricated molds. After the specimens achieved a certain strength, the molds were removed and the specimens were placed in an incubator for curing, after which uniaxial compression tests were conducted on the RMT150 testing machine. Finally, the mass ratio of gypsum, quartz sand, barite powder, and water was 1.0:1.0:1.6:1.2. Cement, fine sand, and sawdust with a volume ratio of 1:1:1 were used as the materials of the fault fracture zone (Table 4). For the fault layout, the inclination angle, position, and thickness of the fault were determined by two parallel diaphragms placed at the previously marked and calculated positions and tilted to the angle required for the test; the fracture zone material was then poured in layers and compacted layer by layer.
Sensor Layout
Accelerometer sensors and resistance strain sensors were used for every test condition. Seven accelerometers (model JY901) with a measuring range of ±16 m/s², a sensitivity of 2000 mV/g, and dimensions of 20 mm × 20 mm × 5 mm were used. Forty-eight resistance strain gauges (model BF350) with a resistance of 120 ± 0.2 Ω, dimensions of 40 mm × 8 mm, a sensitivity of 2.0 ± 1%, and accuracy class A were used. The strain gauges were connected to dedicated copper wires by connectors.
The sensor layout is shown in Figure 2. Figure 2a shows the layout plan of the acceleration measuring points on the transverse section of the tunnel; the cross-section layout was the same for all four test conditions. A0 was the measuring point on the shaking table, and A1-A6 were the monitoring points on the different sections, arranged at the vault and the inverted arch. Figure 2d shows the layout of the strain gauges on the three monitoring surfaces, with a symmetrical arrangement on the inside and outside.
Model Fabrication
To better simulate the Xianglushan Tunnel, the tunnel lining, surrounding rock, and broken zone materials were prefabricated in sections and poured in layers. (1) For the lining production, the steel mold was prefabricated in sections and cured for 7 days at room temperature. The sensor was installed at the marked position, and then the section lining was bonded and formed with solid glue. (2) The surrounding rock material was mixed on site, and the elevation position of the layered pouring was marked in the model box in advance; the layered layer was used for compaction. The thickness of each compaction was 100 mm to ensure compact compaction [29]. (3) The crushing belt material was also mixed on site, and two partitions were made, which were placed in parallel according to the calculated position. This could prevent the surrounding rock material from slipping to the broken zone area; the broken zone material was poured by layered pouring and layered compaction. (4) According to the geological conditions of the site, the upper covering layer of the prototype tunnel was simulated with a grade V surrounding rock material with a thickness of 20 cm. After all of the pouring was completed, it was cured at room temperature for two days, and then the vibration test was carried out. The concrete construction drawing of the model is shown in Figure 3.
Test Schemes
Sine waves of different frequencies were used as the input seismic motion, and the time history and Fourier spectrum of the input motion are shown in Figure 4. There were a total of 18 test cases in which the same sine wave was multiplied by an increasing factor; the peak ground acceleration (PGA) was 0.1 g, 0.2 g, 0.4 g, 0.6 g, 0.8 g, and 1.0 g (g = 9.81 m/s²). The input frequencies of the model were 5 Hz, 7.5 Hz, and 10 Hz. Transverse vibration under uniform seismic excitation was used in the test. This model test was a destructive test: before the lining structure was destroyed, as many of the above working conditions as possible were loaded. The incremental sine-wave peak-intensity loading method was used in the test; at each PGA there were three loading frequencies, and the frequencies were loaded sequentially from low to high. The test plan is shown in Table 5.
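The loading schedule can be expressed compactly as the 6 PGA levels crossed with the 3 frequencies. The sketch below generates the corresponding sine acceleration time histories; the PGA levels and frequencies are taken from the test plan, while the duration and sampling rate are arbitrary assumptions for illustration.

```python
import numpy as np

G = 9.81  # m/s^2

def sine_excitation(pga_g: float, freq_hz: float, duration_s: float = 10.0, fs: int = 200):
    """Sine acceleration time history scaled to the target PGA (duration and fs are assumed)."""
    t = np.arange(0.0, duration_s, 1.0 / fs)
    return t, pga_g * G * np.sin(2.0 * np.pi * freq_hz * t)

# 18 cases: PGA increases stepwise, and each PGA is loaded at the three frequencies, low to high.
cases = [(pga, f) for pga in (0.1, 0.2, 0.4, 0.6, 0.8, 1.0) for f in (5.0, 7.5, 10.0)]
for i, (pga, f) in enumerate(cases, start=1):
    t, a = sine_excitation(pga, f)
    print(f"case {i:2d}: PGA = {pga} g, f = {f} Hz, peak = {a.max() / G:.2f} g")
```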
Boundary Effect
The surrounding rock near the prototype tunnel is in a semi-infinite state, whereas the model tunnel is in a constrained state. In the model box, the restraint of the model tunnel can be effectively alleviated by the surrounding flexible boundary materials. Therefore, a deviation index based on the 2-norm is introduced to quantitatively describe the boundary effect [30,31]. It can be calculated by the following formula:

μ = ||a_t − a_r||₂ / ||a_r||₂

where μ is the 2-norm deviation index, and a_t and a_r are the acceleration signals of the acceleration sensor on the lining and the acceleration sensor on the vibrating table, taken as the target signal and the reference signal, respectively. In this model experiment, a_t corresponds to A1, A3, and A5, and a_r corresponds to A0. Figure 5 shows the 2-norm curves under different input PGAs for the various working conditions; Figure 5a-c are the 2-norm deviation diagrams of section B-B, section C-C, and section D-D, respectively. The 2-norm deviation index reflects the difference between the two signals: if the value of μ is zero, the two signals are exactly the same. On the different sections, the value of μ was the largest when the fault dip was 30°, followed by 45°, and was smallest for the ordinary tunnel, which means that a smaller fault dip corresponded to a greater deviation of the acceleration response signal from the reference signal. The value of μ for each section was less than 0.315, indicating that the test results would not be affected by boundary effects.
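A minimal sketch of the deviation-index calculation is given below. It assumes the common relative form of the index (2-norm of the difference between the lining and table records divided by the 2-norm of the table record), since the exact normalization in the source formula could not be fully recovered; the signals shown are synthetic stand-ins, not the measured A0/A1/A3/A5 records.

```python
import numpy as np

def deviation_index(a_lining: np.ndarray, a_table: np.ndarray) -> float:
    """2-norm deviation index mu between the lining record (target) and the
    shaking-table record (reference); mu = 0 means the two signals are identical."""
    a_lining = np.asarray(a_lining, dtype=float)
    a_table = np.asarray(a_table, dtype=float)
    return float(np.linalg.norm(a_lining - a_table) / np.linalg.norm(a_table))

# Synthetic example: a 7.5 Hz reference sine and a slightly amplified, noisy copy.
t = np.linspace(0.0, 2.0, 400)
reference = np.sin(2.0 * np.pi * 7.5 * t)
target = 1.1 * reference + 0.02 * np.random.default_rng(0).standard_normal(t.size)
print(round(deviation_index(target, reference), 3))
```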
Acceleration Time History
Because of the large amount of test data, the A3 (crown) and A4 (invert) test points on section C-C were used for analysis in the four sets of test conditions. Figure 6 shows the acceleration time history curves and their corresponding Fourier spectra of the different working conditions at an input acceleration of 0.6 g and a frequency of 7.5 Hz. The acceleration time history curves were all sinusoidal fluctuations, and the fluctuation range at the crown was larger than that at the invert. The main frequency of vibration was 7.5 Hz, the same as the input frequency, indicating that the fault did not affect the fluctuation law or the main frequency of vibration. However, there were obvious differences in the acceleration fluctuation range and Fourier amplitude under the different working conditions. In Figure 6a, the acceleration peaks at the crown and invert were 0.68 g and 0.66 g, and the Fourier amplitudes were 0.68 gs and 0.65 gs, respectively. Figure 6b shows that the peak accelerations at the crown and invert were 0.94 g and 0.91 g, and the Fourier amplitudes were 0.83 gs and 0.78 gs, respectively. In Figure 6c, the acceleration peaks at the crown and invert were 0.84 g and 0.80 g, and the Fourier amplitudes were 0.77 gs and 0.73 gs, respectively. Figure 6d shows that the acceleration peaks at the crown and invert were 0.75 g and 0.74 g, and the Fourier amplitudes were 0.75 gs and 0.74 gs, respectively. The acceleration peak values and Fourier amplitudes of the fault-crossing tunnels were obviously larger than those of the ordinary tunnel, indicating that the fault had an amplifying effect on the acceleration response. Figure 7 shows histograms of the acceleration peaks on the three sections under the four test conditions. Under the same input peak acceleration (0.6 g), the peak acceleration at the crown on each section increased as the fault dip decreased, and the peak acceleration of the ordinary tunnel was the smallest. Taking the peak acceleration of the ordinary tunnel as the control group, the increase amplitude of the peak acceleration under the different inclination angles was calculated. Table 6 shows the peak increase range at the crown under the different working conditions. At the crown, the peak increase of section C-C grew the most as the dip angle decreased, and the peak increase of section B-B was slightly larger than that of section D-D. Section C-C was at the fault location, and B-B was in the fault's hanging wall, which means that the fault aggravated the acceleration dynamic response and made the acceleration changes on the two sides of the fault asymmetrical. Figure 7b shows the acceleration peak diagrams of each section at the invert measuring points under the different working conditions, which follow the same law as the crown. Table 7 shows the increase amplitude of the peak acceleration at the invert under the different working conditions. When the dip angle was 75°, the increase amplitude was between 108% and 117%; when the dip angle was 45°, the increase range was 115%-127%; and when the dip angle was 30°, the increase range was 126%-144%. The increase became greater as the dip angle decreased, indicating that the amplifying effect of the fault dip on the acceleration response cannot be ignored. The acceleration peak amplification factor was defined as the ratio of the peak acceleration collected at a measuring point to the input peak acceleration [26,32,33].
The distribution of the amplification factor at each cross-section measuring point under the four test conditions is shown in Figure 8. When the PGA was 0.1-0.4 g, the amplification factor decreased uniformly; when the PGA was 0.4-0.8 g, the amplification coefficient decreased sharply. This was because, in the early stage of vibration, the surrounding rock underwent large movement, and the high input PGA led to nonlinear development of the surrounding rock and an increase in its damping and energy dissipation. Across the different sections, the magnification factor of each test condition in Figure 8b was greater than that in Figure 8a,c, which was related to the location of the fault at section C-C. The fault weakened the constraint effect of the surrounding rock and strengthened the tunnel response. Smaller fault dips resulted in larger magnification factors: because the contact area between the fault and the tunnel increases as the dip angle decreases, a smaller distance between the fault and the tunnel led to a greater acceleration response. The acceleration peak ratio α was defined as the ratio of the output acceleration peak values of the fault-crossing tunnels and the ordinary tunnel at the same measuring point. Figure 9 shows graphs of the acceleration peak ratios under the different test conditions and input PGAs. Figure 9a is a graph of the peak acceleration ratio of the different test conditions on each section when the input PGA was 0.1 g. The figure shows that the three curves all have a triangular distribution; the large value of α on section C-C means that the vibration response at the fault was relatively strong. The value was largest for the 30° dip angle, followed by 45°, and smallest for 75°; the maximum was 1.83. Figure 9b-d show the acceleration peak ratio graphs when the PGA was 0.2 g, 0.4 g, and 0.6 g, respectively. As the PGA increased, the distribution law of the α value remained unchanged, but the α value continued to decrease. Therefore, the response at the fault is strong, and this effect became more pronounced as the dip of the fault decreased. Cracks could easily be produced and cause structural damage, which is unfavorable for the seismic design of the tunnel.
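The two ratios used in this subsection are simple quotients, restated below for clarity; the numerical values in the example loosely echo the 0.6 g crown peaks quoted above and are for illustration only.

```python
def amplification_factor(peak_at_point_g: float, input_pga_g: float) -> float:
    """Acceleration peak amplification factor: measured peak / input peak."""
    return peak_at_point_g / input_pga_g

def peak_ratio_alpha(peak_fault_tunnel_g: float, peak_ordinary_tunnel_g: float) -> float:
    """Peak ratio alpha: fault-crossing tunnel peak / ordinary tunnel peak at the same point."""
    return peak_fault_tunnel_g / peak_ordinary_tunnel_g

# Illustrative 0.6 g crown peaks from the text (0.94 g fault-crossing case vs. the 0.68 g case).
print(round(amplification_factor(0.94, 0.6), 2))   # ~1.57
print(round(peak_ratio_alpha(0.94, 0.68), 2))      # ~1.38
```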
Dynamic Strain
Much research has shown that the crown, invert, arch shoulder, and arch springing of tunnel lining are easily damaged under strong earthquakes [23,[34][35][36][37]. Therefore, this paper focused on the monitoring points on the inner and outer sides of the crown and invert and analyzed the dynamic strain response characteristics of the tunnel lining under different fault dip angles. Figure 10 shows the strain time history curves of the inner and outer sides of the upper dome and invert of section C-C under different test conditions. When the input PGA was 0.6 g, the strain time history curve fluctuations of the monitoring points of each working condition were similar to the input wave waveform, and they were all sinusoidal fluctuation curves. The strain value of the internal monitoring point was greater than that of the external monitoring point. The strain value of the monitoring points inside and outside the crown fluctuates in the range of negative values, and the strain value fluctuates within the range of positive values inside and outside the invert, indicating that the vault was in compression and the invert was in tension. However, the strain value of the lining at each monitoring point cannot be restored to zero, resulting in residual strain. The main reason for this pattern is that the surrounding rocks of the tunnel underwent permanent deformation under the action of the input wave, which caused additional seismic strain after the lining was vibrated. However, the strain value of the measuring point on each test condition was quite different. When the dip angle was 30°, the strain value and fluctuation range were the largest, followed by the dip angle of 45°, and the dip angle of 75° was the smallest. These three experimental conditions were much larger than the normal tunnel in the control group. Therefore, a smaller fault dip results in a greater seismic load applied to the tunnel lining across the fault. Figure 11 shows the peak strain when the input PGA is 0.6 g. Histograms of the strain peaks of the three monitoring sections are shown under four test conditions. In Figure 11a, the peak strain of the crown on each section increases as the dip angle decreases. The maximum strain peaks of the monitored sections B-B, C-C, and D-D were −410 με, −500 με, and −350 με, respectively. The strain peak at the fault was the largest, and the tunnel far away from the fault had a smaller strain value during vibration. Figure 11b was a graph of the peak strain of the invert measuring points on each section. The maximum strain peaks of the monitored sections B-B, C-C, and D-D were 320 με, 420 με, and 280 με, respectively. In sum, the fault clearly increases the strain response amplitude at section C-C, the axial strain response law was changed, and the strain peak values at different fault dip angles were different.
Peak Strain
Therefore, according to Figure 11a,b, taking the peak strain of the ordinary tunnel as the control group, the increase amplitude of the peak strain at different inclination angles was calculated (Tables 8 and 9). At the crown, when the dip angle was 75°, the increase range of the peak strain was 212-292% (Table 8); when the dip angle was 45°, the increase range was 283-342%; and when the dip angle was 30°, the increase range was 327-417%. At the invert, when the dip angle was 75°, the peak strain increase range was 191-270% (Table 9); when the dip angle was 45°, the increase range was 228-320%; and when the dip angle was 30°, the increase range was 300-420%. The above data reveal that there was little difference between the increase ranges for the dip angles of 75° and 45°, but the increase range grew greatly from 45° to 30°. This shows that the strain response of the tunnel was sensitive to the dip angle of the fault: a smaller dip angle led to a greater increase in the peak strain. The above analysis reveals that the strain peak value of section C-C was the largest; therefore, section C-C was used to study the response law under different PGAs. Figure 12 shows the strain peaks at different monitoring points on section C-C under different PGAs. Figure 12a is a graph of the peak strain changes at the monitoring points inside and outside the crown under different test conditions. The monitoring points at the crown showed compressive strain, and the peak value of the negative strain increased under the various working conditions; however, with the increase of PGA, the rate of increase of the peak strain decreased. The peak strains of the internal monitoring points were larger than those of the external monitoring points, and as the dip angle of the fault decreased, the difference between the internal and external strain peaks increased. In Figure 12b, the monitoring points at the invert showed tensile strain; as the PGA increased, the peak value of the positive strain increased. In general, the peak strain and its increase in the fault-crossing tunnels were far greater than those of the ordinary tunnel; the values were largest for the dip angle of 30°, followed by 45°, and smallest for 75°. The peak strain ratio (β) was defined as the ratio of the dynamic strain peak value of the cross-fault tunnel to that of the ordinary tunnel at the same section C-C monitoring point [27]. Figure 13 shows the strain peak ratios under different test conditions on section C-C. Figure 13a shows the β values of the monitoring points inside and outside the crown under different PGAs. The β values were all greater than 1 and increased as the input PGA increased; the increase in the β values was larger up to a PGA of 0.2 g and then tended to plateau. The β values of the different inclination angles differed: the 30° dip angle gave the largest values and 75° the smallest. Figure 13b shows the β values of the monitoring points inside and outside the invert under different PGAs, and the distribution law was the same as in Figure 13a. According to the above analysis, the seismic strain of the tunnel lining was increased by about 1.2-4.2 times by the fault, which significantly increased the lining strain. As a result, the tunnel could be damaged under a smaller PGA input, and the seismic resistance of the tunnel lining structure would be weakened.
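The strain quantities above reduce to simple ratios of recorded peaks. The sketch below is a generic illustration, not the authors' code; in particular, whether the "increase amplitude" in Tables 8 and 9 is the cross-fault peak expressed as a percentage of the control peak or the relative change ((ratio − 1) × 100) is an assumption noted in the comments.

```python
def peak_strain(strain_history):
    """Peak absolute dynamic strain at a monitoring point (micro-strain)."""
    return max(abs(s) for s in strain_history)

def peak_strain_ratio(fault_peak, ordinary_peak):
    """Peak strain ratio beta: dynamic strain peak of the cross-fault tunnel
    over that of the ordinary tunnel at the same monitoring point."""
    return fault_peak / ordinary_peak

def strain_increase_percent(fault_peak, ordinary_peak):
    """One plausible convention for the 'increase amplitude' in Tables 8-9:
    the cross-fault peak as a percentage of the control peak. Using the
    relative change instead, (ratio - 1) * 100, is equally possible; the
    choice here is an assumption, not taken from the paper."""
    return 100.0 * fault_peak / ordinary_peak
```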
Conclusions
Based on the prototype of the Xianglushan Tunnel Project in Central Yunnan, shaking table model tests were used to study the dynamic response characteristics of tunnel structures under different fault inclination angles (30°, 45°, and 75°). The results of four groups of experiments, three groups of fault-crossing tunnels and one group of ordinary tunnel, were compared. The acceleration and dynamic strain response characteristics of the tunnel lining across different fault dip angles were discussed, and some suggestions for specific engineering design were proposed. The main conclusions are as follows: (1) Compared with the ordinary tunnel, the acceleration fluctuation law of the tunnel under different fault inclination angles was similar, and the main frequency of vibration was the same as the input frequency. However, the fault-crossing tunnels had a significant amplification effect on acceleration, and the amplification differed with the fault dip angle. When the dip angle was 30°, the acceleration amplification factor was the largest, 45° was in the middle, and 75° was the smallest, indicating that the fault intensifies the acceleration amplification: the smaller the fault dip, the more significant the amplification effect. (2) Compared with the acceleration peaks of the different sections in the ordinary tunnel, the acceleration response law in the axial direction was changed by the fault. The acceleration peaks of the sections at the fault were significantly larger than those of the sections on both sides. In addition, when the inclination angle was 75°, the increase range was 108%-117%; when the inclination angle was 45°, the increase range was 115%-127%; and when the inclination angle was 30°, the increase range was 126%-144%. As the inclination of the fault decreases, the tunnel acceleration response becomes stronger, indicating that the smaller the fault dip, the stronger the dynamic response of the tunnel lining structure. (3) Under the four test conditions, the difference in the peak strain between the inner and outer sides of the cross-sections was small, and even negligible at some monitoring points. Along the tunnel axis, the strain peaks of the test conditions were quite different, and the strain peaks at the fault were larger than those of the other sections. Smaller dip angles corresponded to larger strain peaks. From the perspective of tunnel seismic resistance, smaller fault dip angles led to a stronger dynamic response, which was not conducive to tunnel seismic resistance. (4) Compared with the ordinary tunnel strain response, when the input acceleration was 0.6 g, the strain values of the cross-section at the fault were magnified. When the dip angle was 75°, the peak strain of the crown was magnified by 2.7 times, at 45° by 3.2 times, and at 30° by 4.2 times. This shows that as the dip angle of the fault decreases, the magnification continues to increase; smaller dip angles led to sharper increases. Because of the limitations of the experimental conditions, only three sets of fault dip angles and one group of ordinary tunnel were examined in this experiment to analyze the dynamic response law of the tunnel lining across different fault dip angles. Additional research is needed to identify the change rule under other dip angles. However, the dip angles selected in this experiment and many of the conclusions of this study are generalizable to other similar tunnel projects.
|
v3-fos-license
|
2023-02-26T14:12:18.371Z
|
2022-05-03T00:00:00.000
|
257187055
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://jesit.springeropen.com/track/pdf/10.1186/s43067-022-00049-y",
"pdf_hash": "2c784df25bf0571a23c118341e9637752f2bb614",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42490",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"sha1": "2c784df25bf0571a23c118341e9637752f2bb614",
"year": 2022
}
|
pes2o/s2orc
|
Butterfly optimizer assisted Max–Min based multi-objective approach for optimal connection of DGs and optimal network reconfiguration of distribution networks
Currently, the electrical distribution system is experiencing challenges such as low system efficiency due to substantial real power losses, a poor voltage profile, and inadequate system loadability as a result of the tremendous increase in system load demand. Therefore, distribution system operators are searching for ways to improve system efficiency and loadability. Distributed Generation technology has attracted a lot of researchers' interest recently because of its enormous technological advantages in dealing with the aforementioned issues. This work presents a Max-Min based multi-objective optimization approach for optimal connection of distributed generators (OCDG) in the presence of optimal distribution network reconfiguration (ODNR) to enhance the system loadability (λmax) and to reduce real power loss. Two scenarios are taken to achieve the proposed objectives. Scenario-1 deals with the enhancement of loss mitigation and system loadability. In scenario-2, to extract maximum benefits with a smaller amount of real power injected by DGs into the system, the DGs' real power injection is taken as one of the objectives. Under each scenario, three cases are investigated. Case 1 and case 2 deal with single-objective optimization, whereas case 3 deals with multi-objective optimization. The Butterfly Optimization (BO) technique is implemented for the optimization of the proposed objectives. The proposed method is tested on the 33-bus and 69-bus radial distribution test systems. To test the potential of the BO algorithm, the outcomes are contrasted with the suitable results that are accessible in the literature. From the outcomes, it was observed that the real power loss of the system is reduced by (75-89)% and the loadability is enhanced by (94-121)% with the injection of 64% kVA by DGs into the 33- and 69-bus systems.
find the best location for Type-1 DGs and Type-3 DGs (0.85 leading power factor) under three different load levels of the distribution network to reduce power loss. According to the results, the best percentage of loss reduction is obtained when Type-3 DGs are placed optimally in the distribution network. A TLBO-GWO optimization technique was used by the authors of [18] to place Type-1 DGs and Type-3 DGs with optimum power factors in the distribution network in order to reduce I²R loss and improve reliability. For Type-3 DGs operating with an optimized power factor, the appropriate placement of these DGs minimizes network power loss to the lowest possible value, as evidenced by the above-mentioned articles. Therefore, in this work, we have chosen the optimal placement of Type-3 DGs with an optimized power factor in order to meet our goals. In order to fulfil the goals of reducing real power loss, improving the voltage profile, and balancing the load, an optimal distribution network reconfiguration problem is designed to determine the on/off status of the tie and section switches positioned in the system while satisfying the technological restrictions. Bacterial foraging optimization and an improved selective BPSO algorithm [19,20] are used for the optimal network reconfiguration problem to reduce I²R loss. In [21,22], a mathematical objective function is devised to reduce real power loss and improve the voltage profile of the network. In [23], a multi-objective Max-Min strategy is presented for minimizing I²R loss, balancing the load between branches and feeders, and minimizing the number of switch operations. Type-1 DGs have been handled in [8,24] to reduce the system's I²R loss using simultaneous ODNR and appropriate connection of Type-1 DGs. With the novel UVDA optimization technique, researchers in [25] attempted to minimize the real power loss in the distribution system by connecting Type-3 DGs optimally while also addressing optimal network reconfiguration issues.
Some researchers have addressed the OCDG and ODNR problems to enhance the loadability of the network. The loadability (λmax) of the system is defined as the maximum increase in network load level before system voltage instability occurs. Figure 1 depicts loading curves A and B of a system without and with the connection of DGs, respectively. From Fig. 1, it is observed that curve B has better system loadability than curve A, due to the optimal connection of DGs in the reconfigured network. It is also noticed from Fig. 1 that enhancement of system loadability also improves the network voltage profile, i.e., at each loading level, curve B has a better voltage magnitude in comparison with curve A.
The authors in [26-28] addressed the ODNR problem to enhance system loadability, making use of a fuzzy adaptation of the optimization algorithm, a discrete ABC algorithm, and an enhanced HSO algorithm, respectively, and deduced that ODNR enhances system loadability. In [29], the OCDG problem was used for the improvement of system loadability employing a hybrid PSO-k-matrix algorithm, with the conclusion that, with 40% real power injection by DGs into the system, the real power loss was mitigated by 65-70% and the loadability improved by 15-40%. Researchers in [30] took the OCDG and ODNR problems together for the enhancement of system loadability and concluded that the utmost enhancement of system loadability is obtained when the DGs are connected optimally in the optimally reconfigured network. From the latter two papers, it was observed that even though the system loadability is improved to its utmost value, the percentage of real power loss mitigation is not up to the mark. It was also observed from [29,30] that improving the loadability of the system also improves its voltage profile. Therefore, in this work, the authors considered only the improvement of real power loss reduction and system loadability, which in turn also improves the voltage profile of the system. Since the loadability of the system should be improved with respect to the 100% load level of the system, the authors did not consider load variations in this work. To extract the maximum number of benefits with a smaller amount of power injected by DGs into the system, the authors have taken the DGs' penetration level as one of the objectives. To improve more than one objective at a time, researchers in [7,10-12,17] used weighted-factor, Pareto-based, or Max-Min based multi-objective methods. Among them, the Max-Min based multi-objective method has advantages such as no need to choose weights or form fronts. Moreover, since the DGs' penetration level is taken as one of the objectives, this drove the authors to select the Max-Min multi-objective method rather than a Pareto-based multi-objective method.
Therefore, in this work, a multi-objective approach with the Max-Min method is used to mitigate the real power loss and maximize the system loadability (λmax). To improve the desired objectives, two scenarios are considered, i.e., without and with the DGs' real power injection objective function. Under each scenario, three cases are considered: the optimization of single objectives is considered in case-1 and case-2, and multi-objective optimization is considered in case-3. Each case has two sub-cases: the optimal connection of DGs in the initial configured network, and the optimal connection of DGs in the optimal reconfigured network. The BO algorithm is chosen to optimize the proposed objectives. The rest of the paper is organized as follows: Section 2 introduces the mathematical formulation of the work, Section 3 gives brief insights into the BO optimization technique and its implementation, and Section 4 illustrates the scenarios considered in this work and the associated results.
Network real power loss
The real power loss (P_loss) has to be minimized for the enhancement of distribution system efficiency; the corresponding objective is f1 = P_loss = Σ_i J_i² R_i (summed over the nbr branches),
where J and R are the branch current and branch resistance vectors of size nbr (number of branches). A backward/forward sweep-based load flow [31] is used to obtain P_loss.
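The excerpt names a backward/forward sweep load flow [31] as the tool for obtaining P_loss but does not reproduce it. The following Python sketch is a generic, minimal illustration of that class of solver, not the authors' implementation: it assumes a radial feeder whose branch list is ordered from the substation outwards (bus 0 as slack) and computes the total branch loss as Σ |J|²R.

```python
def bfs_load_flow(branches, loads, v_slack=1.0, tol=1e-8, max_iter=100):
    """Minimal backward/forward sweep load flow for a radial feeder.

    branches: list of (from_bus, to_bus, R, X) in p.u., ordered from the
              substation outwards (each from_bus is upstream of to_bus).
    loads:    dict {bus: (P, Q)} in p.u.; bus 0 is the slack/substation.
    Returns (bus voltages, total real power loss) in p.u.
    """
    buses = {0}
    for f, t, R, X in branches:
        buses.update((f, t))
    V = {b: complex(v_slack, 0.0) for b in buses}
    I_br = {}
    for _ in range(max_iter):
        # Backward sweep: load currents plus accumulated downstream branch currents.
        I_acc = {b: (complex(*loads.get(b, (0.0, 0.0))) / V[b]).conjugate() for b in buses}
        for f, t, R, X in reversed(branches):
            I_br[(f, t)] = I_acc[t]
            I_acc[f] += I_acc[t]
        # Forward sweep: update bus voltages from the substation outwards.
        max_dv = 0.0
        for f, t, R, X in branches:
            v_new = V[f] - I_br[(f, t)] * complex(R, X)
            max_dv = max(max_dv, abs(v_new - V[t]))
            V[t] = v_new
        if max_dv < tol:
            break
    p_loss = sum(abs(I_br[(f, t)]) ** 2 * R for f, t, R, X in branches)
    return V, p_loss

# Hypothetical 3-bus example: substation (bus 0) feeding buses 1 and 2 in series.
branches = [(0, 1, 0.01, 0.02), (1, 2, 0.02, 0.04)]
loads = {1: (0.3, 0.2), 2: (0.2, 0.1)}   # p.u. on a common base
V, loss = bfs_load_flow(branches, loads)
```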
Loadability of the system
The system loadability (λmax) has to be maximized with a view to future load growth on the system.
To obtain the λmax of the system, the authors used the method developed in [32].
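The paper relies on the method of [32] to compute λmax, which is not reproduced in the excerpt. Purely as an illustration of the concept, and not of the method in [32], the sketch below estimates the maximum load multiplier by repeatedly scaling all loads and re-running the bfs_load_flow sketch shown earlier until the voltage solution becomes unacceptable; the step size, voltage floor, and failure criteria are arbitrary assumptions.

```python
def estimate_loadability(branches, loads, lam_step=0.05, v_min=0.5, lam_max=20.0):
    """Crude repeated-power-flow estimate of the maximum load multiplier.

    Scales every load by lambda and re-runs the sweep load flow until the
    solution collapses or violates the assumed voltage floor; the last
    feasible lambda is returned as the loadability estimate.
    """
    lam, lam_feasible = 1.0, 1.0
    while lam <= lam_max:
        scaled = {b: (p * lam, q * lam) for b, (p, q) in loads.items()}
        try:
            V, _ = bfs_load_flow(branches, scaled)
        except (ZeroDivisionError, OverflowError):
            break
        if min(abs(v) for v in V.values()) < v_min:
            break
        lam_feasible = lam
        lam += lam_step
    return lam_feasible
```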
DGs penetration level
Placing Distributed Generators in the distribution network changes the distribution system characteristics [33,34], such as introducing bi-directional power flows, changing the passive distribution network into an active distribution network, and changing fault current levels. Therefore, to maintain the quality of the network, some authors in the literature have limited the DGs' real power injection into the distribution system. The authors in [9,29,35] limited the DGs' real power penetration into the distribution system to 40% and 50%, and the authors in [36] took the DGs' penetration level as one of the objectives and limited the DGs' real power penetration into the system without violating stability margins. It was also observed from the literature that, at lower DG penetration levels, a significant increase in the penetration level results in a significant improvement in technical parameters, whereas at higher penetration levels a significant increase in the penetration level results in only an insignificant improvement in the benefits of the system. Therefore, in this paper, instead of limiting the DGs' real power injection to a fixed percentage, say 40% or 50%, the authors considered the DGs' penetration level as one of the objectives in scenario-2, along with the objectives considered in scenario-1, and a detailed analysis comparing the scenario-1 and scenario-2 outcomes is presented in the results section.
The DGs' real power injection into the system is mathematically modeled and taken as one of the objectives, f3 = P_T,DG = Σ_k P_DG,k (summed over the ndg DG units),
where P_DG,k is the real power delivered by the kth DG unit and P_T,DG is the total real power delivered by the DG units.
Max-Min method
In [23], the authors applied the multi-objective Max-Min method to the optimal network reconfiguration problem to select compromise solutions between the objectives. The Max-Min method assigns a membership function to each objective function, with a value in the range [0, 1]. The membership function for the minimization of the kth objective function is given as MF_k = (F_k^max − F_k) / (F_k^max − F_k^min),
where F_k, F_k^max, and F_k^min are the kth objective function value and the maximum and minimum values of the kth objective function, respectively. For the maximization of the kth objective function, the reciprocals of the kth objective function value and of its minimum and maximum values are taken to obtain F_k, F_k^min, and F_k^max, respectively. The value of F_k^min is taken from the outcome of the corresponding single-objective optimization, and the value of F_k^max is taken from the base load flow results. Since F_k is subtracted from F_k^max in the numerator of the membership value (MF_k) of an objective function, the objective function with the highest MF_k value is well improved and the objective function with the lowest MF_k value is less improved in terms of minimization. A fuzzy decision for a compromise solution is then defined as the choice that maximizes the lowest MF_k value. In other words, the multi-objective function is transformed into a single objective by maximizing the minimum value among all membership values; this maximization problem is then converted into an equivalent minimization problem for the optimizer.
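As a small illustration of how this Max-Min aggregation can be evaluated in code, the sketch below maps each minimization objective to a membership value and takes the smallest membership as the composite fitness of a candidate; the optimizer then maximizes that value (or minimizes its negative). This is a generic sketch, not the authors' implementation; the clipping to [0, 1] is an added safeguard. For example, for case-3a of scenario-1, the bounds quoted later in the paper would be (12, 210.98) kW for the loss and (1/5.1, 1/3.4) for the reciprocal of λmax.

```python
def membership(value, f_min, f_max):
    """Membership of a minimization objective: 1 at its best (minimum)
    value, 0 at its worst (maximum) value, clipped to [0, 1]."""
    mf = (f_max - value) / (f_max - f_min)
    return max(0.0, min(1.0, mf))

def max_min_fitness(objective_values, bounds):
    """Composite Max-Min fitness of one candidate solution.

    objective_values: dict {name: value} for minimization objectives
                      (maximization objectives are passed as reciprocals).
    bounds:           dict {name: (f_min, f_max)} for the same names.
    Returns the smallest membership; the solver maximizes this quantity.
    """
    return min(membership(objective_values[k], *bounds[k]) for k in objective_values)
```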
Constraints
The following constraints need to be satisfied for the optimal network reconfiguration and connection of DGs to the distribution system.
a. The voltage magnitude of each bus in the system should be within the permissible limits, where nb is the total number of buses in the test system. In this paper, we have taken |V_min| = 0.95 p.u. and |V_max| = 1.05 p.u.
b. The magnitude of the current in each branch should be less than the maximum current rating of the respective branch, where nbr is the total number of branches.
c. The power injected by each DG (P_DG,k) must be less than the maximum power limit of the DGs.
where ndg is the number of DGs connected to the system. In this paper, the maximum real power injection by each DG (P_DG,k^max) is limited to the total real power demand of the system. d. The power factor of each DG must be between the minimum (pf_k^min) and unity power factor limits.
In this paper, the minimum power factor of the DG units is limited to 0.8. e. The total real power (P_T,DG) and reactive power (Q_T,DG) injected by the DGs must be less than the distribution system real (P_load) and reactive power (Q_load) demand.
f. Power balance constraints.
where P_sub and Q_sub are the real and reactive powers supplied at the substation. g. The ODNR problem requires checking the radiality status of the network. In this work, the spanning tree technique is used for checking the radiality status of the network [37].
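The spanning tree check itself is cited to [37] and not shown in the excerpt. One common way to implement such a radiality test, offered here as a hedged sketch rather than as the method of [37], is with a union-find structure: a candidate configuration is radial when exactly nb − 1 branches are closed and closing them never creates a loop, which also guarantees that all buses are connected. Buses are assumed to be numbered 0 to nb − 1.

```python
def is_radial(num_buses, closed_branches):
    """Check radiality of a switch configuration with union-find.

    closed_branches: list of (from_bus, to_bus) for switches that are closed.
    Radial means: exactly num_buses - 1 closed branches forming a single
    connected tree (no loops, no islanded buses).
    """
    if len(closed_branches) != num_buses - 1:
        return False
    parent = list(range(num_buses))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for f, t in closed_branches:
        rf, rt = find(f), find(t)
        if rf == rt:            # closing this branch would create a loop
            return False
        parent[rf] = rt
    return True                 # nb-1 loop-free branches => spanning tree
```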
DG Placement performance indices
The following performance indices are considered to evaluate the impact of optimal DGs connection and optimal network configuration on the distribution system.
a. Percentage real power loss reduction, %PLR = ((P_loss^b − P_loss^(DG+NR)) / P_loss^b) × 100, where P_loss^b is the base-case real power loss of the system and P_loss^(DG+NR) is the real power loss of the system after placement of DGs and network reconfiguration.
b. Percentage maximum loadability improvement, %MLI = ((λ_max^(DG+NR) − λ_max^b) / λ_max^b) × 100, where λ_max^b is the base-case maximum loadability of the system and λ_max^(DG+NR) is the maximum loadability of the system after placement of DGs and network reconfiguration.
Butterfly optimization algorithm
In the literature, various researchers have applied several optimization algorithms to the OCDG and ODNR problems. According to the "No Free Lunch" theorem, no optimization algorithm gives exceptional results for all optimization problems: an algorithm may give admirable results for one set of problems and inferior results for another, and, averaged over the whole set of optimization problems, all algorithms are indistinguishable in performance. However, while choosing an optimization algorithm, the authors of this paper took a few considerations into account: since finding the loadability of the distribution system is a very tedious process, they tried to avoid optimization algorithms with a two-stage evolutionary process, such as the cuckoo search and TLBO algorithms, and the algorithm should be easy to implement. The Butterfly Optimization (BO) algorithm is relatively new, and advantages such as its ease of implementation drove the authors to use this algorithm [38-40].
S. Arora and S. Singh developed the Butterfly Optimization (BO) method, a population-based meta-heuristic optimization strategy [41]. The algorithm draws inspiration from the mating and food-seeking habits of butterflies, which rely on their sense of smell to find food and a mating partner. While searching for food, a butterfly releases a fragrance whose strength is related to the quantity of the food source in its neighborhood. The emitted scent is picked up by other butterflies in the swarm, which move toward it if they can detect it. In this manner, butterflies travel from one location to the next in search of a good food source.
It is assumed that all butterflies are searching agents in the BO algorithm. Each agent is assigned a specific location and a distinct fragrance, and the fragrance of each agent is linked to the value of the objective function. The mathematical representation of the fragrance is provided in Eq. 18, f = c I^a,
where f, I, c, and a are the magnitude of the fragrance, the stimulus intensity, the sensor modality, and the power exponent, respectively. In the algorithm, I is taken as the fitness of the respective searching agent.
All agents move to new positions according to the update rules formulated in Eqs. 19 and 20 (the global and local search movements of the BO algorithm); in the solution vectors used here, L indicates the DG locations, S indicates the DG sizes, and pf indicates the DG power factors. 4. Generate the initial solutions using Eq. 23; the set of initial solutions generated in this way is depicted in the matrix X. 5. Calculate the objective function value (fitness value) for each solution set in the matrix X using Eq. 26.
As a whole, the fitness calculation for all the agents is depicted in Eq. 27. Find the solution with the minimum objective function value and declare the corresponding solution set from the matrix X as the global best solution.
6. Set the iteration count to 0. 7. Update the fragrance of the butterflies using Eq. 18. 8. Update the solution of each agent using Eqs. 19 and 20. 9. Calculate the objective function value or fitness value of each updated agent using the sequential process followed in Step 5. 10. Perform greedy selection between the updated solutions and the old solutions. 11. Update the global best solution. 12. If the iteration count is less than the maximum number of iterations, increment it and repeat steps 7-11; otherwise, print out the results, such as the global best solution and the objective function values.
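Since Eqs. 18-27 are referenced but not reproduced in the excerpt, the following Python sketch shows a generic Butterfly Optimization loop of the kind outlined in the steps above. It is a hedged illustration, not the authors' implementation: the global/local search moves follow the commonly published BO update rules, the intensity-to-fragrance mapping for a minimization problem is a simplifying assumption, and constraint handling, discrete switch variables, and the load-flow-based fitness of the OCDG/ODNR problem are all omitted.

```python
import random

def butterfly_optimization(fitness, dim, bounds, n_agents=30, max_iter=100,
                           c=0.01, a=0.1, p_switch=0.8):
    """Generic Butterfly Optimization sketch for a minimization problem.

    fitness: callable mapping a candidate vector to a scalar to minimize.
    bounds:  list of (low, high) per dimension.
    """
    clip = lambda x, lo, hi: max(lo, min(hi, x))
    X = [[random.uniform(*bounds[d]) for d in range(dim)] for _ in range(n_agents)]
    fit = [fitness(x) for x in X]
    best = min(range(n_agents), key=lambda i: fit[i])
    g_best, g_fit = X[best][:], fit[best]

    for _ in range(max_iter):
        for i in range(n_agents):
            I = 1.0 / (1.0 + abs(fit[i]))        # stimulus intensity (assumed mapping)
            frag = c * I ** a                    # fragrance, Eq. 18-style: f = c * I^a
            r = random.random()
            if random.random() < p_switch:       # global search toward the best agent
                step = [(r * r * g_best[d] - X[i][d]) * frag for d in range(dim)]
            else:                                # local search toward two random peers
                j, k = random.sample(range(n_agents), 2)
                step = [(r * r * X[j][d] - X[k][d]) * frag for d in range(dim)]
            cand = [clip(X[i][d] + step[d], *bounds[d]) for d in range(dim)]
            cand_fit = fitness(cand)
            if cand_fit < fit[i]:                # greedy selection (step 10)
                X[i], fit[i] = cand, cand_fit
                if cand_fit < g_fit:
                    g_best, g_fit = cand[:], cand_fit
    return g_best, g_fit
```

In the actual problem, `fitness` would wrap the load flow and the Max-Min composite objective, and the continuous positions would be decoded into DG locations, sizes, and power factors before evaluation.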
Results and discussion
In this section, the proposed BO technique for enhancement of the system loadability (λmax) and real power loss mitigation is applied to the 33- and 69-bus distribution test systems for the scenarios and cases shown in Table 1. Each case is divided into two sub-cases: a) optimal connection of DGs in the initial network without application of the ODNR problem, and b) optimal connection of DGs in the optimal reconfigured network obtained from the ODNR problem. The tuned BO algorithm parameters are shown in Table 2. All simulations were implemented on the MATLAB R2017a platform and carried out on a computer with a Core i7 7200U 3.10 GHz processor and 16 GB RAM.
33-Bus radial distribution system
The line and load data of the system are taken from [29]. The system has 33 section switches and 5 tie switches; the tie switches are normally open. The load on the system is 3.715 MW + j2.3 MVAR. The base-case real power loss is 210.98 kW, the system loadability is 3.4, and the minimum voltage is 0.9038 p.u. From the results of the ODNR problem, the following points are observed.
1. In the case of f1 optimization, the real power loss is reduced to 138.5513 kW, and the system loadability is improved to 4.87. For this case, the switches given by the algorithm are 7, 9, 14, 32, and 37.
2. In the case of f2 optimization, the system loadability is enhanced to 5.23, and the network power loss is reduced to 139.9782 kW. For this case, the switches given by the algorithm are 7, 9, 14, 28, and 32.
3. From the above observations, it is perceived that in the case of f2 maximization, both objectives are improved. Therefore, the optimal switches determined by the algorithm for enhancement of f2 are considered for case-3b.
Table 3 shows the outcomes of the OCDG problem for scenario 1 (Table 1 lists the scenarios and cases considered in this paper; the scenario-1 objectives are power loss minimization (f1) and loadability maximization (f2)). From the outcomes tabulated in Table 3, the following points are observed. In case-1a and case-1b, the real power loss is reduced to 12.7458 kW and 18.7531 kW, respectively; it is observed that the real power loss of the system is reduced to the lowest value when the DGs are placed in the initial configured network. In case-2a and case-2b, the system loadability is improved to 5.1 and 7.23 from 3.4 and 5.23, respectively; it is noticed that the system loadability is improved to the utmost value when the DGs are connected optimally in the optimal reconfigured network, i.e., in case-2b. From the outcomes of case-2a and case-2b, it is also noticed that the real power loss is only reduced to 86.5804 kW and 98.8904 kW, respectively. To improve both loss reduction and system loadability, a multi-objective approach with the Max-Min method is taken in case-3. For case-3a, the minimum (F_k^min) and maximum (F_k^max) objective function values taken for the real power loss are 12 kW and 210.98 kW, and for the maximum loadability are 1/5.1 and 1/3.4. For case-3b, the minimum and maximum objective function values taken for the real power loss are 18 kW and 139.9782 kW, and for the maximum loadability are 1/7.23 and 1/5.23. The convergence graphs for all cases of scenario-1 are shown in Fig. 2.
From the results of case-3a and case-3b, the following points are observed.
1. In case-3a, the system loadability is enhanced to 4.78 from 3.4 and the loss is reduced to 39.1317 kW from 210.98 kW, which shows an improvement in both objectives, unlike in case-1a and case-2a.
2. In case-3b, the system loadability is enhanced to 6.76 from 5.23 and the loss is reduced to 42.7188 kW from 139.9782 kW, which shows an improvement in both objectives, unlike in case-1b and case-2b.
3. In scenario-1, the highest percentage of improvement in both objectives is observed in case-3b, i.e., when the DGs are optimally connected in the optimal reconfigured network while optimizing f1 and f2 using the Max-Min method.
The minimum (F_k^min) and maximum (F_k^max) objective function values taken in scenario-2 for case-1a are 12 kW and 210.98 kW, for case-1b are 1/5.1 and 1/3.4, for case-2a are 18 kW and 139.9782 kW, and for case-2b are 1/7.23 and 1/5.23 (for system loadability). The minimum limit for the DGs' real power injection is taken as 50% of the system real power demand, i.e., 3715 × 0.5 = 1857.5 kW, and the maximum real power injection limit by DGs is taken as the 100% injection level. Table 4 shows the outcomes of the OCDG problem for scenario 2. Figure 3 depicts the comparison between the performance indices of scenarios 1 and 2. From Fig. 3, it is observed that even though there is a significant difference between the % kVA injected by DGs into the distribution system in the scenario-1 and scenario-2 cases, the difference between the performance indices is very small. Therefore, it can be concluded that the optimal placement of DGs in scenario-2 gives a better improvement in the objectives (% PLR and % MLI) with a smaller amount of % kVA injected by the DGs into the system.
From Table 4, the following points are noticed. In the case of f1 and f3 optimization, the loss is reduced to 23.715 kW and 23.446 kW in case-1a and case-1b, respectively; the amount of loss reduction is almost the same for both cases. In the case of f2 and f3 optimization, the system loadability is improved to 4.73 and 6.69 in case-2a and case-2b, respectively, but the loss is reduced only to 55.4613 kW and 56.2606 kW. Therefore, to improve the real power loss reduction along with the loadability, the optimization of f1, f2, and f3 is considered in case-3a and case-3b. The points observed from case-3a and case-3b are that the real power loss is reduced to 45.1702 kW and 46.3242 kW, respectively, and the system loadability is increased to 4.7 and 6.64. From case-3a and case-3b of scenario-2, it is concluded that the optimal connection of DGs in the reconfigured network shows better improvement in both objectives, i.e., loss reduction and system loadability enhancement. The convergence graphs for all cases of scenario-2 are shown in Fig. 4. Based on the above discussion, it can be concluded that among all the cases in scenarios 1 and 2, the highest percentage of improvement in both objectives is observed in case-3b of scenario-1, i.e., with the injection of 74.92% kVA into the system, the real power loss is reduced by 79.75% and the system loadability is increased by 98.92%. An almost equal percentage of improvement in both objectives, with a smaller amount of % kVA injected by the DGs into the system, is observed in case-3b of scenario-2, i.e., with 64.69% kVA injection into the system, the loss is reduced by 78.04% and the system loadability is increased by 95.29%. To assess the capability of the BO optimization technique for the proposed methodology, the results obtained are contrasted with the befitting methods and algorithms that are available in the literature, as shown in Table 5. From Table 5, it is observed that, in the case of power loss minimization by the optimal placing of DGs in the initial configured case and the optimal reconfigured case, the proposed BO algorithm reduces the real power loss by 93.95% and 91.11%, respectively, whereas HTLBO-GWO, HAS-PABC, and UVDA reduce the real power loss by 93.51%, 92.51%, and 87.98%, respectively. In the case of loadability maximization, the BO algorithm improves it by 50%, whereas HPSO improves it by only 48.23%. In scenario-2, in the case of loss minimization, the loss is reduced by 88.76% with 53.01% kW injection by DGs into the system, whereas the BSOA algorithm reduces it by 85.94% with 50% kW real power injection by DGs into the system. In [29], with 40% kW or 47.05% kVA injection by DGs into the system, the real power loss was reduced by 71.75% and the system loadability was increased by 26.76%. But with the proposed method in this paper, with 64.69% kVA injection by DGs into the system, the real power loss is reduced by 78.09% and the maximum loadability is increased by 95.29%, which shows an improvement in both objectives, unlike the method in [29], and demonstrates the efficacy of the proposed method.
69-Bus radial distribution system
The line and load data of the system are taken from [29]. The system has 69 section switches and 5 tie switches; the tie switches are normally open. The load on the system is 3.801 MW + j2.693 MVAR. The base-case real power loss is 224.9515 kW, the system loadability is 3.21, and the minimum voltage is 0.9091 p.u.
From the results of the ODNR problem, the following points are observed. In the case of the individual optimization of the objective functions f1 and f2, the switches given by the algorithm are the same, i.e., 14, 58, 61, 69, and 70. For this switch combination, the real power loss is mitigated to 98.55 kW and the loadability is enhanced to 5.23. Therefore, the above-mentioned optimal switches are considered for the OCDG problem in the optimal reconfigured network case. Table 6 shows the outcomes of the OCDG problem for scenario 1. In case-1a and case-1b, the power loss is reduced to 4.487 kW and 5.3082 kW, respectively; it is observed that the power loss is reduced to the lowest value when the DGs are connected optimally in the initial configured network. In case-2a and case-2b, the system loadability is improved to 4.91 and 7.71, respectively, but the real power loss is only reduced to 89.8601 kW and 93.9651 kW. In case-3a and case-3b, the system loadability is improved to 4.61 and 7.07 and the real power loss is reduced to 30.2921 kW and 25.313 kW, respectively. From the scenario-1 outcomes, it can be deduced that both the loadability and the real power loss reduction are well improved in case-3b. The convergence graphs for all cases of scenario-1 are shown in Fig. 5. Figure 6 depicts the comparison between the performance indices of scenarios 1 and 2. From Fig. 6, it is noticed that the optimal connection of DGs in scenario-2 gives a better improvement in the objectives (% PLR and % MLI) with a smaller amount of % kVA injected by the DGs into the system. Table 7 shows the outcomes of the OCDG problem for scenario 2. In case-1a and case-1b, the real power loss is reduced to 9.6078 kW and 7.0345 kW, respectively, but the system loadability is improved only to 4.09 and 6.4. In case-2a and case-2b, the system loadability is enhanced to 4.51 and 7.04, respectively, but the power loss is reduced only to 35.096 kW and 46.448 kW. Between case-3a and case-3b, better enhancement in both objectives is observed for the optimal connection of DGs in the optimal reconfigured network case, i.e., the real power loss is reduced to 23.8112 kW and the system loadability is enhanced to 6.94. The convergence graphs for all cases of scenario-2 are shown in Fig. 7. Based on the above discussion, it can be concluded that among all the cases in scenarios 1 and 2, better improvement in both objectives with less % kVA injection by DGs is observed in case-3b of scenario-2, i.e., the real power loss is reduced by 89.414% and the maximum loadability is increased by 116.19%.
To assess the capability of the BO optimization technique for the proposed methodology, the results obtained are contrasted with the befitting methods and algorithms that are available in the literature, as shown in Table 8. The proposed algorithm produces the same result as the HPSO algorithm in the literature with respect to the loadability of the system as an objective function, and the proposed algorithm performs well in mitigating the real power loss in comparison with the HTLBO-GWO algorithm.
In [29], with 40% kW or 47.06% kVA injection by DGs into the system, the real power loss was reduced by 87.206% and the system loadability was increased by 27.72%. But with the proposed method in this paper, with 63.98% kVA injection by DGs into the system, the real power loss is reduced by 89.414% and the system loadability is increased by 116.19%, which shows an improvement in both objectives, unlike the method in [29], and demonstrates the efficacy of the proposed method.
Conclusion
In this work, the OCDG and ODNR problems on radial distribution systems have been addressed to enhance system efficiency and to accommodate upcoming load growth via I²R loss mitigation and system loadability enhancement. To achieve the objectives, two scenarios, each consisting of three cases with two sub-cases per case, are considered. The concept of a spanning tree has been used for confirming the radiality status of the system. The BO optimization technique has been used to optimize the proposed objective functions and implemented on the 33- and 69-bus test systems. In both test systems, the highest percentage of improvement in both objectives with a smaller amount of % kVA injection by DGs into the system is observed in case-3b of scenario-2. From the outcomes, it has been observed that the loss of the system is reduced by (75-89)% and the loadability is enhanced by (94-121)% with the injection of 64% kVA by DGs into the 33- and 69-bus systems. The BO algorithm has performed well in optimizing the proposed objectives when compared with the other algorithms in the literature.
|
v3-fos-license
|
2016-05-12T22:15:10.714Z
|
2015-11-30T00:00:00.000
|
2355640
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://downloads.hindawi.com/journals/omcl/2016/1716341.pdf",
"pdf_hash": "bb047c56ec2453b6e93d7e790298d34389d9010a",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42492",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "45a2aede3fa43372a5aa7c1fa3729ca69aa8cf3c",
"year": 2015
}
|
pes2o/s2orc
|
Bypassing Mechanisms of Mitochondria-Mediated Cancer Stem Cells Resistance to Chemo- and Radiotherapy
Cancer stem cells (CSCs) are highly resistant to conventional chemo- and radiotherapeutic regimes. Therefore, the multiple drug resistance (MDR) of cancer is most likely due to the resistance of CSCs. Such resistance can be attributed to some bypassing pathways including detoxification mechanisms of reactive oxygen and nitrogen species (RO/NS) formation or enhanced autophagy. Unlike in normal cells, where RO/NS concentration is maintained at certain threshold required for signal transduction or immune response mechanisms, CSCs may develop alternative pathways to diminish RO/NS levels leading to cancer survival. In this minireview, we will focus on elaborated mechanisms developed by CSCs to attenuate high RO/NS levels. Gaining a better insight into the mechanisms of stem cell resistance to chemo- or radiotherapy may lead to new therapeutic targets thus serving for better anticancer strategies.
Introduction
One of the hypotheses explaining tumor progression suggests the existence of a group of cells with a stem phenotype which preserves tumors through a continuous production of progeny [1]. In recent years, the CSCs hypothesis has gained ground in several cancers [2]. The CSCs mediate tumor resistance to chemo- and radiation therapy and are also capable of invading and migrating to other tissues [3]. Similarly to cancer cells (CCs), the CSCs features include self-renewal capacity, the ability of proliferation, migration to and homing at distant sites, and resistance to toxic agents. Accordingly, CSCs identification and isolation include in vitro (sphere forming, Hoechst dye exclusion, aldehyde dehydrogenase ALDH enzymatic activity, surface markers, colony forming, label retention, and migration) and in vivo (tumor propagation, xenotransplantation) assays. This theory has been recently supported by the findings that, among all malignant cells within a particular tumor, only CSCs have the exclusive potential to generate the tumor cell population [4]. Given these shared attributes, cancer was proposed to originate from transforming mutation(s) in normal stem cells that deregulate their physiological programs [5]. In turn, intrinsic or acquired resistance of CSCs involves mechanisms such as genetic aberrations, quiescence, overexpression of drug transporters, DNA repair ability, and overexpression of antiapoptotic proteins [6]. Intrinsic resistance to chemotherapy is emerging as a significant cause of treatment failure, and evolving research has identified several potential causes of resistance, most of which end up in increased apoptosis [7]. The mechanisms of CSC-related therapy resistance may include ROS resistance, activation of ALDH, active developmental pathways (Wnt, Notch), enhanced DNA damage response, deregulated autophagy, altered metabolism, and microenvironmental conditions [8]. Surprisingly, most of the above-mentioned pathways in CSCs are mediated by redox misbalance and involvement of mitochondria-mediated antioxidant capacity [9].
Figure 1: CSCs survival after chemo-/radiotherapy. The percentage of CSCs in a tumor varies depending on tumor type and tumor stage but generally comprises 0.5-5%. Most CCs in a tumor are killed after radiation or conventional chemotherapy (i.e., CDDP). The most important consequence of this is that although the tumor disappears in some cases (i.e., by imaging such as nuclear magnetic resonance), the percentage of CSCs has not diminished; quite the contrary, it increases in proportion to the whole number of microscopically tumoral cells (reaching 50% or more). CSCs left behind unaffected, due to their chemo- and radioresistance, eventually will undergo metabolic reprogramming to give rise to new CCs and CSCs, nesting in the gap left by the tumor, often with a more aggressive phenotype. The cotreatment of conventional therapy with a more specific drug against CSCs (i.e., LND) in parallel will solve this problem.
The major endogenous source of reactive species in eukaryotic cells is mitochondria. In normal cells, the RO/NS concentration is maintained at a certain threshold required for signal transduction or immune response mechanisms, whereas CSCs, which exhibit an accelerated metabolism, demand high ROS concentrations to maintain their high proliferation rate [10]. The imbalance between ROS generation and detoxification, known as OS, is thought to be involved in cancer development and progression [11,12].
Chemo-/radioresistance to cancer therapy is an unsolved problem in oncology [13]. Numerous studies have attempted to explain the mechanisms of resistance over the last decades. CSCs may be innately resistant to many standard therapies due to a high antioxidant capacity and an inability to undergo apoptosis, thus surviving cytotoxic or targeted therapies (Figure 1) [14]. Here we review the progress of CSC studies made over the last years, focusing on possible mechanisms of CSC radio- and chemoresistance in connection with oxidative stress (OS) and summarizing some therapeutic approaches to overcome this issue.
Resistance of CSCs to Conventional Chemo- and Radiotherapeutic Regimes in Connection to Oxidative Stress (OS)
Although conventional chemotherapy kills most cells in a tumor, it is believed to leave CSCs behind, causing chemo- and radioresistance (Table 2). As a consequence, CSCs persist in the body of cancer patients and in the middle-long term will migrate to the blood to nest in distal organs and metastasize. In the last five years, several protective CSC pathways have been proposed. The multifunctional efflux transporters from the superfamily of human ATP-binding cassette (ABC) proteins are among them. They comprise 49 genes grouped into seven subfamilies (from A to G) with various functions, and at least 16 of these proteins are implicated in cancer drug resistance [15]. These ABC proteins have been known to also participate in the multidrug resistance (MDR) of tumor cells [16]. Recent data demonstrate their role in the protection of CSCs from chemotherapeutic agents [17]. Importantly, they are engaged in redox homeostasis and protection from OS in mammals [18]. Malfunction of the ABCD1 gene impairs oxidative phosphorylation (OXPHOS), triggering mitochondrial ROS production from electron transport chain complexes [19]. ABCC9 is required for the transition to oxidative metabolism [20]. Deficiency of PAAT, a transregulator of mitochondrial ABC transporters, decreases mitochondrial potential and sensitizes mitochondria to OS-induced DNA damage [21]. Drug resistance in colon CSCs is mediated by ABC G member 2 (ABC-G2) and regulated by the Ape1 redox protein [22]. Overall, one may conclude that redox dysregulation of one or several ABC members may significantly impact CSC survival after chemotherapeutic treatment. Decreasing the activity of ABC transporters may therefore overcome drug resistance [23].
On the other hand, developmental pathways such as the Epithelial-Mesenchymal Transition (EMT) play crucial roles in tumor metastasis and recurrence. The EMT process closely resembles the fate of CSCs and is involved in de novo and acquired drug resistance [24]. Altered production of RO/NS is involved in the regulation of CSC and EMT characteristics [25]. Moreover, microRNAs also play key roles in this respect. For example, miR-125b suppressed EMT by targeting SMAD2 and SMAD4 [26]. Moreover, secreted frizzled-related protein 4 (sFRP4) chemosensitized CSC-enriched cells to the most commonly used antiglioblastoma drug, temozolomide (TMZ), by the reversal of EMT. Significantly, the chemosensitization effect of sFRP4 was correlated with the reduction in the expression of the drug resistance markers ABCG2, ABCC2, and ABCC4 [27]. These findings could be exploited for designing better targeted strategies to improve the chemoresponse and eventually eliminate CSCs.
Apoptosis and CSCs Resistance due to Increased Antioxidative Properties
An increasing number of conventional and novel generation chemotherapeutic drugs induce apoptosis through the induction of OS. If decreased RO/NS detoxification in CSCs is indeed a prime factor for chemo- or radioresistance, prooxidant chemicals, such as malonohydrazides, targeting the redox state of pathogenic versus nonpathogenic cells may represent a challenging solution. The most developed drug of this class, STA-4783 (elesclomol), targets OS by Hsp70 induction and induces ROS within CCs [28]. Shepherdin is one of the first rationally designed mitochondrial drugs targeting Hsp90/TRAP1 functions through inhibiting their ATPase activities. The tumor necrosis factor (TNF) receptor-associated protein 1 (TRAP1) is a mitochondrial homologue of Hsp90 [29]. Phosphorylation of TRAP1 by PTEN is responsible for protection from ROS-mediated cell death [30]. Therefore, blocking the ATP pocket of Hsp90 by shepherdin or geldanamycin causes inhibition of the TRAP1 chaperone function and may provide a novel strategy to design anti-CSC drugs [31]. SMIP004 (N-(4-butyl-2-methyl-phenyl) acetamide), a novel anticancer drug, induces mitochondrial ROS formation and disrupts the balance between redox and bioenergetic states [32].
Recent works by Kim et al. identified CD13(+) liver CSCs surviving in hypoxic lesions after chemotherapy, presumably through increased expression of CD13/aminopeptidase N, a ROS scavenger [45]. CD13 also enhances the generation and accumulation of mutations following DNA damage. Therefore, the CD13(+) dormant cancer stem cells must be eradicated fully to achieve complete remission of cancer [46]. The resistance of CD133 positive CSCs to chemotherapy can also be linked with higher expression of BCRP1 and MGMT, as well as the antiapoptosis protein and inhibitors of apoptosis protein families [47,48].
Resistance of glioma to chemo- or radiotherapy is associated with the inability of glioma CSCs to undergo apoptosis. Combined therapy aiming to inhibit the AKT/mTOR signalling pathway and reactivate TP53 functionality allowed cellular apoptosis to be triggered [49]. Rottlerin (ROT), widely used as a protein kinase C-delta (PKCδ) inhibitor, has been found to induce apoptosis via inhibition of the PI3K/Akt/mTOR pathway and activation of the caspase cascade in human pancreatic CSCs [33].
Nuclear factor erythroid 2-related factor 2 (Nrf2) is an essential component of cellular defense against a variety of endogenous and exogenous stresses [50]. NRF2 is an inducible transcription factor that activates a battery of genes encoding antioxidant proteins and phase II enzymes in response to oxidative stress and electrophilic xenobiotics [51]. NRF2 silencing in CSC models known as mammospheres demonstrated increased cell death and a lack of anticancer drug resistance [52]. Moreover, dedifferentiated cells upregulate MDR genes via Nrf2 signaling, suggesting that targeting this pathway could sensitize drug-resistant cells to chemotherapy [53]. Interestingly, bardoxolone methyl (also known as CDDO-Me or RTA 402) is one of the derivatives of synthetic triterpenoids acting via Nrf2 and has been used for the treatment of leukemia and solid tumors [34].
Chemo- and Radioresistance of CSCs due to Impaired Autophagy: Novel Therapeutic Targets
Autophagy, also referred to as "cell cannibalism," is the degradation of cytoplasmic components, protein aggregates, and organelles through the formation of autophagosomes, which are degraded by fusion with lysosomes [54]. This process depends on a group of evolutionarily conserved autophagy-related (ATG) genes [55]. Although autophagy and apoptosis are apparently two different mechanisms, the former promoting cell survival and the latter cell death, they are quite coordinated in the cells. For example, Beclin-1 (Bec1), the mammalian orthologue of yeast Atg6, is part of the class III phosphatidylinositol 3-kinase (PI3K) complex that induces autophagy. Beclin-1 interacts with the antiapoptotic protein Bcl-2, and its dissociation is essential for its autophagic activity [56]. Hypoxia-mediated autophagy has been previously suggested to promote the survival of CSCs of various origins. Hypoxia-inducible factor-1α (HIF-1α), one of the key players of the cell survival response to hypoxia, was shown to convert non-stem pancreatic cancer cells into pancreatic cancer stem-like cells through autophagic mechanisms [57]. HIF-1α induction and NF-κB activation are sufficient to induce the autophagic degradation of breast CSCs [58]. Inhibition of Wnt by resveratrol in breast CSCs [35] and of Notch by honokiol in melanoma SCs [36] suggests the involvement of these autophagy-related players in the regulation of CSC signaling pathways.
Autophagy plays a critical role in the adaptation to stress conditions in CCs and can enhance the radio- and chemoresistance of CSCs by limiting OS and protecting CSC stemness properties [59]. Although the mechanisms inducing autophagy are not fully understood, the connection of CSC resistance to chemo- and radiotherapy is supported by a number of indirect lines of evidence. Platin-derived drugs, which are commonly used in conventional chemotherapeutic treatments, have a role in autophagy. For example, cisplatin (CDDP) preferentially induces autophagy in resistant esophageal CCs EC109/CDDP but not in EC109 cells (parental or sensitive to CDDP) [60]. Moreover, abolition of autophagy by pharmacological inhibitors or knockdown of ATG5/7 resensitized EC109/CDDP cells. In particular, the chemotherapeutic drug oxaliplatin induced autophagy, enriched the population of colorectal CSCs, and participated in maintaining the stemness of colorectal CSCs, thus making the cells more resistant to chemotherapy [61].
The Janus-activated kinase 2 (Jak2)-signal transducer and activator of transcription 3 signaling pathway may play a role in the autophagy-dependent chemoresistance of CSCs derived from triple-negative breast tumors. In a recent study by Choi et al., chloroquine (CQ), an antimalarial reagent which blocks autophagy, was identified as a potential CSC inhibitor [37]. CQ is known to evoke mitochondrial ROS, and ROS scavengers may decrease CQ-induced mitochondrial autophagy [62]. All these facts support the notion of ROS-dependent autophagic survival of CSCs. A recently explored inducible mouse model of mutated Kras revealed that a subpopulation of dormant tumor cells surviving oncogene ablation have features of CSCs and that their tumor relapse is dependent on the expression of genes governing OXPHOS, mitochondrial respiration, and autophagy [63].
Highly synergistic growth inhibition was observed in patient-derived lung CSCs exposed to the multitarget folate antagonist pemetrexed followed by the histone deacetylase inhibitor ITF2357, a known autophagy inducer [38]. A few studies using cultured cells found that melatonin promoted the generation of ROS at pharmacological concentrations [64]. Treatment with melatonin induced glioma CSC death with ultrastructural features of autophagy [65].
Reduced glutathione (GSH) is considered to be one of the most important scavengers of reactive oxygen species (ROS), and its ratio with oxidised glutathione (GSSG) may be used as a marker of oxidative stress [39]. The side population (SP) cells from bladder cancer cell lines, which resemble CSCs in their characteristics, had low ROS levels and a high GSH/GSSG ratio, which might contribute to the radioresistance of CSCs [66]. The SP cells also showed substantial resistance to gemcitabine, mitomycin, and cisplatin compared with the non-SP counterparts and revealed a high autophagic flux associated with ABCG2 expression. Importantly, pharmacological and siRNA-mediated inhibition of autophagy potentiated the chemotherapeutic effects of gemcitabine, mitomycin, and CDDP in these CSCs. This may represent a potent target for the treatment of bladder carcinoma [67]. Screening studies by Jangamreddy et al. identified molecules that were preferentially toxic to CSCs, in particular the K+ ionophore salinomycin [40]. Salinomycin causes mitochondrial dysfunction, decreases ATP production, and induces autophagy [68]. Under hypoxia and/or low glucose levels (glucose being the primary energy source for CCs), its toxicity towards CCs is amplified [69]. The mechanism includes activation of the AMP-activated protein kinase (AMPK), which triggers autophagy, making salinomycin an anti-CSC chemical [70]. The combination of an AMPK agonist such as metformin and the glycolysis inhibitor 2-deoxyglucose (2DG) led to significant cell death associated with sustained autophagy, inhibiting tumor growth in mouse xenograft models [71]. Since AMPK activation was shown to mediate the metabolic reprogramming in drug-resistant CCs, including promoting the Warburg effect and mitochondrial biogenesis, both salinomycin and corresponding inhibitors of AMPK are now suggested to combat the chemo- and radiotherapeutic resistance of CSCs [72].
Another type of selective autophagy, called mitophagy, serves to remove dysfunctional mitochondria from the cells and is often controlled by a moderate level of ROS [73,74]. During mitophagy, dysfunctional mitochondria are engulfed by a double-layered membrane (phagophore) that forms the so-called autophagosome, followed by degradation [75]. Among several drugs inducing mitophagy, the proton pump inhibitor ESOM damages mitochondria through NADPH oxidase and ROS accumulation [76]. ESOM may work as a synthetic lethal reagent which increases cytotoxicity if used upon knockdown of Beclin-1 [77]. Another drug, DCA (dichloroacetate), is a small molecule and a mitochondria-targeting agent. In CCs, DCA induces mitophagy through the accumulation of ROS and the reduction of lactate excretion, followed by an increase of the NAD(+)/NADH ratio [78]. Importantly, paclitaxel-resistant cells contained a sustained mitochondrial respiratory defect. DCA specifically acts on cells with a mitochondrial respiratory defect to reverse paclitaxel resistance. DCA could not effectively activate oxidative respiration in drug-resistant cells but induced higher levels of citrate accumulation, which led to inhibition of glycolysis and inactivation of P-glycoprotein [79].
Overall, the above data provide multiple lines of evidence supporting the idea that impaired autophagy coupled with OS plays an essential role in the development of drug resistance, self-renewal, differentiation, and the tumorigenic potential of CSCs, implying the therapeutic potential of autophagy inhibitors to overcome that issue (Table 2).
OS, Mitochondria, and CSCs
In mammalian systems, RO/NS presumably include the so-called free radicals (•OH, RO•, ROO•, NO•: hydroxyl, alkoxyl, peroxyl, and nitroxyl), superoxide (O2•−) radicals, and peroxides (H2O2, RO2H), and are mainly generated by OXPHOS in mitochondria, whereas, in pathological conditions, high levels of RO/NS can be mitochondria-dependent (ischaemia, loss of cytochrome c, low ATP demand and consequent low respiration rate, diabetes, DNA damage, and mutations), mitochondria-independent, or indirect (cancers, tissue injuries, and inflammatory events) [80,81]. Importantly, being the main source of RO/NS generation, mitochondria are also their primary and most susceptible target. This may evoke a "secondary wave" of OS generated by damaged mitochondria, followed by the formation of extra RO/NS or by the inhibition of detoxifying enzymes and the generation of more RO/NS flux, thus forming a vicious cycle [82]. In fact, decreased mitochondrial priming in colon CSCs responsible for resistance to conventional chemotherapy has been recently determined [83]. The relevance of OXPHOS has also been shown in glioblastoma (GBM) sphere cultures (glioma spheres). Insulin-like growth factor 2 mRNA-binding protein 2 (IGF2BP2) expression provides a key mechanism to ensure OXPHOS maintenance by delivering respiratory chain subunit-encoding mRNAs to mitochondria and contributing to complex I and complex IV assembly [84]. Several antioxidant enzymes, such as the Mn-, Cu-, and Zn-containing superoxide dismutases (SODs), glutathione peroxidase (GPx), glutathione reductase, glutathione S-transferases (GSTs), and catalase, protect DNA from OS [85]. Unlike CSCs, CCs have a higher bioenergetic metabolism, a higher ROS level, and a higher capacity to detoxify RO/NS [86]. These facts may explain overall better cancer survival. In CSCs, the level of RO/NS is not that high compared with the surrounding CCs [87-90]. There can be several reasons for that. The mitochondrial mass can be higher in CSCs, or the mitochondrial functions (ATP production, ΔΨm) can be impaired. However, in recent experiments with lung CSCs, no difference in mitochondrial mass between CSCs and non-CSCs was found [91]. The ΔΨm level and the intracellular concentrations of ATP and ROS were also lower than in non-CSCs. Another possible scenario for low ROS in CSCs could be metabolic reprogramming, which is critical to sustain self-renewal and enhance the antioxidant defense mechanism. This fact is closely related to the adaptation of CSCs to hypoxia, requiring a biochemical trim characterized by a glycolysis-oriented metabolism that counterbalances a poor mitochondrial apparatus. In this metabolic shift, CSCs showed a greater reliance on glycolysis for energy supply compared with the parental cells [92]. On the other hand, ALDHs are a group of enzymes that oxidize aldehydes formed in the process of alcohol metabolism. High levels of the detoxifying enzyme ALDH1 were frequently associated with CSCs, and this marker was used for the identification of CSCs [93]. Recently, Honoki et al. evaluated the cancer spheroid subpopulation of cells from human sarcoma with high ALDH1 activity and found that these cells possess strong chemoresistance and detoxifying capability [94]. The identification of CSCs from human lung CCs revealed cells with high ALDH1 activity, which was associated with a high self-renewal capacity, differentiation, and resistance to chemotherapy [95]. Breast CSCs identified as ALDH1-positive play a significant role in resistance to chemotherapy [96].
It appears that ALDH protects the drug-tolerant subpopulation of cells, including CSCs, from the potentially toxic effects of elevated levels of RO/NS. Not surprisingly, pharmacologic disruption of ALDH activity leads to accumulation of ROS to toxic levels, even within the drug-tolerant subpopulation [97].
Suggested Principles of Drug Design to Overcome CSC Resistance
Some physiological metabolites, such as pyruvate, tetrahydrofolate, and glutamine, act as powerful cytotoxic agents on CSCs when supplied at doses that perturb the biochemical network sustaining the resumption of aerobic growth after the hypoxic dormant state [98]. This indicates that the metabolic state of CSCs must be crucial for their resistance to therapy, because when CSCs need to differentiate and proliferate, they shift from an anaerobic to an aerobic status. The principles of drug resistance in CCs can also apply to CSCs. Cells can become resistant to a drug through (1) active drug efflux by drug transporters such as Pgp, MRP, and BCRP; (2) loss of cell surface receptors and/or drug transporters or alterations in membrane lipid composition; (3) compartmentalization of the drug in cellular vesicles; (4) altered/increased drug targets; (5) metabolic disruption due to OXPHOS; (6) alterations in the cell cycle; (7) increased drug metabolism/enzymatic inactivation; (8) active damage repair; and (9) inhibition of apoptotic pathways.

However, targeting RO/NS when designing a novel therapeutic strategy to overcome chemo- and/or radioresistance of CCs is associated with some difficulties and should be considered with extra care, because antioxidant systems not only remove oxidants but also maintain them at an optimum level [99]. Therefore, besides obvious pharmacological properties (low toxicity, subnanomolar active concentrations, solubility, and oral bioavailability), the following principles should be taken into account when rationally designing such drugs: (i) they should transiently interact with proteins that block autophagy or promote apoptosis to allow sufficient RO/NS accumulation; (ii) ideally, such drugs should have an antagonist with higher affinity for the drug and lower affinity for surrounding molecules; (iii) a specific moiety for selective delivery to the target organelles should be considered; and (iv) adverse side effects should be kept low. Although a number of drugs triggering apoptotic or autophagic events have been produced for the treatment of cancer, only a few of them meet the above criteria; these are summarized in Table 1. A few other drugs deserve mention. Alpha-tocopheryl succinate (α-TOS) is an anionic analogue of vitamin E [100] whose mechanism of action involves interaction with the ubiquinone-binding site of mitochondrial complex II and concomitant inhibition of succinate dehydrogenase (SDH) activity [101], accompanied by recombination with molecular oxygen to yield ROS and permeabilization of mitochondria [102]. The proapoptotic drug BMD188 (cis-1-hydroxy-4-(1-naphthyl)-6-octylpiperidine-2-one) generates mitochondrial ROS and triggers apoptosis through activation of caspase-3; it has been reported to inhibit the primary growth of prostate CSCs [41,103]. The antineoplastic drug lonidamine (LND, 1-(2,4-dichlorobenzyl)-1H-indazole-3-carboxylic acid) has been shown to inhibit glycolysis and induce mitochondria-mediated apoptosis involving caspase-9, caspase-3, and the Akt/mTOR pathway [104]. The natural terpenoid aldehyde gossypol has been shown to increase ROS and induce apoptosis and necrosis via inhibition of Bcl-2, activation of caspase-3, release of cytochrome c from mitochondria, and displacement of BH3-only proteins from Bcl-2 [42]. Both gossypol and its derivative apogossypolone (ApoG2) have also been shown to induce autophagy in several CCs through Beclin-1-mediated ROS upregulation [105-108].
Polyunsaturated fatty acids (PUFAs) induce apoptosis and autophagy through mitochondrial ROS-mediated Akt-mTOR signaling [43,109]. Finally, inhibitors of the ROS-scavenging oxidoreductase thioredoxin reductase (TrxR) offer a promising avenue for CSC intervention.
Concluding Remarks
It is becoming clear that a single drug against cancer is unlikely to cure the disease, as CCs learn to become resistant over the medium to long term of treatment and persist hidden in the body of cancer patients until reactivation. In principle, conventional treatments effectively induce apoptosis or autophagy in the bulk of the tumor, particularly in CCs, but without affecting the CSCs. The fatal consequence is not only that conventional therapy favors the persistence of CSCs but also that these cells resume growth more aggressively. The reasons for CSC resistance to the induction of apoptosis, autophagy, or hypoxia are closely related to their metabolic status, which in turn depends on mitochondria as the main source of energy. Synthetic lethality or combinatorial therapy, followed by animal studies to specify dose and timing and minimize side effects, should be considered for effective targeting of CSCs.
Conflict of Interests
Matilde E. Lleonart is a FIS Investigator (CP03/00101). Alex Lyakhovich's visit is sponsored by ICRC/Masaryk University, Brno, Czech Republic.
|
v3-fos-license
|
2018-12-05T14:21:57.759Z
|
2015-01-01T00:00:00.000
|
55498707
|
{
"extfieldsofstudy": [
"Political Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.sajs.co.za/article/download/3522/4393",
"pdf_hash": "9260ef0c77173f9ea5dddfe1b23278e30563a808",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42494",
"s2fieldsofstudy": [
"Psychology"
],
"sha1": "9260ef0c77173f9ea5dddfe1b23278e30563a808",
"year": 2015
}
|
pes2o/s2orc
|
Plagiarism in South African management journals
Plagiarism by academics has been relatively unexplored thus far. However, there has been a growing awareness of this problem in recent years. We submitted 371 published academic articles appearing in 19 South African management journals in 2011 to the plagiarism detection software program Turnitin™. High and excessive levels of plagiarism were detected. The cost to government of subsidising unoriginal work in these journals was calculated to approximate ZAR7 million for the period under review. As academics are expected to be role models of ethical behaviour for students, such a finding is disturbing and has implications for the reputations of the institutions to which the authors are affiliated, as well as for the journals that publish articles containing plagiarised material.
Introduction
In 2003, an editorial 1 in this journal alerted readers to the developing concern about misconduct in the sciences, and acknowledged that the extent of such misconduct and its various manifestations were largely unknown. In 2012, Honig and Bedi 2 published the findings of a study in the prestigious Academy of Management Learning and Education journal in which they examined 279 papers submitted for the 2009 Academy of Management conference. They found that 25% of papers contained some degree of plagiarism, with over 13% evidencing significant plagiarism (defined as comprising 5% or more of the content). In addition, they reported that a greater amount of plagiarism appeared to emanate from countries outside North America. Against the background of these studies, and given the paucity of research relating to this problem, in the present study, located in a country outside North America, we have attempted to contribute to deliberations in this area.
The objective of the study was to investigate the degree of plagiarism evident in articles published in 2011 in South African management journals that attract subsidy from the Department of Higher Education and Training (DHET). As a subcategory of research dishonesty, plagiarism is the representation of the work of another, or of one's own previous work, without acknowledgement of such work, and can include careless paraphrasing, the copying of identical text or providing incomplete references that mislead the reader into believing that the ideas expressed belong to the author of the text. 2,3 Over the past years, student plagiarism has commanded much research attention 4-8 , with increasing focus on the detection of plagiarism 9 and ways of addressing it 4 . However, relatively little has been published about plagiarism committed by academics 10-13 , with research thus far regarded as largely anecdotal and speculative 2 . Plagiarism is intellectual theft 14 and transgresses the fundamental values of the academy 15 , preventing learning and the dissemination of new knowledge and compromising the integrity of the scientific record 16 . Schminke 13 notes how plagiarism is sometimes committed by experienced and established authors, with the blame apportioned to junior co-authors.
The DHET remits approximately ZAR120 000 to higher education institutions for each peer-reviewed academic article published by a member of the institution in any of the local or international journals that appear on a list compiled by the DHET each year; this funding is an essential income stream for universities. 17 Accordingly, increasing pressure has been placed on academics to publish in these accredited journals; and such publication is usually linked to financial and promotional rewards. 1,18 This pressure can contribute to a research culture in which output is promoted at the expense of research quality, which can manifest as plagiarism by those who attempt to achieve the greatest publication output in the shortest time. 19 In this regard, self-plagiarism -which portrays previous work as new -also contributes to this problem. 20 Academics have a role to play in developing student moral literacy 21 and a link has been shown to exist between the dishonesty of academics and student cheating behaviour 22 . Furthermore, academics have been found to be reluctant to report and take action on student academic dishonesty. 23 Accordingly, it is important to understand research integrity or the lack thereof amongst academics themselves.
Methods
We submitted 371 peer-reviewed articles that were published in 2011 in 19 South African management journals (spanning the major fields of management) through the Turnitin™ software program to identify similarities between the articles and other published material, i.e. to identify plagiarism. Once a manuscript is submitted to the program, it is compared against billions of Internet pages, online publications, journal articles and student assignments, dissertations and theses, and a report is generated that highlights the actual text that has been copied and indicates the percentage of similarity between that manuscript and those documents that appear on the Turnitin™ database. In the remainder of this article, this percentage is referred to as the similarity index.
Only South African journals that appeared on the Thomson Reuters Web of Science (WoS, previously ISI) or the International Bibliography of the Social Sciences (IBSS) lists or on the local list of journals compiled by the DHET, thereby qualifying for subsidy, were included in the study. Two journals (not included in the 19), containing 17 articles, could not be accessed. The results for each article were checked twice and a conservative approach was adopted in the interpretation of the similarity indices, in which the benefit of doubt was in favour of the authors. For each article, the following content was not included in the assessment of similarity: bibliography/list of references, quotations, strings of words of less than 10, student write-ups on which the article was based, conference proceedings and abstracts detailing the main features of the article. In addition, during the second inspection of the data, specific methodological terms and statistical or mathematical formulae were excluded in the analysis of similarity. The Turnitin™ software program has been used in other studies to detect plagiarism. 2,24 It has been reported that the Turnitin™ program itself is conservative in the generation of the results. 25
Results
Across the 371 submissions, the similarity index (i.e. the percentage of similarity between an article and the documents in the Turnitin™ database) ranged from 1 (indicating almost no similarity) to 91 (indicating almost complete similarity). The latter pertained to a single article that was published in two journals under two different titles. Figure 1 shows that the distribution of the similarity index across the 371 submissions was positively skewed. In addition, several outliers were detected, which called for the use of robust statistics in subsequent analyses. 26 The mean similarity index across the 371 submissions was 17.10 (SD=12.15), the mode was 9, the median was 14 and the 20% trimmed mean was 14.70 (95% confidence intervals: 13.61 and 15.89, Winsorised SD=6.67). To gain an overview of the relative frequency of plagiarism we categorised the similarity indices as follows: 1 to 9 as low; 10 to 14 as moderate; 15 to 24 as high and >24 as excessive. Table 1 summarises the frequencies in these categories. The most striking aspect of Table 1 is the proportion of submissions that fell into the high (27.2% of the submissions) and excessive (21.3% of the submissions) categories. Whereas one might have expected the bulk of the submissions to fall into the low to moderate categories, the results show that high levels of plagiarism are relatively common in these journals. If we use a cut-off point of 9% for the similarity index, then it is evident that 68.2% of the submissions were above the cut-off point. It is noteworthy that 21.3% of the submissions contained an excessive amount of similarity.
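For readers who wish to reproduce these summary statistics, the following is a minimal sketch (not the authors' code), assuming the 371 similarity indices are available as a Python array; the placeholder values below are hypothetical. The exact Winsorisation conventions used by the authors may differ slightly.

```python
import numpy as np
from scipy import stats

# Hypothetical placeholder values; the study used the 371 observed similarity indices.
similarity = np.array([1, 9, 14, 17, 23, 27, 35, 91], dtype=float)

trimmed_mean = stats.trim_mean(similarity, proportiontocut=0.20)       # 20% trimmed mean
winsorised = stats.mstats.winsorize(similarity, limits=(0.20, 0.20))   # 20% Winsorisation
winsorised_sd = winsorised.std(ddof=1)                                 # Winsorised SD

# Frequency categories as defined for Table 1
categories = {
    "low (1-9)":        int(((similarity >= 1)  & (similarity <= 9)).sum()),
    "moderate (10-14)": int(((similarity >= 10) & (similarity <= 14)).sum()),
    "high (15-24)":     int(((similarity >= 15) & (similarity <= 24)).sum()),
    "excessive (>24)":  int((similarity > 24).sum()),
}
print(trimmed_mean, winsorised_sd, categories)
```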
We compared the 20% trimmed means of the similarity indices among the types of submissions. For submissions to journals in the DHET list (n=201), the trimmed mean=13.69 and Winsorised SD=6.15; for submissions to journals indexed in WoS (n=62), the trimmed mean=14.84 and the Winsorised SD=5.65; and for submissions to journals on the IBSS list (n=108), the trimmed mean=16.71 and the Winsorised SD=7.90. Robust ANOVA 26 showed that there were no statistically significant differences in the trimmed means across the different journal categories (F=2.2, df1=2, df2=96, p=0.11).
We also isolated the 10 journals with at least 20 submissions during the period under review (n=270 submissions). Across these journals the trimmed means of the similarity index ranged from 11.67 to 27.24.
Robust ANOVA 26 revealed statistically significant differences in the trimmed means (F=2.6, df1=9, df2=62, p=0.012), with a medium effect size (ξ=0.40). Robust post-hoc tests 26 revealed that the differences could be traced to excessively high levels of similarity in one journal only (i.e. the journal with a trimmed mean similarity index of 27).
We also examined whether single versus multiple authorship played a role in the similarity index of an article. The difference in trimmed means between three categories of authorship -single (n=169, trimmed mean=15.75, Winsorised SD=6.76), dual (n=148, trimmed mean=15.42, Winsorised SD=7.08) and three or more authors (n=54, trimmed mean=10.65, Winsorised SD=4.28) -was statistically significant (F=9.6, df1=2, df2=115, p=0.0001) with a medium effect size (ξ=0.32). Robust post-hoc tests revealed that the similarity index of articles with three or more authors was significantly smaller than that of a single or dual authored article. No significant difference between single and dual authored articles was observed.
We complemented the three robust analyses of variance reported above with standard analyses of variance and non-parametric Kruskal-Wallis tests, both of which yielded a pattern of results similar to that of the robust tests.
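As an illustration only, the complementary standard ANOVA and Kruskal-Wallis checks mentioned above could be run as follows; the group lists stand in for the per-journal-list similarity indices and are hypothetical, not the study data.

```python
from scipy import stats

# Hypothetical per-group similarity indices (e.g. DHET-, WoS- and IBSS-listed journals).
dhet = [12, 15, 9, 22, 14, 18]
wos  = [13, 18, 11, 16, 20]
ibss = [20, 14, 25, 17, 19]

f_stat, p_anova = stats.f_oneway(dhet, wos, ibss)   # standard one-way ANOVA
h_stat, p_kw    = stats.kruskal(dhet, wos, ibss)    # non-parametric Kruskal-Wallis test
print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.3f}; "
      f"Kruskal-Wallis: H={h_stat:.2f}, p={p_kw:.3f}")
```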
Discussion
The findings also indicate that although one journal appeared to contain more plagiarised articles than the others, the problem of plagiarism existed across the board. The type of journal (i.e. whether it appears on the DHET, WoS or IBSS lists) was not a factor in the level of plagiarism. However, the findings indicated that articles submitted by three or more authors contained significantly less plagiarised material than did those articles submitted by a single author or by dual authors. A possible explanation for this finding is that potential plagiarism can be more readily detected and corrected when several authors are involved. Conversely, a single author may more easily be able to hide plagiarised work.
We suggest that the intense pressure on universities and their academics to increase their research output within short time periods plays a role in this problem. In addition, academics are rewarded in a variety of ways for such output 1,19 , which can contribute to a culture of expedience and opportunism 18 .
An additional problem of governance also emerges when one considers the payment of government subsidy to universities based on research output. If at least one author of an article is affiliated to a South African higher education institution, government will pay a research subsidy of ZAR120 000 per article, which may be proportionally split according to the institutional affiliation of the authors. Excluding those articles submitted by authors not affiliated to a South African higher education institution (n=47), it was estimated that government paid ZAR32 400 000 in subsidies for articles published in these 19 journals during the period under review. Given that 21.3% of these articles contained excessive plagiarism, a government subsidy of almost ZAR7 000 000 was paid for questionable publications.
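A quick back-of-the-envelope check of the figure quoted above (the ZAR32.4 million total and the 21.3% excessive-similarity share are taken from the text and Table 1):

```python
# Figures quoted in the text; this is only an arithmetic check, not new data.
total_subsidy = 32_400_000   # ZAR paid for the locally affiliated articles
excessive_share = 0.213      # proportion of submissions with excessive similarity
print(f"ZAR{total_subsidy * excessive_share:,.0f}")  # ~ZAR6,901,200, i.e. almost ZAR7 million
```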
The problem of human error in data coding always exists in studies such as this one, but we tried to minimise this risk by checking the data twice. The findings indicate the existence of plagiarism in the published articles we submitted for study. This finding has implications for government, for the universities to which the authors are affiliated and for the journals themselves.
The culture of research expediency that may be developing in academic institutions in order to increase subsidised research output can have long-term implications for the reputation of universities. Their contribution to society can also be compromised in terms of both the dissemination of new knowledge and the upholding of moral values transmitted through the students who graduate from these institutions and who can be expected to be influenced by unethical role models. 22 It is critical that the DHET engages with universities to devise measures to subsidise research output without inadvertently promoting the sacrifice of research quality and encouraging shortcuts such as plagiarism. In a similar vein, internal rewards to academics should not be based on the quantity of research output without considering that a greater contribution could be made by researchers who publish fewer articles but in highly cited journals with greater stringency in requirements pertaining to quality. It is also recommended that, in order to preserve the reputation of journals, editors subject manuscripts to plagiarism detection through software programs and that the penalties for detected plagiarism be severe for authors.
It is recommended that future studies of this nature explore the extent of plagiarism (if any) in journals related to other disciplines in order to ascertain whether this problem is pervasive in other fields as well. In addition, a qualitative study of the experiences of journal editors in addressing plagiarism may throw some light on how the extent of plagiarism noted in this study managed to appear in articles that are deemed to contain original material for which the DHET remits subsidy to academic institutions.
Figure 1: Distribution of the similarity index across 371 submissions.
Table 1: Similarity according to extent in categories.
|
v3-fos-license
|
2020-10-28T18:55:31.686Z
|
2020-10-06T00:00:00.000
|
225113875
|
{
"extfieldsofstudy": [
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1422-8599/2020/4/M1158/pdf",
"pdf_hash": "7fce5ac7cab6ac6a749f8a1240e2f3edb669936a",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42495",
"s2fieldsofstudy": [
"Chemistry"
],
"sha1": "20fdc609d170b8c06fe63b1a77b9da7a85133e14",
"year": 2020
}
|
pes2o/s2orc
|
3-Hydroxy-2-iodophenyl-(4-methylbenzenesulfonate)
3-Hydroxy-2-iodophenyl-(4-methylbenzenesulfonate) was synthesized via a three-step procedure, starting from commercially available resorcinol, with an overall yield of 65%. The structures of the products were determined by 1H and 13C NMR, HRMS and IR.
Introduction
Halogenated hydroxyphenyl sulfonates have been considered important building blocks for the construction of functionalized molecules. Based on the difference in cleavage reactivity between C-X and C-S bonds, halogenated phenyl sulfonates can undergo highly selective reactions in transition metal-catalyzed processes [1-4]. Arenes with adjacent halogen and sulfonyloxy groups act as precursors of aryne species in alkaline systems [4,5]. The free phenolic hydroxyl enables various approaches to further conversion [6-8].
Results and Discussion
The target compound was synthesized in three steps using commercially available resorcinol as the starting material. Iodination at position 2 of resorcinol was carried out to produce 2-iodoresorcinol 1 using iodine in water [9]. Sodium bicarbonate (NaHCO3) was used to remove the hydroiodic acid produced. The direct monotosylation of compound 1 to 3 failed. Owing to the symmetrical structure and the comparable acidity of the two hydroxyl groups in compound 1, there was no selectivity in monotosylation when only one equivalent of p-toluenesulfonyl chloride was used, and a mixture of di-, mono-, and non-sulfonated compounds was obtained.
On the contrary, the selective hydrolysis of iodophenyl bissulfonate 2 is an effective method for obtaining the target compound 3. By treatment with cesium carbonate in 1,2-dimethoxyethane as solvent, 2-iodoresorcinol bis(trifluoromethanesulfonate) was desulfonylated on one side only [10,11]. Clark, Jr. et al. achieved the preparation of 3-hydroxy-5-iodophenyl-(4-methylbenzenesulfonate) via the selective hydrolysis of the symmetric substrate [12]. To the best of our knowledge, the monodesulfonylation of compound 2 is still unreported to date. In this research, compound 1 was sulfonylated with two equivalents of p-toluenesulfonyl chloride in the presence of potassium carbonate to generate phenyl bissulfonate 2 stoichiometrically. Compound 3 was then obtained with an 87% yield by selective hydrolysis with potassium hydroxide in methanol at gradient temperature. It is noteworthy that a general workup without further chromatographic purification of the reaction residue could provide satisfactory purity for 3. The synthetic procedure is shown in Scheme 1.
Materials and Methods
Unless otherwise noted, all the starting materials were commercially available and were used without further purification. 1 H and 13 C NMR spectra were recorded on a Bruker DMX400 (400 MHz) or Bruker DMX300 (300 MHz) in CDCl3 solutions and with tetramethylsilane as an internal standard. High-resolution electrospray ionization mass spectra were recorded on a Shimadzu HRMS-EI-TOF. Infrared spectra were obtained on a Nicolet iS5. All the spectra of the products can be found in the Supplementary Materials.
2-Iodoresorcinol (1)
Iodine (27.69 g, 109 mmol) was dispersed in an aqueous solution (80 mL) of resorcinol (11.00 g, 100 mmol) in a round-bottom flask open to the atmosphere. The flask was placed in an ice-water bath, and sodium bicarbonate (9.24 g, 110 mmol) was added in portions with a spatula over 10 min at 0 °C. Vigorous gas evolution and jellying of the mixture were observed during the addition. It was of crucial importance to ensure effective stirring; if necessary, increasing the amount of water was helpful. The ice bath was removed, and the mixture was warmed to room temperature, followed by an additional 10 min of stirring at ambient temperature. The slurry was extracted three times with ethyl acetate. The combined organic layer was successively washed with 10% aqueous sodium thiosulfate solution and brine, dried over anhydrous sodium sulfate, filtered, and concentrated with a rotary evaporator. The dark brown residue was triturated in cold chloroform (-10 °C, 30 mL) for 10 min, filtered, and washed with chloroform at the same temperature to provide 2-iodoresorcinol (1) as a cream-colored solid (17.70 g, 75%). M.p. = 99-101 °C. 1
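As a quick sanity check (not part of the original procedure), the reported 75% yield can be reproduced from the quoted masses using approximate molar masses:

```python
# Approximate molar masses (g/mol); masses and moles are quoted from the procedure above.
M_RESORCINOL = 110.11        # C6H6O2
M_IODORESORCINOL = 236.01    # C6H5IO2 (compound 1)

moles_resorcinol = 11.00 / M_RESORCINOL                 # ~0.100 mol, the limiting reagent
theoretical_mass = moles_resorcinol * M_IODORESORCINOL  # ~23.6 g
print(f"{100 * 17.70 / theoretical_mass:.0f}% yield")   # ~75%, as reported
```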
3-Hydroxy-2-iodophenyl-(4-methylbenzenesulfonate) (3)
To a suspension of 2-iodo-1,3-phenylene bis(4-methylbenzenesulfonate) 2 (21.77 g, 40 mmol) in methanol (100 mL) was added, dropwise, a solution of potassium hydroxide (4.66 g, 83.2 mmol) in water (2.3 mL) and methanol (210 mL) at 35 °C in a 1 L Erlenmeyer flask. After the addition, the mixture was stirred continuously for about 3 h until compound 2 was no longer detectable by TLC. The mixture was then heated to 45 °C for an additional 20 min, cooled to room temperature and diluted to 800 mL with distilled water. After filtration, the liquid layer was neutralized with hydrochloric acid (5%) and stored at 4 °C for 48 h. The precipitate was filtered, dissolved in diethyl ether and extracted into aqueous sodium hydroxide (10%). A yellow oil formed under the aqueous layer; it was separated, washed with diethyl ether and neutralized with hydrochloric acid (5%). A large amount of white suspension formed and was extracted twice with diethyl ether. The combined organic solution was dried over anhydrous magnesium sulfate, filtered and concentrated under vacuum to yield 3 as a white solid (13.57 g, 87%). M.p. = 97-98 °C. 1
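The same sanity check applied to the hydrolysis step reproduces the reported 87% yield (again using approximate molar masses, not figures from the paper beyond the quoted masses):

```python
# Approximate molar masses (g/mol); masses are quoted from the procedure above.
M_BISSULFONATE = 544.38   # compound 2, C20H17IO6S2
M_PRODUCT = 390.19        # compound 3, C13H11IO4S

moles_2 = 21.77 / M_BISSULFONATE                            # ~40 mmol, as stated
print(f"{100 * 13.57 / (moles_2 * M_PRODUCT):.0f}% yield")  # ~87%, as reported
```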
Conclusions
The novel 3-hydroxy-2-iodophenyl-(4-methylbenzenesulfonate) was obtained in three steps, starting from commercially available resorcinol, and was isolated easily in good yield. The target compound could be useful for various applications in organic chemistry, pharmaceutical synthesis, and related fields.
Funding:
This work was financially supported by the National Natural Science Foundation of China (20972037) and the Excellent Young Teacher Support Program of Hangzhou Normal University.
|
v3-fos-license
|